Package com.hw.langchain.llms.openai
Class BaseOpenAI
java.lang.Object
com.hw.langchain.llms.base.BaseLLM
com.hw.langchain.llms.openai.BaseOpenAI
- All Implemented Interfaces:
BaseLanguageModel
- Direct Known Subclasses:
OpenAI
Wrapper around OpenAI large language models.
- Author: HamaWhite
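A minimal usage sketch, assuming the fluent builder and init() method exposed by the concrete OpenAI subclass (builder setter names are assumed to mirror the protected fields documented below; init() is assumed to construct the underlying OpenAiClient):

    import com.hw.langchain.llms.openai.OpenAI;

    public class OpenAIExample {
        public static void main(String[] args) {
            // Builder setters are assumed to mirror the protected fields below.
            var llm = OpenAI.builder()
                    .openaiApiKey(System.getenv("OPENAI_API_KEY"))
                    .model("text-davinci-003")
                    .temperature(0.0f)
                    .maxTokens(256)
                    .build()
                    .init();

            // predict(...) is inherited from BaseLLM (see the method summary).
            System.out.println(llm.predict("Say hello in French."));
        }
    }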
Field Summary
protected Set<String> allowedSpecial
    Set of special tokens that are allowed.
protected int batchSize
    Batch size to use when passing multiple documents to generate.
protected int bestOf
    Generates best_of completions server-side and returns the "best".
protected OpenAiClient client
protected Set<String> disallowedSpecial
    Set of special tokens that are not allowed.
protected float frequencyPenalty
    Penalizes repeated tokens according to frequency.
protected List<okhttp3.Interceptor> interceptorList
    List of okhttp interceptors.
logitBias
    Adjust the probability of specific tokens being generated.
protected int maxRetries
    Maximum number of retries to make when generating.
protected int maxTokens
    The maximum number of tokens to generate in the completion.
protected String model
    Model name to use.
protected int n
    How many completions to generate for each prompt.
protected String openaiApiBase
    Base URL for the OpenAI API.
protected String openaiApiKey
    API key for OpenAI.
protected OpenaiApiType openaiApiType
    API type for the Azure OpenAI API.
protected String openaiApiVersion
    API version for the Azure OpenAI API.
protected String openaiOrganization
    Organization ID for OpenAI.
protected String openaiProxy
    Support explicit proxy for OpenAI.
protected float presencePenalty
    Penalizes repeated tokens.
protected String proxyPassword
    The password for proxy authentication (optional).
protected String proxyUsername
    The username for proxy authentication (optional).
protected long requestTimeout
    Timeout for requests to the OpenAI completion API.
protected boolean stream
    Whether to stream the results or not.
protected float temperature
    What sampling temperature to use.
protected float topP
    Total probability mass of tokens to consider at each step.
Constructor Summary
BaseOpenAI()
Method Summary
protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
    Run the LLM on the given prompts async.
protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
    Call out to OpenAI's endpoint with k unique prompts.
llmType()
    Return type of llm.

Methods inherited from class com.hw.langchain.llms.base.BaseLLM
asyncGeneratePrompt, asyncPredict, call, call, generate, generatePrompt, predict, predictMessages
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel
asyncGeneratePrompt, asyncPredict, asyncPredictMessages, predict, predictMessages
Field Details

client
protected OpenAiClient client

model
protected String model
Model name to use.

temperature
protected float temperature
What sampling temperature to use.

maxTokens
protected int maxTokens
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

topP
protected float topP
Total probability mass of tokens to consider at each step.

frequencyPenalty
protected float frequencyPenalty
Penalizes repeated tokens according to frequency.

presencePenalty
protected float presencePenalty
Penalizes repeated tokens.

n
protected int n
How many completions to generate for each prompt.

bestOf
protected int bestOf
Generates best_of completions server-side and returns the "best".
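These sampling fields work together: the OpenAI API generates best_of candidates per prompt and returns the n highest scoring ones, so bestOf should be at least n. A sketch reusing the assumed builder from the earlier example (all values illustrative):

    // Sketch: tuning the sampling fields documented above.
    var creative = OpenAI.builder()
            .temperature(0.9f)       // higher temperature -> more random sampling
            .topP(1.0f)              // keep the full probability mass in play
            .frequencyPenalty(0.5f)  // penalize tokens in proportion to their frequency
            .presencePenalty(0.2f)   // penalize tokens that have appeared at all
            .n(2)                    // return two completions per prompt...
            .bestOf(5)               // ...picked server-side from five candidates
            .build()
            .init();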
openaiApiKey
protected String openaiApiKey
API key for OpenAI.

openaiApiBase
protected String openaiApiBase
Base URL for the OpenAI API.

openaiApiType
protected OpenaiApiType openaiApiType
API type for the Azure OpenAI API.

openaiApiVersion
protected String openaiApiVersion
API version for the Azure OpenAI API.

openaiOrganization
protected String openaiOrganization
Organization ID for OpenAI.
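For Azure-hosted deployments, the openaiApi* fields are set together. The sketch below assumes OpenaiApiType exposes an AZURE constant; the endpoint and version strings are placeholders:

    // Sketch: pointing the client at an Azure OpenAI deployment.
    var azureLlm = OpenAI.builder()
            .openaiApiType(OpenaiApiType.AZURE)                    // assumed enum constant
            .openaiApiBase("https://my-resource.openai.azure.com") // placeholder resource endpoint
            .openaiApiVersion("2023-05-15")                        // placeholder API version
            .openaiApiKey(System.getenv("AZURE_OPENAI_API_KEY"))
            .build()
            .init();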
openaiProxy
protected String openaiProxy
Support explicit proxy for OpenAI.

proxyUsername
protected String proxyUsername
The username for proxy authentication (optional).

proxyPassword
protected String proxyPassword
The password for proxy authentication (optional).
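The three proxy fields combine as follows; all values here are placeholders:

    // Sketch: routing requests through an authenticated HTTP proxy.
    var proxied = OpenAI.builder()
            .openaiProxy("http://proxy.example.com:8080") // host and port of the proxy
            .proxyUsername("user")                        // optional credentials
            .proxyPassword("secret")
            .build()
            .init();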
batchSize
protected int batchSize
Batch size to use when passing multiple documents to generate.

requestTimeout
protected long requestTimeout
Timeout for requests to the OpenAI completion API. Default is 16 seconds.

logitBias
Adjust the probability of specific tokens being generated.

maxRetries
protected int maxRetries
Maximum number of retries to make when generating.

stream
protected boolean stream
Whether to stream the results or not.
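The batching, timeout, retry, and bias fields above tune reliability and throughput. In this sketch, logitBias is assumed to accept a map from token id (as a string) to an additive integer bias, mirroring the OpenAI REST parameter of the same name; all values are illustrative:

    import java.util.Map;

    // Sketch: reliability and throughput settings.
    var tuned = OpenAI.builder()
            .batchSize(20)                    // prompts per request when generating over many documents
            .requestTimeout(60)               // seconds; the documented default is 16
            .maxRetries(6)                    // retries for failed generation calls
            .stream(false)                    // set true to stream partial results
            .logitBias(Map.of("50256", -100)) // assumed shape: token id -> bias in [-100, 100]
            .build()
            .init();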
allowedSpecial
protected Set<String> allowedSpecial
Set of special tokens that are allowed.

disallowedSpecial
protected Set<String> disallowedSpecial
Set of special tokens that are not allowed.

interceptorList
protected List<okhttp3.Interceptor> interceptorList
List of okhttp interceptors.
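Interceptors let you observe or modify the client's HTTP traffic. A sketch using okhttp's optional logging-interceptor artifact (an assumed extra dependency, not part of this library):

    import java.util.List;
    import okhttp3.logging.HttpLoggingInterceptor;

    // Sketch: logging every request the client sends to the API.
    var logging = new HttpLoggingInterceptor();
    logging.setLevel(HttpLoggingInterceptor.Level.BASIC);

    var observable = OpenAI.builder()
            .interceptorList(List.of(logging))
            .build()
            .init();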
Constructor Details

BaseOpenAI
public BaseOpenAI()

Method Details
llmType
Description copied from class: BaseLLM
Return type of llm.
innerGenerate
protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
Call out to OpenAI's endpoint with k unique prompts.
- Specified by:
  innerGenerate in class BaseLLM
- Parameters:
  prompts - The prompts to pass into the model.
  stop - List of stop words to use when generating.
- Returns:
  The full LLM output.
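innerGenerate is protected; callers reach it through the public generate(...) inherited from BaseLLM, whose parameters are assumed here to mirror those above. A sketch reusing the llm instance from the first example:

    import java.util.List;

    // Sketch: batch generation with stop sequences via the inherited generate(...).
    LLMResult result = llm.generate(
            List.of("Tell me a joke.", "Tell me a poem."), // k unique prompts
            List.of("\n\n"));                              // stop word list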
asyncInnerGenerate
protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Description copied from class: BaseLLM
Run the LLM on the given prompts async.
- Specified by:
  asyncInnerGenerate in class BaseLLM
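The returned Flux emits results as they become available and is typically consumed through the inherited async methods rather than directly. A sketch, assuming asyncPredict(String) from BaseLLM relays the stream produced by asyncInnerGenerate as a Flux of generated text:

    // Sketch: consuming streamed output reactively.
    llm.asyncPredict("Write a haiku about the sea.")
       .subscribe(System.out::print); // prints chunks as they arrive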