Package com.hw.langchain.llms.openai
Class OpenAIChat
java.lang.Object
com.hw.langchain.llms.base.BaseLLM
com.hw.langchain.llms.openai.OpenAIChat
- All Implemented Interfaces:
BaseLanguageModel
Wrapper around OpenAI Chat large language models.
- Author:
HamaWhite
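
A minimal usage sketch (an addition, not from the generated page): it assumes the Lombok-style builder that langchain-java models expose, that init() wires up the underlying OpenAiClient, and that the API key comes from the openaiApiKey field or the OPENAI_API_KEY environment variable; the model name and parameter values are illustrative. predict is inherited from BaseLLM.

    import com.hw.langchain.llms.openai.OpenAIChat;

    public class OpenAIChatExample {
        public static void main(String[] args) {
            // Build and initialize the wrapper; setter names mirror the fields below.
            OpenAIChat llm = OpenAIChat.builder()   // builder() assumed (Lombok-style)
                    .model("gpt-3.5-turbo")         // illustrative model name
                    .temperature(0.7f)
                    .maxTokens(256)
                    .build()
                    .init();                        // init() creates the OpenAiClient

            // predict(String) is inherited from BaseLLM.
            System.out.println(llm.predict("Say hello in French."));
        }
    }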
Field Summary

protected OpenAiClient client

protected float frequencyPenalty
Penalizes repeated tokens according to frequency.

logitBias
Adjust the probability of specific tokens being generated.

protected int maxRetries
Maximum number of retries to make when generating.

protected int maxTokens
The maximum number of tokens to generate in the completion.

protected String model
Model name to use.

protected int n
How many completions to generate for each prompt.

protected String openaiApiBase
Base URL for OpenAI API.

protected String openaiApiKey
API key for OpenAI.

protected OpenaiApiType openaiApiType
API type for Azure OpenAI API.

protected String openaiApiVersion
API version for Azure OpenAI API.

protected String openaiOrganization
Organization ID for OpenAI.

protected String openaiProxy
Explicit proxy for OpenAI requests.

protected float presencePenalty
Penalizes repeated tokens.

protected long requestTimeout
Timeout for requests to OpenAI completion API.

protected boolean stream
Whether to stream the results or not.

protected float temperature
What sampling temperature to use.

protected float topP
Total probability mass of tokens to consider at each step.
Constructor Summary

OpenAIChat()
Method Summary

protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts asynchronously.

init()

protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts.

llmType()
Return type of llm.

Methods inherited from class com.hw.langchain.llms.base.BaseLLM:
asyncGeneratePrompt, asyncPredict, call, call, generate, generatePrompt, predict, predictMessages

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel:
asyncGeneratePrompt, asyncPredict, asyncPredictMessages, predict, predictMessages
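
The inherited generate method pairs each prompt with optional stop sequences, as the innerGenerate signature above suggests. A hedged sketch (the package and accessor names for LLMResult, getGenerations() and getText(), are assumptions not shown on this page):

    import com.hw.langchain.schema.LLMResult;    // package assumed
    import java.util.List;

    // `llm` is an initialized OpenAIChat (see the sketch near the top of the page).
    LLMResult result = llm.generate(
            List.of("Q: What is 2 + 2?\nA:"),    // prompts
            List.of("\n"));                      // stop sequences
    // One list of generations per input prompt (accessor names assumed).
    System.out.println(result.getGenerations().get(0).get(0).getText());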
Field Details

client
protected OpenAiClient client

model
protected String model
Model name to use.

temperature
protected float temperature
What sampling temperature to use.

maxTokens
protected int maxTokens
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.

topP
protected float topP
Total probability mass of tokens to consider at each step.

frequencyPenalty
protected float frequencyPenalty
Penalizes repeated tokens according to frequency.

presencePenalty
protected float presencePenalty
Penalizes repeated tokens.

n
protected int n
How many completions to generate for each prompt.

openaiApiKey
protected String openaiApiKey
API key for OpenAI.

openaiApiBase
protected String openaiApiBase
Base URL for OpenAI API.

openaiApiType
protected OpenaiApiType openaiApiType
API type for Azure OpenAI API.

openaiApiVersion
protected String openaiApiVersion
API version for Azure OpenAI API.

openaiOrganization
protected String openaiOrganization
Organization ID for OpenAI.

openaiProxy
protected String openaiProxy
Explicit proxy for OpenAI requests.

maxRetries
protected int maxRetries
Maximum number of retries to make when generating.

requestTimeout
protected long requestTimeout
Timeout for requests to the OpenAI completion API. Default is 16 seconds.

logitBias
Adjust the probability of specific tokens being generated.

stream
protected boolean stream
Whether to stream the results or not.
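
The Azure-related fields above (openaiApiType, openaiApiBase, openaiApiVersion) would be set together. A hedged configuration sketch; the OpenaiApiType.AZURE constant, the enum's package, and all endpoint and version strings are assumptions or placeholders:

    import com.hw.langchain.llms.openai.OpenAIChat;
    import com.hw.langchain.llms.openai.OpenaiApiType;   // package assumed

    OpenAIChat llm = OpenAIChat.builder()
            .openaiApiType(OpenaiApiType.AZURE)                      // AZURE constant assumed
            .openaiApiBase("https://<resource>.openai.azure.com/")   // placeholder base URL
            .openaiApiVersion("2023-05-15")                          // illustrative API version
            .openaiApiKey(System.getenv("AZURE_OPENAI_API_KEY"))
            .build()
            .init();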
Constructor Details

OpenAIChat
public OpenAIChat()
Method Details

init
init()

llmType
llmType()
Description copied from class: BaseLLM
Return type of llm.

innerGenerate
protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
Description copied from class: BaseLLM
Run the LLM on the given prompts.
Specified by: innerGenerate in class BaseLLM

asyncInnerGenerate
protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Description copied from class: BaseLLM
Run the LLM on the given prompts asynchronously.
Specified by: asyncInnerGenerate in class BaseLLM
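
Because asyncInnerGenerate is protected, callers would use the inherited asyncPredict instead. A hedged sketch assuming asyncPredict streams text chunks as a Reactor Flux, consistent with the Flux<AsyncLLMResult> return type above:

    import reactor.core.publisher.Flux;

    // `llm` is an initialized OpenAIChat; enable the stream field where required.
    Flux<String> tokens = llm.asyncPredict("Tell me a short joke.");  // return type assumed
    tokens.subscribe(
            System.out::print,                            // print each streamed chunk
            err -> System.err.println("error: " + err),   // surface failures
            () -> System.out.println("\n[done]"));        // completion signal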