Package com.hw.langchain.llms.openai
Class OpenAIChat
java.lang.Object
com.hw.langchain.llms.base.BaseLLM
com.hw.langchain.llms.openai.OpenAIChat
- All Implemented Interfaces:
BaseLanguageModel
Wrapper around OpenAI Chat large language models.
- Author:
HamaWhite
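A minimal usage sketch follows. The builder-style construction, the model name, and reading the API key from the OPENAI_API_KEY environment variable are assumptions for illustration; call(String) is inherited from BaseLLM (see Method Summary below).

    import com.hw.langchain.llms.openai.OpenAIChat;

    public class OpenAIChatExample {
        public static void main(String[] args) {
            // Sketch: configure sampling, then init() to wire up the
            // underlying OpenAiClient before the first request (assumed flow).
            var llm = OpenAIChat.builder()
                    .model("gpt-3.5-turbo")   // assumed model name
                    .temperature(0.7f)
                    .build()
                    .init();

            // call(String) runs a single prompt and returns the completion.
            String answer = llm.call("Say hello in one sentence.");
            System.out.println(answer);
        }
    }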
Field Summary
Fields:
- protected OpenAiClient client
- protected float frequencyPenalty: Penalizes repeated tokens according to frequency.
- logitBias: Adjusts the probability of specific tokens being generated.
- protected int maxRetries: Maximum number of retries to make when generating.
- protected int maxTokens: The maximum number of tokens to generate in the completion.
- protected String model: Model name to use.
- protected int n: How many completions to generate for each prompt.
- protected String openaiApiBase: Base URL for OpenAI API.
- protected String openaiApiKey: API key for OpenAI.
- protected OpenaiApiType openaiApiType: API type for Azure OpenAI API.
- protected String openaiApiVersion: API version for Azure OpenAI API.
- protected String openaiOrganization: Organization ID for OpenAI.
- protected String openaiProxy: Supports an explicit proxy for OpenAI.
- protected float presencePenalty: Penalizes repeated tokens.
- protected long requestTimeout: Timeout for requests to OpenAI completion API.
- protected boolean stream: Whether to stream the results or not.
- protected float temperature: What sampling temperature to use.
- protected float topP: Total probability mass of tokens to consider at each step.
Constructor Summary
Constructors:
- OpenAIChat()
Method Summary
Methods:
- protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop): Run the LLM on the given prompts asynchronously.
- init()
- protected LLMResult innerGenerate(List<String> prompts, List<String> stop): Run the LLM on the given prompts.
- llmType(): Return the type of LLM.

Methods inherited from class com.hw.langchain.llms.base.BaseLLM:
asyncGeneratePrompt, asyncPredict, call, call, generate, generatePrompt, predict, predictMessages

Methods inherited from class java.lang.Object:
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel:
asyncGeneratePrompt, asyncPredict, asyncPredictMessages, predict, predictMessages
Field Details
- client
protected OpenAiClient client
- model
protected String model
Model name to use.
- temperature
protected float temperature
What sampling temperature to use.
- maxTokens
protected int maxTokens
The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
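For example, a hedged sketch that defers the limit to the context window (the builder setter names are assumed to mirror the field names):

    // Sketch: -1 asks for as many completion tokens as the model's
    // context window allows for the given prompt.
    var llm = OpenAIChat.builder()
            .maxTokens(-1)
            .build()
            .init();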
- topP
protected float topP
Total probability mass of tokens to consider at each step.
- frequencyPenalty
protected float frequencyPenalty
Penalizes repeated tokens according to frequency.
- presencePenalty
protected float presencePenalty
Penalizes repeated tokens.
- n
protected int n
How many completions to generate for each prompt.
- openaiApiKey
protected String openaiApiKey
API key for OpenAI.
- openaiApiBase
protected String openaiApiBase
Base URL for OpenAI API.
- openaiApiType
protected OpenaiApiType openaiApiType
API type for Azure OpenAI API.
- openaiApiVersion
protected String openaiApiVersion
API version for Azure OpenAI API.
- openaiOrganization
protected String openaiOrganization
Organization ID for OpenAI.
- openaiProxy
protected String openaiProxy
Supports an explicit proxy for OpenAI.
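A sketch of routing traffic through a proxy or an alternative endpoint; both URLs below are illustrative placeholders, and the setter names are assumed to mirror the field names:

    // Sketch: point the client at a custom base URL and an explicit proxy.
    var llm = OpenAIChat.builder()
            .openaiApiBase("https://api.openai.com/v1/")  // placeholder endpoint
            .openaiProxy("http://127.0.0.1:7890")         // placeholder proxy
            .build()
            .init();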
- maxRetries
protected int maxRetries
Maximum number of retries to make when generating.
- requestTimeout
protected long requestTimeout
Timeout for requests to OpenAI completion API. Default is 16 seconds.
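To allow longer completions, the timeout can be raised; the sketch below assumes the value is in seconds, matching the 16-second default:

    // Sketch: raise the request timeout from 16 to 60 seconds.
    var llm = OpenAIChat.builder()
            .requestTimeout(60)
            .build()
            .init();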
- logitBias
Adjusts the probability of specific tokens being generated.
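The declared type of this field is not shown above. In the OpenAI API, logit_bias is a map from token IDs to bias values in [-100, 100], so a sketch under that assumption (requires java.util.Map):

    // Sketch, assuming a Map<String, Integer> keyed by token ID:
    // -100 effectively bans a token, +100 effectively forces it.
    var llm = OpenAIChat.builder()
            .logitBias(Map.of("50256", -100))  // hypothetical token ID
            .build()
            .init();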
- stream
protected boolean stream
Whether to stream the results or not.
Constructor Details
- OpenAIChat
public OpenAIChat()
Method Details
- init
- llmType
Description copied from class: BaseLLM
Return the type of LLM.
- innerGenerate
protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
Description copied from class: BaseLLM
Run the LLM on the given prompts.
Specified by: innerGenerate in class BaseLLM
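Since innerGenerate is protected, callers go through the public generate(...) inherited from BaseLLM. A sketch (requires java.util.List), with the generate signature assumed to mirror innerGenerate:

    // Sketch: generate(...) delegates to innerGenerate internally.
    LLMResult result = llm.generate(List.of("Hello!"), List.of("\n"));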
- asyncInnerGenerate
protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Description copied from class: BaseLLM
Run the LLM on the given prompts asynchronously.
Specified by: asyncInnerGenerate in class BaseLLM
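asyncInnerGenerate is likewise protected; the inherited asyncPredict (see BaseLanguageModel above) is the usual streaming entry point. A sketch, assuming asyncPredict returns a Reactor Flux of partial results:

    // Sketch: print chunks as they arrive; blockLast() keeps a demo
    // alive until the stream completes.
    llm.asyncPredict("Tell me a joke.")
            .doOnNext(System.out::print)
            .blockLast();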