Class ChatOpenAI
java.lang.Object
com.hw.langchain.chat.models.base.BaseChatModel
com.hw.langchain.chat.models.openai.ChatOpenAI
- All Implemented Interfaces:
BaseLanguageModel
Wrapper around OpenAI Chat large language models.
- Author:
- HamaWhite
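As a quick orientation, the following is a minimal usage sketch rather than an example taken from this page: it assumes the no-argument constructor below applies the library's defaults, that init() can resolve the OpenAI API key (for instance from the OPENAI_API_KEY environment variable) when the openaiApiKey field is not set, and that the inherited predict(String) method returns the model's reply as a String.

    // Minimal sketch, assuming defaults and an API key visible to init().
    import com.hw.langchain.chat.models.openai.ChatOpenAI;

    public class ChatOpenAIQuickstart {
        public static void main(String[] args) {
            ChatOpenAI chat = new ChatOpenAI();
            chat.init();  // validate parameters and initialize the OpenAI client
            String reply = chat.predict("Say hello in French.");  // inherited from BaseLanguageModel
            System.out.println(reply);
        }
    }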
Field Summary
Fields
- protected OpenAiClient client
- protected int maxRetries: Maximum number of retries to make when generating.
- protected Integer maxTokens: Maximum number of tokens to generate.
- protected String model: Model name to use.
- modelKwargs: Holds any model parameters valid for `create` call not explicitly specified.
- protected int n: Number of chat completions to generate for each prompt.
- protected String openaiApiBase: Base URL path for API requests, leave blank if not using a proxy or service emulator.
- protected String openaiApiKey
- protected OpenaiApiType openaiApiType
- protected String openaiApiVersion
- protected String openaiOrganization
- protected String openaiProxy: To support explicit proxy for OpenAI.
- protected long requestTimeout: Timeout for requests to OpenAI completion API.
- protected boolean stream: Whether to stream the results or not.
- protected float temperature: What sampling temperature to use.
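The fields above are protected, so they are normally populated through the library's own construction path; purely to illustrate what each parameter controls, here is a hypothetical subclass sketch (the subclass name and the chosen values are illustrative, not part of this API).

    // Hypothetical subclass used only to illustrate the protected fields listed above.
    import com.hw.langchain.chat.models.openai.ChatOpenAI;

    class TunedChatOpenAI extends ChatOpenAI {
        TunedChatOpenAI() {
            this.model = "gpt-3.5-turbo"; // model name to use
            this.temperature = 0.2f;      // sampling temperature
            this.n = 1;                   // chat completions generated per prompt
            this.maxTokens = 256;         // maximum number of tokens to generate
            this.requestTimeout = 30;     // seconds to wait for the completion API
        }
    }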
Constructor Summary
Constructors
- ChatOpenAI()
Method Summary
- combineLlmOutputs(List<Map<String, Object>> llmOutputs)
- convertMessages(List<BaseMessage> messages)
- createChatResult(ChatCompletionResp response)
- init(): Validate parameters and init client
- innerGenerate(List<BaseMessage> messages, List<String> stop): Top Level call
- llmType(): Return type of chat model.

Methods inherited from class com.hw.langchain.chat.models.base.BaseChatModel
call, call, generate, generate, generatePrompt, predict, predictMessages

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel
asyncGeneratePrompt, asyncGeneratePrompt, asyncPredict, asyncPredict, asyncPredictMessages, predict, predictMessages
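The inherited call, generate, predict, and predictMessages methods are where chat interaction actually happens. The sketch below shows one plausible way to use predictMessages; the HumanMessage and SystemMessage classes, their String constructors, and the getContent() accessor are assumed to come from com.hw.langchain.schema and are not documented on this page.

    // Sketch of chat-style use through the inherited predictMessages method.
    // Message classes are assumed to live in com.hw.langchain.schema.
    import java.util.List;

    import com.hw.langchain.chat.models.openai.ChatOpenAI;
    import com.hw.langchain.schema.BaseMessage;
    import com.hw.langchain.schema.HumanMessage;
    import com.hw.langchain.schema.SystemMessage;

    public class ChatMessagesSketch {
        public static void main(String[] args) {
            ChatOpenAI chat = new ChatOpenAI();
            chat.init();
            List<BaseMessage> messages = List.of(
                    new SystemMessage("You are a concise assistant."),
                    new HumanMessage("Explain what a chat model is in one sentence."));
            BaseMessage reply = chat.predictMessages(messages);
            System.out.println(reply.getContent());
        }
    }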
Field Details

client
protected OpenAiClient client

model
protected String model
Model name to use.

temperature
protected float temperature
What sampling temperature to use.

modelKwargs
Holds any model parameters valid for `create` call not explicitly specified.

openaiApiKey
protected String openaiApiKey

openaiApiBase
protected String openaiApiBase
Base URL path for API requests, leave blank if not using a proxy or service emulator.

openaiApiType
protected OpenaiApiType openaiApiType

openaiApiVersion
protected String openaiApiVersion

openaiOrganization
protected String openaiOrganization

openaiProxy
protected String openaiProxy
To support explicit proxy for OpenAI.

requestTimeout
protected long requestTimeout
Timeout for requests to OpenAI completion API. Default is 16 seconds.

maxRetries
protected int maxRetries
Maximum number of retries to make when generating.

stream
protected boolean stream
Whether to stream the results or not.

n
protected int n
Number of chat completions to generate for each prompt.

maxTokens
protected Integer maxTokens
Maximum number of tokens to generate.
Constructor Details
ChatOpenAI
public ChatOpenAI()
Method Details
init
Validate parameters and init client.
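In other words, whatever sets the parameters (defaults, a builder, or something like the hypothetical TunedChatOpenAI sketch shown under Field Summary) must run before init(), since init() is the step that validates those values and constructs the OpenAiClient held in the protected client field. A short sketch of that ordering:

    // Sketch: parameters first, then init(), then generation calls.
    ChatOpenAI chat = new TunedChatOpenAI();  // hypothetical subclass from the Field Summary sketch
    chat.init();                              // validate parameters and build the client
    String answer = chat.predict("Ready?");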
combineLlmOutputs
- Overrides:
combineLlmOutputs in class BaseChatModel
innerGenerate
Description copied from class: BaseChatModel
Top Level call
- Specified by:
innerGenerate in class BaseChatModel
convertMessages
Converts a list of BaseMessage objects into the message format expected by the OpenAI chat completion request.

createChatResult
Builds the chat generation result from an OpenAI ChatCompletionResp response.
llmType
Description copied from class: BaseChatModel
Return type of chat model.
- Specified by:
llmType in class BaseChatModel