Class ChatOpenAI

java.lang.Object
    com.hw.langchain.chat.models.base.BaseChatModel
        com.hw.langchain.chat.models.openai.ChatOpenAI

- All Implemented Interfaces:
  BaseLanguageModel

Wrapper around OpenAI Chat large language models.

- Author:
  HamaWhite
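Example of use (a minimal sketch, assuming the Lombok-generated builder() on this class, that init() returns the configured instance for chaining, and the inherited predict(String) convenience method; the model name, temperature, and prompt below are illustrative):

    import com.hw.langchain.chat.models.openai.ChatOpenAI;

    public class ChatOpenAIQuickstart {
        public static void main(String[] args) {
            // Build the chat model, then call init() to validate parameters and create the client.
            // The API key is assumed to be available via the OPENAI_API_KEY environment variable.
            var chat = ChatOpenAI.builder()
                    .model("gpt-3.5-turbo")   // Model name to use
                    .temperature(0.7f)        // What sampling temperature to use
                    .build()
                    .init();

            // predict is inherited from BaseChatModel / BaseLanguageModel.
            String answer = chat.predict("Say hello in French.");
            System.out.println(answer);
        }
    }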
Field Summary

Modifier and Type          Field                 Description
protected OpenAiClient     client
protected int              maxRetries            Maximum number of retries to make when generating.
protected Integer          maxTokens             Maximum number of tokens to generate.
protected String           model                 Model name to use.
                           modelKwargs           Holds any model parameters valid for `create` call not explicitly specified.
protected int              n                     Number of chat completions to generate for each prompt.
protected String           openaiApiBase
protected String           openaiApiKey          Base URL path for API requests, leave blank if not using a proxy or service emulator.
protected OpenaiApiType    openaiApiType
protected String           openaiApiVersion
protected String           openaiOrganization
protected String           openaiProxy           To support explicit proxy for OpenAI.
protected long             requestTimeout        Timeout for requests to OpenAI completion API.
protected boolean          stream                Whether to stream the results or not.
protected float            temperature           What sampling temperature to use.
Constructor Summary

Constructor          Description
ChatOpenAI()
Method Summary

Method                                                          Description
combineLlmOutputs(List<Map<String, Object>> llmOutputs)
convertMessages(List<BaseMessage> messages)
createChatResult(ChatCompletionResp response)
init()                                                          Validate parameters and init client
innerGenerate(List<BaseMessage> messages, List<String> stop)    Top Level call
llmType()                                                       Return type of chat model.

Methods inherited from class com.hw.langchain.chat.models.base.BaseChatModel
call, call, generate, generate, generatePrompt, predict, predictMessages

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait

Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel
asyncGeneratePrompt, asyncGeneratePrompt, asyncPredict, asyncPredict, asyncPredictMessages, predict, predictMessages
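The inherited message-based methods accept lists of BaseMessage. A sketch of predictMessages, assuming HumanMessage and SystemMessage from com.hw.langchain.schema with constructors that take the message text and a getContent() accessor (those details are not documented on this page and should be checked against the schema package):

    import com.hw.langchain.chat.models.openai.ChatOpenAI;
    import com.hw.langchain.schema.BaseMessage;
    import com.hw.langchain.schema.HumanMessage;
    import com.hw.langchain.schema.SystemMessage;

    import java.util.List;

    public class PredictMessagesExample {
        public static void main(String[] args) {
            var chat = ChatOpenAI.builder().build().init();

            // A chat-style prompt expressed as schema messages.
            // HumanMessage/SystemMessage constructors taking the text are assumed here.
            List<BaseMessage> messages = List.of(
                    new SystemMessage("You are a helpful assistant that translates English to French."),
                    new HumanMessage("I love programming."));

            // predictMessages is declared by BaseLanguageModel and implemented in BaseChatModel.
            BaseMessage reply = chat.predictMessages(messages);
            System.out.println(reply.getContent());
        }
    }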
Field Details

client
protected OpenAiClient client

model
protected String model
Model name to use.

temperature
protected float temperature
What sampling temperature to use.

modelKwargs
Holds any model parameters valid for `create` call not explicitly specified.

openaiApiKey
protected String openaiApiKey
Base URL path for API requests, leave blank if not using a proxy or service emulator.

openaiApiBase
protected String openaiApiBase

openaiApiType
protected OpenaiApiType openaiApiType

openaiApiVersion
protected String openaiApiVersion

openaiOrganization
protected String openaiOrganization

openaiProxy
protected String openaiProxy
To support explicit proxy for OpenAI.

requestTimeout
protected long requestTimeout
Timeout for requests to OpenAI completion API. Default is 16 seconds.

maxRetries
protected int maxRetries
Maximum number of retries to make when generating.

stream
protected boolean stream
Whether to stream the results or not.

n
protected int n
Number of chat completions to generate for each prompt.

maxTokens
protected Integer maxTokens
Maximum number of tokens to generate.
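The fields above are normally populated through the builder rather than set directly; a configuration sketch in which every setter name mirrors a field documented on this page (the builder itself and the concrete values are illustrative assumptions):

    import com.hw.langchain.chat.models.openai.ChatOpenAI;

    public class ChatOpenAIConfig {
        public static void main(String[] args) {
            var chat = ChatOpenAI.builder()
                    .model("gpt-3.5-turbo")                         // model: model name to use
                    .temperature(0.2f)                              // temperature: sampling temperature
                    .maxTokens(256)                                 // maxTokens: maximum number of tokens to generate
                    .n(1)                                           // n: chat completions per prompt
                    .maxRetries(6)                                  // maxRetries: retries when generating
                    .requestTimeout(16)                             // requestTimeout: timeout for the completion API, in seconds
                    .openaiApiKey(System.getenv("OPENAI_API_KEY"))  // explicit key instead of relying on client defaults
                    .build()
                    .init();                                        // validate parameters and init client

            // The model is now ready for the call/generate/predict methods inherited from BaseChatModel.
        }
    }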
Constructor Details

ChatOpenAI
public ChatOpenAI()
Method Details

init
Validate parameters and init client

combineLlmOutputs
- Overrides:
  combineLlmOutputs in class BaseChatModel

innerGenerate
Description copied from class: BaseChatModel
Top Level call
- Specified by:
  innerGenerate in class BaseChatModel

convertMessages

createChatResult

llmType
Description copied from class: BaseChatModel
Return type of chat model.
- Specified by:
  llmType in class BaseChatModel