Class BaseOpenAI

java.lang.Object
com.hw.langchain.llms.base.BaseLLM
com.hw.langchain.llms.openai.BaseOpenAI
All Implemented Interfaces:
BaseLanguageModel
Direct Known Subclasses:
OpenAI

public class BaseOpenAI extends BaseLLM
Wrapper around OpenAI large language models.
Author:
HamaWhite
  • Field Details

    • client

      protected OpenAiClient client
    • model

      protected String model
      Model name to use.
    • temperature

      protected float temperature
      What sampling temperature to use.
    • maxTokens

      protected int maxTokens
      The maximum number of tokens to generate in the completion. -1 returns as many tokens as possible given the prompt and the model's maximal context size.
    • topP

      protected float topP
      Total probability mass of tokens to consider at each step.
    • frequencyPenalty

      protected float frequencyPenalty
      Penalizes repeated tokens according to frequency.
    • presencePenalty

      protected float presencePenalty
      Penalizes repeated tokens.
    • n

      protected int n
      How many completions to generate for each prompt.
    • bestOf

      protected int bestOf
      Generates best_of completions server-side and returns the "best".
    • openaiApiKey

      protected String openaiApiKey
      API key for OpenAI.
    • openaiApiBase

      protected String openaiApiBase
      Base URL for OpenAI API.
    • openaiApiType

      protected OpenaiApiType openaiApiType
      API type for the Azure OpenAI API.
    • openaiApiVersion

      protected String openaiApiVersion
      API version for the Azure OpenAI API.
    • openaiOrganization

      protected String openaiOrganization
      Organization ID for OpenAI.
    • openaiProxy

      protected String openaiProxy
      Explicit proxy to use for OpenAI requests.
    • proxyUsername

      protected String proxyUsername
      The username for proxy authentication (optional).
    • proxyPassword

      protected String proxyPassword
      The password for proxy authentication (optional).
    • batchSize

      protected int batchSize
      Batch size to use when passing multiple documents to generate.
    • requestTimeout

      protected long requestTimeout
      Timeout for requests to OpenAI completion API. Default is 16 seconds.
    • logitBias

      protected Map<String,Float> logitBias
      Adjust the probability of specific tokens being generated.
    • maxRetries

      protected int maxRetries
      Maximum number of retries to make when generating.
    • stream

      protected boolean stream
      Whether to stream the results or not.
    • allowedSpecial

      protected Set<String> allowedSpecial
      Set of special tokens that are allowed.
    • disallowedSpecial

      protected Set<String> disallowedSpecial
      Set of special tokens that are not allowed.
    • interceptorList

      protected List<okhttp3.Interceptor> interceptorList
      List of OkHttp interceptors to apply to the HTTP client.
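
      The maxRetries field above bounds how many times a failed completion request is reattempted. A minimal sketch of that behavior in plain Java, assuming a simple retry-until-success loop; the retryCall helper is illustrative, not the library's actual retry code:

      ```java
      import java.util.concurrent.Callable;

      // Illustrative sketch of bounded retries, mirroring the maxRetries
      // field documented above. Not the library's implementation.
      public class RetrySketch {

          // Invoke the task, retrying up to maxRetries additional times on failure.
          static <T> T retryCall(Callable<T> task, int maxRetries) {
              Exception last = null;
              for (int attempt = 0; attempt <= maxRetries; attempt++) {
                  try {
                      return task.call();
                  } catch (Exception e) {
                      last = e; // remember the failure and try again
                  }
              }
              throw new RuntimeException("all " + (maxRetries + 1) + " attempts failed", last);
          }

          public static void main(String[] args) {
              int[] calls = {0};
              // Fails twice, then succeeds on the third attempt.
              String result = retryCall(() -> {
                  if (++calls[0] < 3) throw new RuntimeException("transient error");
                  return "ok";
              }, 6);
              System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
          }
      }
      ```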
  • Constructor Details

    • BaseOpenAI

      public BaseOpenAI()
  • Method Details

    • llmType

      public String llmType()
      Description copied from class: BaseLLM
      Return the type of LLM.
      Specified by:
      llmType in class BaseLLM
    • innerGenerate

      protected LLMResult innerGenerate(List<String> prompts, List<String> stop)
      Call out to OpenAI's endpoint with k unique prompts.
      Specified by:
      innerGenerate in class BaseLLM
      Parameters:
      prompts - The prompts to pass into the model.
      stop - List of stop words to use when generating.
      Returns:
      The full LLM output.
    • asyncInnerGenerate

      protected reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
      Description copied from class: BaseLLM
      Run the LLM on the given prompts async.
      Specified by:
      asyncInnerGenerate in class BaseLLM
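
      innerGenerate above calls out to the endpoint with k unique prompts, and the batchSize field controls how many prompts are sent per request. A minimal sketch of that partitioning step in plain Java; the partition helper is hypothetical, not the library's code:

      ```java
      import java.util.ArrayList;
      import java.util.List;

      // Illustrative sketch of splitting k prompts into sub-batches of size
      // batchSize before each completion request, per the batchSize field above.
      public class BatchSketch {

          // Partition prompts into consecutive sub-lists of at most batchSize elements.
          static List<List<String>> partition(List<String> prompts, int batchSize) {
              List<List<String>> batches = new ArrayList<>();
              for (int i = 0; i < prompts.size(); i += batchSize) {
                  batches.add(prompts.subList(i, Math.min(i + batchSize, prompts.size())));
              }
              return batches;
          }

          public static void main(String[] args) {
              List<String> prompts = List.of("p1", "p2", "p3", "p4", "p5");
              // With batchSize = 2, five prompts yield three requests.
              System.out.println(partition(prompts, 2)); // prints [[p1, p2], [p3, p4], [p5]]
          }
      }
      ```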