Serialized Form

  • Package com.hw.openai.entity.chat

    • Class com.hw.openai.entity.chat.ChatCompletion

      class ChatCompletion extends Object implements Serializable
      • Serialized Fields

        • frequencyPenalty
          float frequencyPenalty
          Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
        • logitBias
          Map<String,Float> logitBias
          Modify the likelihood of specified tokens appearing in the completion.
        • maxTokens
          Integer maxTokens
          The maximum number of tokens to generate in the chat completion.

          The total length of input tokens and generated tokens is limited by the model's context length.

        • messages
          @NotEmpty List<ChatMessage> messages
          A list of messages describing the conversation so far.
        • model
          @NotBlank String model
          ID of the model to use.
        • n
          Integer n
          How many chat completion choices to generate for each input message.
        • presencePenalty
          float presencePenalty
          Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
        • seed
          Integer seed
          This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result.
        • stop
          List<String> stop
          Up to 4 sequences where the API will stop generating further tokens.
        • stream
          boolean stream
          If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
        • temperature
          float temperature
          What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

          We generally recommend altering this or top_p but not both.

        • toolChoice
          String toolChoice
Controls which (if any) function is called by the model. none means the model will not call a function and instead generates a message; auto means the model can pick between generating a message or calling a function.

          none is the default when no functions are present. auto is the default if functions are present.

        • tools
          List<Tool> tools
          A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for.
        • topP
          float topP
          An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

          We generally recommend altering this or temperature but not both.

        • user
          String user
          A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
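The ChatCompletion fields above serialize to a JSON request body. As a minimal sketch of that wire format — using plain java.util collections rather than this library's entity classes, with an illustrative model name and parameter values (assumptions, not defaults of this API) — note that the camelCase fields map to snake_case keys:

```java
import java.util.List;
import java.util.Map;

public class ChatCompletionSketch {

    // Hypothetical request body mirroring the serialized fields above.
    // Keys use the snake_case form expected on the wire.
    public static Map<String, Object> buildRequest() {
        return Map.of(
            "model", "gpt-3.5-turbo",            // @NotBlank model
            "messages", List.of(                 // @NotEmpty messages
                Map.of("role", "system", "content", "You are a helpful assistant."),
                Map.of("role", "user", "content", "Hello!")
            ),
            "temperature", 0.7f,                 // alter this or top_p, not both
            "max_tokens", 256,                   // input + output bounded by context length
            "n", 1,                              // one choice per input message
            "stream", false                      // true => data-only server-sent events
        );
    }

    public static void main(String[] args) {
        System.out.println(buildRequest().get("model"));
    }
}
```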
    • Class com.hw.openai.entity.chat.ChatMessage

      class ChatMessage extends Object implements Serializable
      • Serialized Fields

        • content
          String content
The contents of the message. The content field should always be present in the request, even if its value is null.
        • name
          String name
          The name of the author of this message. May contain a-z, A-Z, 0-9, and underscores, with a maximum length of 64 characters.
        • role
          @NotNull ChatMessageRole role
          The role of the author of this message. One of system, user, or assistant.
        • toolCalls
          List<ToolCall> toolCalls
The tool calls generated by the model, such as function calls, each carrying the name and arguments of a function to invoke.
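The constraint documented for the name field (a-z, A-Z, 0-9, underscores, at most 64 characters) can be checked with a simple pattern. A hypothetical validator — not part of this library — might look like:

```java
import java.util.regex.Pattern;

public class ChatMessageSketch {

    // The documented constraint on the optional "name" field:
    // letters, digits and underscores only, 1 to 64 characters.
    private static final Pattern NAME = Pattern.compile("^[A-Za-z0-9_]{1,64}$");

    public static boolean isValidName(String name) {
        return name != null && NAME.matcher(name).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValidName("weather_lookup")); // valid
        System.out.println(isValidName("bad name!"));      // space and '!' rejected
    }
}
```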
  • Package com.hw.openai.entity.completions

    • Class com.hw.openai.entity.completions.Completion

      class Completion extends Object implements Serializable
      • Serialized Fields

        • bestOf
          Integer bestOf
          Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token). Results cannot be streamed.

          When used with n, best_of controls the number of candidate completions and n specifies how many to return – best_of must be greater than n.

        • echo
          boolean echo
Echo back the prompt in addition to the completion.
        • frequencyPenalty
          float frequencyPenalty
          Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
        • logitBias
          Map<String,Float> logitBias
          Modify the likelihood of specified tokens appearing in the completion.
        • logprobs
          Integer logprobs
Include the log probabilities on the logprobs most likely tokens, as well as the chosen tokens. For example, if logprobs is 5, the API will return a list of the 5 most likely tokens. The API will always return the logprobs of the sampled token, so there may be up to logprobs+1 elements in the response.
        • maxTokens
          Integer maxTokens
          The maximum number of tokens to generate in the completion.
        • model
          @NotBlank String model
          ID of the model to use.
        • n
          Integer n
          How many completions to generate for each prompt.
        • presencePenalty
          float presencePenalty
          Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
        • prompt
          List<String> prompt
          The prompt(s) to generate completions for, encoded as a string, array of strings, array of tokens, or array of token arrays.
        • stop
          List<String> stop
          Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
        • stream
          boolean stream
          Whether to stream back partial progress. If set, tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a data: [DONE] message.
        • suffix
          String suffix
          The suffix that comes after a completion of inserted text.
        • temperature
          float temperature
          What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

          We generally recommend altering this or top_p but not both.

        • topP
          float topP
          An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

          We generally recommend altering this or temperature but not both.

        • user
          String user
          A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
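The best_of/n relationship above (best_of must be greater than n when both are set) is easy to get wrong. A hypothetical guard — again a plain java.util sketch with illustrative values, not this library's builder — could enforce it before the request is sent:

```java
import java.util.List;
import java.util.Map;

public class CompletionSketch {

    // Hypothetical helper enforcing the documented best_of/n relationship
    // before building a (snake_case-keyed) request body.
    public static Map<String, Object> buildRequest(String model, List<String> prompt,
                                                   int n, int bestOf) {
        if (bestOf <= n) {
            throw new IllegalArgumentException("best_of must be greater than n");
        }
        return Map.of(
            "model", model,     // @NotBlank model
            "prompt", prompt,   // string(s) or token array(s) to complete
            "n", n,             // completions returned per prompt
            "best_of", bestOf   // candidates generated server-side
        );
    }

    public static void main(String[] args) {
        Map<String, Object> req =
            buildRequest("text-davinci-003", List.of("Say hi"), 1, 3);
        System.out.println(req.get("best_of"));
    }
}
```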
  • Package com.hw.openai.entity.embeddings

    • Class com.hw.openai.entity.embeddings.Embedding

      class Embedding extends Object implements Serializable
      • Serialized Fields

        • input
          @NotEmpty List<?> input
          Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. Each input must not exceed the max input tokens for the model (8191 tokens for text-embedding-ada-002).
        • model
          @NotBlank String model
          ID of the model to use.
        • user
          String user
          A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
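An Embedding request carries only the fields above; the @NotEmpty constraint on input is the one precondition worth guarding client-side. A minimal sketch (plain java.util collections, illustrative model name):

```java
import java.util.List;
import java.util.Map;

public class EmbeddingSketch {

    // Hypothetical request builder mirroring the serialized fields above.
    public static Map<String, Object> buildRequest(List<String> input) {
        if (input == null || input.isEmpty()) {
            throw new IllegalArgumentException("input must not be empty"); // @NotEmpty
        }
        return Map.of(
            "model", "text-embedding-ada-002", // each input limited to 8191 tokens
            "input", input                     // string(s) or token array(s) to embed
        );
    }

    public static void main(String[] args) {
        System.out.println(buildRequest(List.of("hello world")).get("model"));
    }
}
```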
  • Package com.hw.openai.exception