Package com.hw.langchain.llms.base
Class BaseLLM
java.lang.Object
com.hw.langchain.llms.base.BaseLLM
- All Implemented Interfaces:
BaseLanguageModel
- Direct Known Subclasses:
BaseOpenAI, LLM, Ollama, OpenAIChat
LLM wrapper should take in a prompt and return a string.
- Author:
- HamaWhite
-
Constructor Summary
Constructors
- BaseLLM()
Method Summary
- List<reactor.core.publisher.Flux<AsyncLLMResult>> asyncGeneratePrompt(List<PromptValue> prompts, List<String> stop)
  Take in a list of prompt values and return a Flux for every PromptValue.
- protected abstract reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
  Run the LLM on the given prompts asynchronously.
- reactor.core.publisher.Flux<String> asyncPredict(String text, List<String> stop)
  Predict text from text asynchronously.
- call
  Check the cache and run the LLM on the given prompt and input.
- generate
  Run the LLM on the given prompt and input.
- generatePrompt(List<PromptValue> prompts, List<String> stop)
  Take in a list of prompt values and return an LLMResult.
- protected abstract LLMResult innerGenerate(List<String> prompts, List<String> stop)
  Run the LLM on the given prompts.
- abstract String llmType()
  Return the type of LLM.
- predict
  Predict text from text.
- predictMessages(List<BaseMessage> messages, List<String> stop)
  Predict a message from a list of messages.
Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel
asyncGeneratePrompt, asyncPredict, asyncPredictMessages, predict, predictMessages
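The method summary reflects a template-method design: public entry points such as generate and predict hold the shared plumbing, then delegate to the protected abstract innerGenerate that each provider subclass (BaseOpenAI, Ollama, OpenAIChat, ...) implements. A dependency-free sketch of that shape; the class and method names mirror BaseLLM, but all types below are simplified stand-ins, not the real com.hw.langchain classes:

```java
import java.util.List;

// Simplified stand-in for BaseLLM (assumption: LLMResult reduced to a
// List<String> of generations, purely for illustration).
abstract class SketchBaseLLM {
    // Public entry point: shared validation/plumbing lives in the base class.
    public List<String> generate(List<String> prompts, List<String> stop) {
        if (prompts.isEmpty()) {
            throw new IllegalArgumentException("prompts must not be empty");
        }
        return innerGenerate(prompts, stop); // provider-specific work
    }

    // Each provider subclass implements only this hook.
    protected abstract List<String> innerGenerate(List<String> prompts, List<String> stop);

    // Return the type of LLM, e.g. "openai" or "ollama".
    public abstract String llmType();
}

// A toy provider that echoes its prompts back.
class EchoLLM extends SketchBaseLLM {
    @Override
    protected List<String> innerGenerate(List<String> prompts, List<String> stop) {
        return prompts.stream().map(p -> "echo: " + p).toList();
    }

    @Override
    public String llmType() {
        return "echo";
    }
}
```

With this split, cross-cutting concerns (caching, validation, batching) are written once in the base class and every provider inherits them.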
-
Constructor Details
-
BaseLLM
public BaseLLM()
-
-
Method Details
-
llmType
public abstract String llmType()
Return the type of LLM.
-
innerGenerate
protected abstract LLMResult innerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts.
-
asyncInnerGenerate
protected abstract reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts asynchronously.
-
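asyncInnerGenerate returns a reactor-core Flux so callers can consume a completion token by token as it streams in, instead of waiting for the full string. As a dependency-free illustration of the same push-style idea, here is a sketch where a plain Consumer<String> stands in for the Flux subscriber (an assumption for this example; the real method uses reactor's Flux<AsyncLLMResult>):

```java
import java.util.function.Consumer;

// Dependency-free sketch of streaming generation: tokens are pushed to the
// subscriber one at a time, as a Flux sink would push AsyncLLMResult chunks.
class StreamingSketch {
    // Split the "completion" on spaces and emit each token as it is produced.
    static void streamTokens(String completion, Consumer<String> subscriber) {
        for (String token : completion.split(" ")) {
            subscriber.accept(token); // with reactor this would be sink.next(...)
        }
    }
}
```

A real subscriber might append tokens to a UI as they arrive rather than buffering them.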
call
Check the cache and run the LLM on the given prompt and input.
-
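call is documented as "check the cache and run the LLM": a completion already computed for an identical prompt can be returned without re-invoking the provider. A minimal sketch of that memoization, with a HashMap keyed by the prompt standing in for whatever cache the library actually uses (an assumption; a real key would also account for the stop words and model settings):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Sketch of "check cache and run the LLM" (assumption: an in-memory HashMap
// stands in for the library's real cache implementation).
class CachingCall {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> llm; // the expensive model call
    int misses = 0; // counts real model invocations, for illustration

    CachingCall(Function<String, String> llm) {
        this.llm = llm;
    }

    String call(String prompt) {
        // computeIfAbsent: only invoke the model on a cache miss.
        return cache.computeIfAbsent(prompt, p -> {
            misses++;
            return llm.apply(p);
        });
    }
}
```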
call
-
generate
Run the LLM on the given prompt and input.
-
generatePrompt
Description copied from interface: BaseLanguageModel
Take in a list of prompt values and return an LLMResult.
- Specified by:
generatePrompt in interface BaseLanguageModel
-
asyncGeneratePrompt
public List<reactor.core.publisher.Flux<AsyncLLMResult>> asyncGeneratePrompt(List<PromptValue> prompts, List<String> stop)
Description copied from interface: BaseLanguageModel
Take in a list of prompt values and return a Flux for every PromptValue.
- Specified by:
asyncGeneratePrompt in interface BaseLanguageModel
-
predict
Description copied from interface: BaseLanguageModel
Predict text from text.
- Specified by:
predict in interface BaseLanguageModel
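predict is the single-string convenience over the batch generate path: wrap the input in a one-element prompt list, run it, and unwrap the first generation's text. A sketch of that delegation, where a BiFunction stands in for the batch call (an illustrative assumption, not the real BaseLLM API):

```java
import java.util.List;
import java.util.function.BiFunction;

// Sketch: predict(text, stop) delegating to a batch generate-style call
// (assumption: the batch call is reduced to a BiFunction returning one
// output string per input prompt).
class PredictSketch {
    static String predict(String text, List<String> stop,
                          BiFunction<List<String>, List<String>, List<String>> generate) {
        // One-element batch in, first result out.
        return generate.apply(List.of(text), stop).get(0);
    }
}
```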
-
asyncPredict
Description copied from interface: BaseLanguageModel
Predict text from text asynchronously.
- Specified by:
asyncPredict in interface BaseLanguageModel
-
predictMessages
Description copied from interface: BaseLanguageModel
Predict a message from a list of messages.
- Specified by:
predictMessages in interface BaseLanguageModel
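predictMessages adapts a plain-text LLM to a chat-style interface: the message list is flattened into one prompt string the model can consume, and the reply is wrapped back into a message. A self-contained sketch of the flattening step; the Msg record and the "role: content" formatting are illustrative assumptions, not the real BaseMessage API:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative stand-in for the real BaseMessage type.
record Msg(String role, String content) {}

class ChatAdapterSketch {
    // Flatten a message list into a single prompt for a text-in/text-out LLM.
    static String toPrompt(List<Msg> messages) {
        return messages.stream()
                .map(m -> m.role() + ": " + m.content())
                .collect(Collectors.joining("\n"));
    }
}
```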
-