Package com.hw.langchain.llms.base
Class BaseLLM
java.lang.Object
com.hw.langchain.llms.base.BaseLLM
- All Implemented Interfaces:
BaseLanguageModel
- Direct Known Subclasses:
BaseOpenAI, LLM, Ollama, OpenAIChat
LLM wrapper should take in a prompt and return a string.
- Author:
- HamaWhite
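Per the class description, a subclass supplies `llmType()` and the generation logic, and the wrapper turns a prompt into a string. The following is a minimal, self-contained sketch of that contract using simplified stand-in types; `SketchBaseLLM`, `FakeLLM`, the one-field `LLMResult`, and the echo behavior are all illustrative assumptions, not the library's actual classes:

```java
import java.util.List;

public class BaseLlmSketch {
    // Simplified stand-in for the library's LLMResult (illustrative only).
    record LLMResult(List<String> generations) {}

    // Mirrors BaseLLM's abstract surface: subclasses provide llmType()
    // and innerGenerate(prompts, stop).
    abstract static class SketchBaseLLM {
        public abstract String llmType();

        protected abstract LLMResult innerGenerate(List<String> prompts, List<String> stop);

        // "Run the LLM on the given prompt and input."
        public LLMResult generate(List<String> prompts, List<String> stop) {
            return innerGenerate(prompts, stop);
        }
    }

    // A fake subclass that just echoes each prompt back.
    static class FakeLLM extends SketchBaseLLM {
        @Override
        public String llmType() {
            return "fake";
        }

        @Override
        protected LLMResult innerGenerate(List<String> prompts, List<String> stop) {
            return new LLMResult(prompts.stream().map(p -> "echo: " + p).toList());
        }
    }

    public static void main(String[] args) {
        SketchBaseLLM llm = new FakeLLM();
        LLMResult result = llm.generate(List.of("hello"), List.of());
        System.out.println(llm.llmType());               // fake
        System.out.println(result.generations().get(0)); // echo: hello
    }
}
```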
-
Constructor Summary
-
Method Summary
Modifier and Type / Method / Description

List<reactor.core.publisher.Flux<AsyncLLMResult>>
asyncGeneratePrompt(List<PromptValue> prompts, List<String> stop)
Take in a list of prompt values and return a Flux for every PromptValue.

protected abstract reactor.core.publisher.Flux<AsyncLLMResult>
asyncInnerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts async.

reactor.core.publisher.Flux<String>
asyncPredict(String text, List<String> stop)
Predict text from text async.

call
Check Cache and run the LLM on the given prompt and input.

generate
Run the LLM on the given prompt and input.

LLMResult
generatePrompt(List<PromptValue> prompts, List<String> stop)
Take in a list of prompt values and return an LLMResult.

protected abstract LLMResult
innerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts.

abstract String
llmType()
Return type of llm.

predict
Predict text from text.

predictMessages(List<BaseMessage> messages, List<String> stop)
Predict message from messages.

Methods inherited from class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
Methods inherited from interface com.hw.langchain.base.language.BaseLanguageModel
asyncGeneratePrompt, asyncPredict, asyncPredictMessages, predict, predictMessages
-
Constructor Details
-
BaseLLM
public BaseLLM()
-
-
Method Details
-
llmType
abstract String llmType()
Return type of llm. -
innerGenerate
protected abstract LLMResult innerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts. -
asyncInnerGenerate
protected abstract reactor.core.publisher.Flux<AsyncLLMResult> asyncInnerGenerate(List<String> prompts, List<String> stop)
Run the LLM on the given prompts async. -
call
Check Cache and run the LLM on the given prompt and input. -
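call is documented as "Check Cache and run the LLM on the given prompt and input." A minimal sketch of that check-cache-then-generate pattern follows; the HashMap cache, the Function stand-in for the model, and the computeIfAbsent wiring are illustrative assumptions, not the library's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CachedCallSketch {
    private final Map<String, String> cache = new HashMap<>();
    private final Function<String, String> model; // stand-in for the real LLM call

    public CachedCallSketch(Function<String, String> model) {
        this.model = model;
    }

    // Check the cache first; only run the model on a cache miss.
    public String call(String prompt) {
        return cache.computeIfAbsent(prompt, model);
    }

    public static void main(String[] args) {
        int[] modelRuns = {0};
        CachedCallSketch llm = new CachedCallSketch(p -> {
            modelRuns[0]++;
            return "completion for: " + p;
        });
        llm.call("hello");
        llm.call("hello"); // second call is served from the cache
        System.out.println(modelRuns[0]); // 1
    }
}
```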
call
-
generate
Run the LLM on the given prompt and input. -
generatePrompt
Description copied from interface: BaseLanguageModel
Take in a list of prompt values and return an LLMResult.
- Specified by:
generatePrompt in interface BaseLanguageModel
-
asyncGeneratePrompt
public List<reactor.core.publisher.Flux<AsyncLLMResult>> asyncGeneratePrompt(List<PromptValue> prompts, List<String> stop)
Description copied from interface: BaseLanguageModel
Take in a list of prompt values and return a Flux for every PromptValue.
- Specified by:
asyncGeneratePrompt in interface BaseLanguageModel
-
predict
Description copied from interface: BaseLanguageModel
Predict text from text.
- Specified by:
predict in interface BaseLanguageModel
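predict takes input text plus a list of stop strings. A common convention for such stop parameters (an assumption here, since this page does not define the semantics) is to truncate the raw model output at the first stop sequence; the helper below sketches that behavior with illustrative names:

```java
import java.util.List;

public class StopSequenceSketch {
    // Illustrative helper: truncate raw model output at the earliest
    // occurrence of any stop sequence, a common convention for "stop"
    // parameters in LLM wrappers.
    static String enforceStop(String text, List<String> stop) {
        int cut = text.length();
        for (String s : stop) {
            int idx = text.indexOf(s);
            if (idx >= 0 && idx < cut) {
                cut = idx;
            }
        }
        return text.substring(0, cut);
    }

    public static void main(String[] args) {
        String raw = "Answer: 42\nObservation: done";
        System.out.println(enforceStop(raw, List.of("\nObservation:"))); // Answer: 42
    }
}
```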
-
asyncPredict
Description copied from interface: BaseLanguageModel
Predict text from text async.
- Specified by:
asyncPredict in interface BaseLanguageModel
-
predictMessages
Description copied from interface: BaseLanguageModel
Predict message from messages.
- Specified by:
predictMessages in interface BaseLanguageModel
-
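predictMessages maps a list of chat messages to a single model reply. One way a text-completion LLM can satisfy that interface is to flatten the messages into a single prompt and wrap the reply back up as a message; the sketch below illustrates that idea with a simplified stand-in Message record (the role names, the flattening format, and the method wiring are assumptions, not the library's BaseMessage API):

```java
import java.util.List;
import java.util.function.Function;
import java.util.stream.Collectors;

public class PredictMessagesSketch {
    // Stand-in for the library's BaseMessage (role + content); illustrative only.
    record Message(String role, String content) {}

    // Flatten chat messages into one prompt, run the text model on it,
    // and wrap the reply back up as an "ai" message.
    static Message predictMessages(List<Message> messages, Function<String, String> model) {
        String prompt = messages.stream()
                .map(m -> m.role() + ": " + m.content())
                .collect(Collectors.joining("\n"));
        return new Message("ai", model.apply(prompt));
    }

    public static void main(String[] args) {
        Message reply = predictMessages(
                List.of(new Message("human", "ping")),
                prompt -> "pong");
        System.out.println(reply.role() + ": " + reply.content()); // ai: pong
    }
}
```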