Package com.hw.langchain.chains.llm
Class LLMChain
java.lang.Object
com.hw.langchain.chains.base.Chain
com.hw.langchain.chains.llm.LLMChain
- Direct Known Subclasses:
ConversationChain
Chain to run queries against LLMs.

- Author:
  HamaWhite

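Example usage (a minimal sketch: the OpenAI and PromptTemplate classes, their
package names and builder/factory calls are assumptions about the surrounding
library and may differ in your version; only LLMChain and predict come from this
page, and predict is assumed to return the completion as a String):

    import java.util.Map;

    import com.hw.langchain.chains.llm.LLMChain;
    // Assumed locations of the concrete model and prompt classes:
    import com.hw.langchain.llms.openai.OpenAI;
    import com.hw.langchain.prompts.prompt.PromptTemplate;

    public class LLMChainExample {
        public static void main(String[] args) {
            // Assumed OpenAI builder API; the API key is read from the environment.
            var llm = OpenAI.builder().temperature(0).build().init();

            // A prompt with one input variable, so the chain's inputKeys() is ["product"].
            var prompt = PromptTemplate.fromTemplate(
                    "What is a good name for a company that makes {product}?");

            var chain = new LLMChain(llm, prompt);

            // predict(kwargs) formats the prompt with the given keys and returns the completion.
            String answer = chain.predict(Map.of("product", "colorful socks"));
            System.out.println(answer);
        }
    }
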
Field Summary
Fields
- protected BaseLanguageModel llm
- protected String outputKey
- protected BaseLLMOutputParser<String> outputParser
  Output parser to use.
- protected BasePromptTemplate prompt
  Prompt object to use.
- protected boolean returnFinalOnly
  Whether to return only the final parsed result.

Constructor Summary
Constructors
- LLMChain(BaseLanguageModel llm, BasePromptTemplate prompt)
- LLMChain(BaseLanguageModel llm, BasePromptTemplate prompt, String outputKey)

Method Summary
Methods
- asyncInnerCall(Map<String, Object> inputs)
  Runs the logic of this chain and returns the async output.
- reactor.core.publisher.Flux<String> asyncPredict(Map<String, Object> kwargs)
  Format prompt with kwargs and pass to LLM async.
- chainType()
- innerCall(Map<String, Object> inputs)
  Runs the logic of this chain and returns the output.
- inputKeys()
  Will be whatever keys the prompt expects.
- outputKeys()
  Will always return text key.
- predict(Map<String, Object> kwargs)
  Format prompt with kwargs and pass to LLM.
- <T> T predictAndParse(Map<String, Object> kwargs)
  Call predict and then parse the results.

Field Details
- llm
  protected BaseLanguageModel llm

- prompt
  protected BasePromptTemplate prompt
  Prompt object to use.

- outputKey
  protected String outputKey

- outputParser
  protected BaseLLMOutputParser<String> outputParser
  Output parser to use. Defaults to one that takes the most likely string but does not change it.

- returnFinalOnly
  protected boolean returnFinalOnly
  Whether to return only the final parsed result. Defaults to true. If false, will return a bunch of extra information about the generation.

Constructor Details
- LLMChain(BaseLanguageModel llm, BasePromptTemplate prompt)

- LLMChain(BaseLanguageModel llm, BasePromptTemplate prompt, String outputKey)
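
The three-argument constructor replaces the default output key (a sketch
continuing the example above; "text" as the default key is taken from
outputKeys() below):

    // The chain's result map will use "answer" instead of the default "text" key.
    LLMChain namedChain = new LLMChain(llm, prompt, "answer");
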
Method Details
- chainType

- inputKeys
  Will be whatever keys the prompt expects.

- outputKeys
  Will always return text key.
  Specified by: outputKeys in class Chain
  Returns: the list of output keys

- innerCall
  Description copied from class Chain: Runs the logic of this chain and returns the output.

- asyncInnerCall
  Description copied from class Chain: Runs the logic of this chain and returns the async output.
  Overrides: asyncInnerCall in class Chain
  Parameters: inputs - the inputs to be processed by the chain
  Returns: a Flux of maps containing the output events generated by the chain

- predict
  Format prompt with kwargs and pass to LLM.
  Parameters: kwargs - Keys to pass to prompt template.
  Returns: Completion from LLM.
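
  A sketch with a multi-variable prompt (the {adjective}/{topic} template is
  hypothetical; the map keys must match the prompt's input variables, i.e. the
  chain's inputKeys()):

      // Assumes a prompt template containing {adjective} and {topic}.
      String joke = chain.predict(Map.of(
              "adjective", "funny",
              "topic", "chickens"));
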
- asyncPredict
  Format prompt with kwargs and pass to LLM async.
  Parameters: kwargs - Keys to pass to prompt template.
  Returns: Completion from LLM.
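
  A sketch of consuming the reactive result, continuing the example above
  (requires reactor-core; whether the Flux emits streamed chunks or one complete
  string is not specified on this page and depends on the underlying model):

      // reactor.core.publisher.Flux from reactor-core
      Flux<String> completion = chain.asyncPredict(Map.of("product", "colorful socks"));
      completion.subscribe(
              System.out::print,            // each emitted string
              Throwable::printStackTrace,   // error handler
              System.out::println);         // completion signal
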
- predictAndParse
  Call predict and then parse the results.
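
  A hedged sketch: by default the outputParser returns the most likely string
  unchanged, so this only differs from predict when a custom BaseLLMOutputParser
  is configured on the chain; the comma-separated-list parser and the {subject}
  prompt assumed here are hypothetical:

      // T is inferred from the assignment target; the (assumed) parser splits
      // the completion into a List<String>.
      List<String> colors = chain.predictAndParse(Map.of("subject", "colors"));
      colors.forEach(System.out::println);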