Currently, when using an LLMChain in LangChain, I can get the prompt template used and the response from the model, but is it possible to get the exact text sent as the query to the model, without manually doing the prompt-template filling myself?
An example:
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
llm = OpenAI(model_name="gpt-3.5-turbo-0613")
prompt = PromptTemplate(input_variables=["a", "b"], template="Hello {a} and {b}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain({"a": "some text", "b": "some other text"})
I cannot find what I am looking for in the chain or result objects. I tried some options such as return_final_only=True and include_run_info=True, but they don't include the final prompt text.
Pass verbose=True to the LLMChain constructor:
chain = LLMChain(prompt=..., llm=..., verbose=True)
but the problem is that it just prints the formatted prompt to stdout.
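Since verbose=True only prints, one stdlib workaround is to capture stdout around the chain call with contextlib.redirect_stdout. This is a sketch, not a LangChain API: the print calls below stand in for what the chain would emit with verbose=True.

```python
import io
from contextlib import redirect_stdout

buf = io.StringIO()
with redirect_stdout(buf):
    # Stand-in for: chain({"a": "some text", "b": "some other text"})
    # with verbose=True, which would print the formatted prompt here.
    print("Prompt after formatting:")
    print("Hello some text and some other text")

captured = buf.getvalue()  # the verbose output, now available as a string
print(captured)
```

The same pattern works with logging handlers if you prefer not to touch stdout globally.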
I'm also looking for a way to get the exact prompt string that was used.
Here is a way to see it (passing the same variables you would pass to the chain):
LLMChain(prompt=prompt, llm=llm).prompt.format_prompt(a="some text", b="some other text").to_string()
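For intuition, the template filling itself is just standard Python string substitution; this stand-alone sketch (no LangChain required, using str.format as a stand-in for PromptTemplate) reproduces what format_prompt(...).to_string() returns for the question's template:

```python
# Stand-in for PromptTemplate: for this simple template, plain
# str.format performs the same variable substitution.
template = "Hello {a} and {b}"
exact_prompt = template.format(a="some text", b="some other text")
print(exact_prompt)  # Hello some text and some other text
```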