How to get exact message sent to LLM using LangChain's LLMChain (python)?

Currently, when using an LLMChain in LangChain, I can get the prompt template used and the response from the model, but is it possible to get the exact text sent as the query to the model, without manually filling in the prompt template myself?

An example:

from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = OpenAI(model_name="gpt-3.5-turbo-0613")
prompt = PromptTemplate(input_variables=["a", "b"], template="Hello {a} and {b}")
chain = LLMChain(llm=llm, prompt=prompt)
result = chain({"a": "some text", "b": "some other text"})

I cannot find anything like that in the chain or result objects. I tried options such as return_final_only=True and include_run_info=True, but they don't include what I am looking for.

asked Feb 26 '26 by cserpell

2 Answers

Pass verbose=True to the LLMChain constructor:

chain = LLMChain(prompt=..., llm=..., verbose=True)

but the problem is that it just prints to stdout.

I'm also looking for a way to get the exact prompt string that was used.
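Since verbose=True only prints to stdout, one workaround is to capture that output programmatically. This is a minimal sketch using only the standard library; run_chain here is a hypothetical stand-in for calling the chain, and its printed text is invented for illustration — only the stdout-capture technique is the point:

```python
import contextlib
import io

def run_chain():
    # Hypothetical stand-in for chain(...) with verbose=True,
    # which prints the formatted prompt to stdout.
    print("Prompt after formatting:\nHello some text and some other text")

# Redirect stdout into a buffer while the chain runs.
buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    run_chain()

captured = buffer.getvalue()
# 'captured' now holds everything the verbose chain printed,
# including the formatted prompt, as a string you can inspect.
```

This is fragile (you get the whole verbose log, not just the prompt), but it avoids touching LangChain internals.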

answered Mar 01 '26 by adoji

Here is a way to see it — format the prompt yourself with the same variables you pass to the chain (using the a and b variables from the question):

LLMChain(prompt=prompt, llm=llm).prompt.format_prompt(a="some text", b="some other text").to_string()
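For the question's template, PromptTemplate's default "f-string" format fills placeholders the same way plain str.format does, so the rendered string can be reproduced without LangChain at all. A dependency-free illustration (the template and variables are taken from the question; this mirrors what format_prompt(...).to_string() returns, it is not LangChain code):

```python
# The question's template, rendered the way PromptTemplate's
# default f-string format does under the hood.
template = "Hello {a} and {b}"
rendered = template.format(a="some text", b="some other text")
print(rendered)  # Hello some text and some other text
```

This can be handy for sanity-checking what the chain will send before actually calling the model.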
answered Mar 01 '26 by Gabriel Lopez