How to restrict llama_index queries to respond only from local data

Tags:

llama-index

As described in https://gpt-index.readthedocs.io/en/latest/guides/tutorials/building_a_chatbot.html, we wrote a chatbot to index our reference materials, and it works fine. The biggest issue is that the bot sometimes responds to questions using its own knowledge, which lies outside the reference manuals.

While this is sometimes helpful, there are situations where such answers are completely wrong in the context of our reference materials.

Is there a way to restrict the bot to answer only from the indexes we created from our own documents, and to use the LLM only to format the response in a conversational way?

asked Dec 19 '25 by Ishan Hettiarachchi

1 Answer

You can try evaluating your result with ResponseEvaluator, which will give you a Yes or No depending on whether any of the source nodes were used in the response. The documentation says:

This allows you to measure hallucination - if the response does not match the retrieved sources, this means that the model may be "hallucinating" an answer since it is not rooting the answer in the context provided to it in the prompt.

from langchain.chat_models import ChatOpenAI

from llama_index import GPTVectorStoreIndex, LLMPredictor, ServiceContext
from llama_index.evaluation import ResponseEvaluator

# build service context
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, model_name="gpt-4"))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

# build index (e.g. vector_index = GPTVectorStoreIndex.from_documents(...))
...

# define evaluator
evaluator = ResponseEvaluator(service_context=service_context)

# query index
query_engine = vector_index.as_query_engine()
response = query_engine.query("What battles took place in New York City in the American Revolution?")
eval_result = evaluator.evaluate(response)
print(str(eval_result))
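
If you want to enforce this at query time rather than just measure it, you can gate the response on the evaluation result. This is a minimal sketch: the grounded_query helper and the fallback message are my own, and it assumes this version of the evaluator returns the string "YES" or "NO".

def grounded_query(query_engine, evaluator, question):
    """Return the indexed answer only if the evaluator judges it grounded."""
    response = query_engine.query(question)
    eval_result = evaluator.evaluate(response)  # expected to be "YES" or "NO"
    if str(eval_result).strip().upper() == "YES":
        return str(response)
    # fall back instead of surfacing a potentially hallucinated answer
    return "I could not find an answer to that in the reference materials."

print(grounded_query(query_engine, evaluator,
                     "What battles took place in New York City in the American Revolution?"))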

My other suggestion would be to make a custom question-answering prompt that instructs the model not to answer when the answer is not in the context. For example:

QA_PROMPT_TMPL = (
"We have provided context information below. \n"
"---------------------\n"
"{context_str}"
"\n---------------------\n"
"Do not give me an answer if it is not mentioned in the context as a fact. \n"
"Given this information, please provide me with an answer to the following:\n{query_str}\n")
answered Dec 22 '25 by Trajanov Risto
