Ollama Status Code 404 Error when trying to run with LangChain

I am getting the following error when trying to run Ollama with Llama 3 and invoking the model from LangChain (Python):

langchain_community.llms.ollama.OllamaEndpointNotFoundError: Ollama call failed with status code 404. Maybe your model is not found and you should pull the model with `ollama pull llama3`.

Context:

  • Running on macOS
  • Code:
from langchain_community.llms import Ollama

llm = Ollama(model="llama3", base_url="http://localhost:11434/")
llm.invoke("Why is the sky blue?")
  • Tried running Ollama as a service with ollama serve (it does not seem to make a difference)

  • I can see that Ollama is running on localhost:11434

  • I am getting a 404 error when I try to access localhost:11434/llama3 (a quick way to check what the server actually exposes is sketched after this list)

  • ollama list shows llama3 installed
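
For what it's worth, a 404 at localhost:11434/llama3 is expected even when everything is healthy: Ollama does not serve models at a path named after the model; its routes live under /api. A minimal sketch for confirming the server is up and the model is registered, using the GET /api/tags endpoint (the requests library is an assumption; any HTTP client works):

import requests  # assumption: requests is installed; any HTTP client works

# GET /api/tags lists the models the local Ollama server knows about
resp = requests.get("http://localhost:11434/api/tags")
resp.raise_for_status()

models = [m["name"] for m in resp.json().get("models", [])]
print(models)  # e.g. ['llama3:latest'] -- names carry a tag suffix

If llama3 (or llama3:latest) is missing from that list, ollama pull llama3 is the fix the error message suggests.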

asked Oct 18 '25 by Parvez Shah

2 Answers

I faced the same issue with both llama3 and llama2 on my Mac.

Here is how I resolved it:

  1. Remove the trailing slash from the base_url param:

llm = Ollama(model="llama3", base_url="http://localhost:11434")

  2. Restart the Jupyter kernel.
  3. Run all the cells again (a full end-to-end sketch follows).
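
Putting the fix together, a minimal end-to-end sketch (assuming llama3 has already been pulled with ollama pull llama3):

from langchain_community.llms import Ollama

# No trailing slash on base_url: the client appends paths such as /api/generate,
# and a doubled slash ("...11434//api/generate") can 404 on some setups
llm = Ollama(model="llama3", base_url="http://localhost:11434")

print(llm.invoke("Why is the sky blue?"))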
answered Oct 20 '25 by Pavan Gupta

It works for me in a Jupyter notebook when I set model="llama2":

llm = Ollama(model="llama2")
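
The likely reason this works while model="llama3" returns a 404 is simply that only llama2 has been pulled locally. A hedged sketch that pulls the model first (shelling out to the standard ollama pull CLI command via subprocess, purely for illustration):

import subprocess

from langchain_community.llms import Ollama

# Make sure the model exists locally before invoking it;
# this runs the standard `ollama pull` CLI command
subprocess.run(["ollama", "pull", "llama2"], check=True)

llm = Ollama(model="llama2")
print(llm.invoke("Why is the sky blue?"))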
answered Oct 20 '25 by Abaid Khan


