
GPT-3 davinci gives different results with the same prompt

I am not sure if you have access to GPT-3, particularly Davinci (the text-completion engine). You can find the API and info here.

I've been trying this tool for the past hour, and every time I hit their API with the same prompt (literally the same input), I get a different response.

  1. Do you happen to encounter the same situation?
  2. If this is expected, do you happen to know the reason behind it?

Here are some examples

Request body (I tried to use the same example they provide)

{
  "prompt": "Once upon a time",
  "max_tokens": 3,
  "temperature": 1,
  "top_p": 1,
  "n": 1,
  "stream": false,
  "logprobs": null,
  "stop": "\n"
}

Output 1

"choices": [
        {
            "text": ", this column",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ]

Output 2

"choices": [
        {
            "text": ", winter break",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ]

Output 3

"choices": [
        {
            "text": ", the traditional",
            "index": 0,
            "logprobs": null,
            "finish_reason": "length"
        }
    ]


1 Answer

I just talked to OpenAI and they said that their responses are not deterministic. They are probabilistic so that the model can be creative. To make the output deterministic, or at least reduce the randomness, they suggest adjusting the temperature parameter. By default it is 1 (i.e. maximum risk-taking). To make the output completely deterministic, set it to 0.
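
For illustration, here is a minimal sketch of the same request with temperature set to 0, assuming the v1/completions HTTP endpoint, the Python requests library, and an API key in the OPENAI_API_KEY environment variable (those details are assumptions, not part of the question):

import os
import requests

# Same request body as in the question, but with temperature=0 so the model
# always picks the most likely next token (near-deterministic output).
response = requests.post(
    "https://api.openai.com/v1/completions",  # assumed endpoint
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "davinci",       # assumed model name
        "prompt": "Once upon a time",
        "max_tokens": 3,
        "temperature": 0,         # 0 = no risk-taking, greedy decoding
        "top_p": 1,
        "n": 1,
        "stream": False,
        "logprobs": None,
        "stop": "\n",
    },
)
print(response.json()["choices"][0]["text"])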

Another parameter, top_p (default 1), can also be used to control how deterministic the output is. However, they don't recommend tweaking both temperature and top_p; adjusting only one of them is enough.
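
For completeness, a hypothetical variant of the payload above that leaves temperature at its default and shrinks top_p instead (the exact value 0.1 is my own illustration, not OpenAI's recommendation):

# Variant: leave temperature alone and shrink top_p instead.
# top_p=0.1 restricts sampling to the smallest set of tokens whose
# probabilities sum to 10%, which in practice is usually just the
# single most likely token.
payload_top_p = {
    "model": "davinci",        # assumed model name
    "prompt": "Once upon a time",
    "max_tokens": 3,
    "temperature": 1,          # left at default, per the advice to tweak only one knob
    "top_p": 0.1,              # near-greedy sampling
    "n": 1,
    "stop": "\n",
}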
