I have an assistant with an ID created at https://platform.openai.com/.
I want to use it instead of the Completions API or Chat Completions API. How do I do that?
import { config } from "dotenv";
config();
import { OpenAI } from "openai";
import readline from "readline";
const openai = new OpenAI({ apiKey: process.env.API_KEY });
const userInterface = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});
let messages = [];
userInterface.prompt();
userInterface.on("line", async (input) => {
  messages.push({ role: "user", content: input });
  const res = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: messages,
  });
  const assistantMessage = res.choices[0].message.content;
  console.log(assistantMessage);
  messages.push({ role: "assistant", content: assistantMessage });
  userInterface.prompt();
});
I can fetch the assistant through the API by creating a thread and a run, like below. I just don't know how to use it in the code above so that the user is chatting with my assistant.
const emptyThread = await openai.beta.threads.create();
const run = await openai.beta.threads.runs.create(emptyThread.id, {
  assistant_id: "asst_xxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
});
console.log(run);
Using the OpenAI Assistants API is fundamentally different (i.e., more complex) than using other APIs, such as the Completions API or the Chat Completions API.
I've made a YouTube tutorial on how to use the Assistants API and posted the code on my GitHub profile.
Here are the steps you need to follow to get a response from the assistant:

Step 1: Create an Assistant

The easiest way to do this is to use the OpenAI platform.
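If you'd rather create the assistant in code instead of through the platform UI, the SDK also exposes an assistants endpoint. Here is a minimal Node.js sketch; the name, instructions, and model values below are placeholders, not something from the question.

Node.js:
// Sketch: create an Assistant programmatically instead of via the platform UI.
// name, instructions, and model are placeholder values to replace with your own.
const myAssistant = await openai.beta.assistants.create({
  name: "Chat Assistant",
  instructions: "You are a helpful assistant that answers the user's questions.",
  model: "gpt-3.5-turbo",
});

// Save this ID; it's what you later pass as assistant_id when creating a run.
console.log(myAssistant.id);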

Step 2: Create a Thread

Python:
my_thread = client.beta.threads.create()
Node.js:
const myThread = await openai.beta.threads.create();

Step 3: Add a Message to the Thread

Python:
my_thread_message = client.beta.threads.messages.create(
    thread_id=my_thread.id,
    role="user",
    content=user_input,
)
Node.js:
const myThreadMessage = await openai.beta.threads.messages.create(myThread.id, {
  role: "user",
  content: userInput,
});

Step 4: Run the Assistant

Python:
my_run = client.beta.threads.runs.create(
    thread_id=my_thread.id,
    assistant_id=assistant_id,
)
Node.js:
const myRun = await openai.beta.threads.runs.create(myThread.id, {
  assistant_id: assistantID,
});

Steps 5 and 6: Periodically retrieve the Run to check its status and, once it's completed, retrieve the assistant's response

Note: This is the trickiest part. The code below covers both steps.

Python:
# Periodically retrieve the run to check its status
while my_run.status in ["queued", "in_progress"]:
    keep_retrieving_run = client.beta.threads.runs.retrieve(
        thread_id=my_thread.id, run_id=my_run.id
    )

    # If the run status is completed
    if keep_retrieving_run.status == "completed":
        # Retrieve the messages added by the assistant to the thread
        all_messages = client.beta.threads.messages.list(thread_id=my_thread.id)

        # Display the assistant's response (the most recent message)
        print(f"Assistant: {all_messages.data[0].content[0].text.value}")
        break
    elif keep_retrieving_run.status in ["queued", "in_progress"]:
        pass
    else:
        break
Node.js:
// Asynchronous function to poll the run status and display the assistant's response
async function retrieveRun(threadId, runId) {
  while (true) {
    const keepRetrievingRun = await openai.beta.threads.runs.retrieve(
      threadId,
      runId
    );

    if (keepRetrievingRun.status === "completed") {
      // Retrieve the messages added by the assistant to the thread
      const allMessages = await openai.beta.threads.messages.list(threadId);

      // Display the assistant's response (the most recent message)
      console.log("Assistant: ", allMessages.data[0].content[0].text.value, "\n");
      break;
    } else if (
      keepRetrievingRun.status === "queued" ||
      keepRetrievingRun.status === "in_progress"
    ) {
      // Keep polling until the run leaves the queued/in_progress state
    } else {
      // failed, cancelled, expired, requires_action, etc.
      break;
    }
  }
}

await retrieveRun(myThread.id, myRun.id);
See the full Python code or the full Node.js code.
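To tie this back to the chat loop in the question, here's one way the pieces might fit together. This is a minimal sketch, assuming the openai Node.js SDK v4, an API key in the API_KEY environment variable, and the assistant ID in an ASSISTANT_ID environment variable (both variable names are placeholders); error handling and polling backoff are kept deliberately simple.

Node.js:
import { config } from "dotenv";
config();
import { OpenAI } from "openai";
import readline from "readline";

const openai = new OpenAI({ apiKey: process.env.API_KEY });
const assistantID = process.env.ASSISTANT_ID; // e.g. "asst_..."

const userInterface = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

// Create one thread up front and reuse it so the assistant keeps the conversation context
const myThread = await openai.beta.threads.create();

userInterface.prompt();
userInterface.on("line", async (input) => {
  // Step 3: Add the user's message to the thread
  await openai.beta.threads.messages.create(myThread.id, {
    role: "user",
    content: input,
  });

  // Step 4: Run the assistant on the thread
  const myRun = await openai.beta.threads.runs.create(myThread.id, {
    assistant_id: assistantID,
  });

  // Steps 5 and 6: Poll the run until it finishes, then print the latest assistant message
  while (true) {
    const run = await openai.beta.threads.runs.retrieve(myThread.id, myRun.id);
    if (run.status === "completed") {
      const allMessages = await openai.beta.threads.messages.list(myThread.id);
      console.log("Assistant:", allMessages.data[0].content[0].text.value);
      break;
    } else if (run.status === "queued" || run.status === "in_progress") {
      // Wait briefly before polling again
      await new Promise((resolve) => setTimeout(resolve, 500));
    } else {
      break; // failed, cancelled, expired, requires_action, etc.
    }
  }

  userInterface.prompt();
});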