Replies: 1 comment 1 reply
-
Hi snowyday, the error occurs because a `ModelStream` object is iterable but not subscriptable: you have to iterate over the stream and read the captured variable from each yielded state, rather than indexing the stream itself. Here's how you can handle the streaming and access the response (a sketch, assuming guidance's `models.OpenAI` wrapper and its `stream()` API — note the import is from `guidance`, not the raw `openai` client):

```python
from guidance import gen
from guidance.models import OpenAI

# Create an instance of the model (guidance's OpenAI wrapper)
lm = OpenAI("gpt-3.5-turbo-instruct", echo=False)

# Build the stream by adding the prompt and the generation call to it
stream = lm.stream() + "Hi!" + gen(name="response", temperature=0.1)

# Variable to accumulate the output
step_output = ""

# Iterate over the partial states yielded by the stream
for event in stream:
    # Access the captured variable once it is available in this state
    if "response" in event:
        post_text = event["response"]
        # Append only the part we have not seen yet
        step_output += post_text[len(step_output):]

# step_output now contains the entire accumulated response from the model
print(step_output)
```

In this revised code, the streaming process is initiated by calling `lm.stream()` and the response is read per-event inside the loop, never by indexing the stream object itself. Make sure you are using the correct libraries and methods as per the guidance and OpenAI API documentation to avoid other potential issues. Hope this helps!
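The accumulation pattern above — appending only the suffix that is new in each event — can be exercised without an API key by simulating the stream of partial responses. The event dicts below are mock data standing in for the real guidance objects:

```python
def accumulate(events, key="response"):
    """Rebuild the full text from a stream of growing partial responses."""
    step_output = ""
    for event in events:
        if key in event:
            post_text = event[key]
            # Append only the part we have not seen yet
            step_output += post_text[len(step_output):]
    return step_output

# Simulated stream: each event carries the response generated so far
mock_stream = [
    {"response": "Hello"},
    {"response": "Hello, wor"},
    {"response": "Hello, world!"},
]
print(accumulate(mock_stream))  # Hello, world!
```

Because each event contains the full text generated so far, slicing with `len(step_output)` is what keeps the loop from duplicating earlier chunks.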
-
I'm encountering a TypeError when attempting to process a guidance model stream with the OpenAI API in Python. Here's the relevant code:
When executing this code, I get the following error: TypeError: 'ModelStream' object is not subscriptable.
Could someone clarify why this error occurs and how to properly access the 'response' in a ModelStream object?
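The error itself is generic Python behavior, not specific to guidance: any object that defines `__iter__` but not `__getitem__` raises this `TypeError` when you index it. A minimal, guidance-free sketch (the `FakeStream` class is hypothetical, built only to reproduce the message):

```python
class FakeStream:
    """Stand-in for a ModelStream: iterable, but not subscriptable."""

    def __init__(self, events):
        self.events = events

    def __iter__(self):
        # Yield partial states one at a time, like a streaming response
        return iter(self.events)


stream = FakeStream([{"response": "Hi"}, {"response": "Hi there"}])

try:
    stream["response"]  # indexing the stream itself fails
except TypeError as exc:
    print(exc)  # 'FakeStream' object is not subscriptable

# The fix: iterate, and read from each yielded state instead
for event in stream:
    print(event["response"])
```

So the `'response'` capture lives on the states the stream yields during iteration, not on the stream object itself.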