Is there a reason why we can't use the o1 models? They came to the API some time ago, and I am able to use them in other apps through the API.
As far as I’m aware, o1 does not support text-streaming yet (the feature that makes responses appear character by character). We can’t currently add models that don’t support it. Maybe they’ve recently added support, in which case we could do the integration.
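For context, text-streaming means the client receives the reply in small chunks and renders each one as it arrives, rather than waiting for the full response. Below is a minimal sketch of that consumption pattern, using a simulated stream instead of a real API call (the `fake_stream` helper is hypothetical, purely for illustration); with the OpenAI Python SDK, the real call would pass `stream=True` and iterate over the returned chunks the same way.

```python
def fake_stream(text, chunk_size=4):
    """Yield a response a few characters at a time, like a streaming API would."""
    for i in range(0, len(text), chunk_size):
        yield text[i:i + chunk_size]

def consume(stream):
    """Accumulate chunks as they arrive so the reply can render incrementally."""
    reply = ""
    for chunk in stream:
        reply += chunk
        # In a chat UI, this is where each partial reply would be re-rendered.
    return reply

print(consume(fake_stream("Hello from a streamed response")))
```

A model that only returns complete responses can’t feed this loop, which is why non-streaming models don’t fit the character-by-character UI.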
Thanks, Mike; I think it supports streaming now. I can use o1-preview on ChatHub. Below is part of the message I received from OpenAI on 20 November:
"Streaming is supported with o1 today, and we’re working on adding more capabilities soon. These models have the same rate limits as GPT-4o: 5,000 requests per minute. To get immediately notified of updates, follow @OpenAIDevs. I can’t wait to see what you build with o1—please don’t hesitate to reply with any questions."
I will look into integrating it into the platform! @nathaniel, we should look at adding o1-preview.