AI not operating as it should compared to ChatGPT

Okay, I’ve connected my Pickaxe with my API, specifically to the assistant I set up through the Assistants API on OpenAI.
Everything starts out fine in Pickaxe and operates as it’s meant to, but instead of moving on to the third step in its instructions, it starts again from the beginning.
Don’t understand why.

Great question! I have a suspicion about what’s wrong.

In the builder, click on Advanced Options and look for a setting called Memory Buffer. Slide it all the way up. This setting controls how much of the conversation the model remembers.
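
For context, a memory buffer like this is basically just how many of the previous messages get sent back to the model on each turn. Below is a minimal sketch of that pattern against the OpenAI Chat Completions API; the buffer size, model name, and helper function are illustrative assumptions, not how Pickaxe implements it internally.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = "Follow these steps in order: 1) greet, 2) ask goals, 3) give a plan."
MEMORY_BUFFER = 10  # hypothetical: number of recent messages resent each turn

def reply(history: list[dict], user_message: str) -> str:
    """Send the system prompt plus only the most recent messages.

    If the buffer is too small, earlier turns fall out of the window and the
    model can 'forget' that steps one and two already happened, so it starts
    over from the beginning.
    """
    history.append({"role": "user", "content": user_message})
    window = history[-MEMORY_BUFFER:]  # sliding window over the conversation
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model, for illustration only
        messages=[{"role": "system", "content": SYSTEM_PROMPT}] + window,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer
```

Turning the buffer up keeps more of the earlier conversation in that window, which is why the model stops looping back to the start.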

I realised that a while after I’d posted. I’d actually taken that to mean something else… then it hit me. You’re 100% correct. Thanks.

Another question you might be able to help me with… How do I work out token distributions to suit the requirements of my GPT? lol. Thanks

I just saw that you are one of the co-founders of Pickaxe. I have a really unusual use case that I’m trying to launch (3 different GPTs that all work together), embed them on my website, and monetise them for my community. The issue I’m having, though, is getting them to operate in Pickaxe the way they do in the OpenAI Playground.
On top of that, I don’t know how to code and everything I’ve done so far has been learning as I go. lol.

You can set all the token distributions under the Advanced Options.
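
A rough way to think about the distribution: the model has a fixed context window, and everything in a request has to fit inside it: the instructions, the conversation memory, the latest input, and the tokens reserved for the reply. The numbers below are a back-of-the-envelope sketch with made-up values, not Pickaxe defaults.

```python
# Hypothetical budget for a model with a 128k-token context window.
CONTEXT_WINDOW = 128_000

budget = {
    "system_prompt": 2_000,   # your GPT's instructions
    "memory_buffer": 16_000,  # recent conversation history
    "user_input": 4_000,      # the latest message or pasted text
    "response": 4_000,        # tokens reserved for the answer
}

used = sum(budget.values())
print(f"Allocated {used:,} of {CONTEXT_WINDOW:,} tokens "
      f"({CONTEXT_WINDOW - used:,} to spare)")

# If the total exceeds the window, something has to shrink
# (usually the memory buffer or the response length).
assert used <= CONTEXT_WINDOW, "Token distribution exceeds the context window"
```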

If you’re noticing different behavior between your GPT on OpenAI and your Pickaxe, I can guarantee it’s possible to replicate the results from OpenAI, as we’re using the same underlying models. What is the difference you’re experiencing?