Switching from GPT-4 to GPT-4o

Hi, I’ve been using a single Pickaxe embedded on my paywalled Kajabi site for testing purposes. It was working fine in April 2025.

  1. I noticed today that the behaviour of the Pickaxe has degraded: it used to answer specific questions correctly, pulling info from my knowledge base, but it now fails to answer those same questions correctly.
  2. As an experiment, I switched from the GPT-4 model to the GPT-4o model and noticed that the maximum token limits increased (as expected), so I pushed some of those sliders up … and yet the Pickaxe is still getting those basic questions wrong.
  3. I notice that whether I’m using GPT-4 or GPT-4o, there is zero change in the behaviour of my Pickaxe. It gets the same questions correct and the same questions incorrect, regardless of which model I’m using.

Any thoughts welcome!

Thanks.

I’ve also adjusted the randomness slider with zero effect: nothing is different, regardless of model.

I’m happy to use any model as long as it works. I’m not wedded to 4o or 4.

@somatics can you try with Gemini to understand if the problem is with OpenAI?

Tried that. Gemini gives considerably worse answers to basic questions, i.e. zero correct answers.

Ok, it seems that I’m experiencing almost complete knowledge loss, as per:

Can you look at your Pickaxe functionality to see if it’s even calling the knowledge base?

Curious what was the outcome of this. Sounds a bit worrisome as a fellow user…

Hi @somatics and @mjhalliburton,

Check out this video on how to test for knowledge base retrieval:
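One model-agnostic way to run that test is with “canary” questions: questions whose answers exist only in your uploaded documents, so a correct reply proves the knowledge base was queried. The sketch below is a hypothetical helper for scoring such a test by hand, not part of any Pickaxe API; the canary questions, facts, and sample replies are invented examples.

```python
# Canary questions mapped to facts that appear ONLY in the knowledge
# base. If every canary fails across GPT-4, GPT-4o, and Gemini alike,
# the knowledge base is likely not being queried at all.
CANARIES = {
    "What is our refund window?": "14 days",
    "Who teaches Module 3?": "Dr. Example",
}

def kb_recall_score(replies):
    """Return the fraction of canary questions whose reply
    contains the expected knowledge-base fact."""
    hits = sum(
        1 for question, fact in CANARIES.items()
        if fact.lower() in replies.get(question, "").lower()
    )
    return hits / len(CANARIES)

# Paste the bot's actual replies here and score them:
replies = {
    "What is our refund window?": "Our refund window is 14 days.",
    "Who teaches Module 3?": "I'm not sure about that.",
}
print(kb_recall_score(replies))  # 0.5 here; 0.0 suggests no retrieval
```

A score of 0.0 that stays at 0.0 regardless of model points at retrieval (or the knowledge-base connection) rather than the model itself, which matches the symptom described above.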


Bizarrely, it just started working out of the blue. I reverted to GPT-4, then GPT-4o, then GPT-4.1 mini. Nothing changed: the Pickaxe essentially stopped working properly with no changes from me, and then started working properly again, also with no changes from me.