Has anyone here successfully replicated a multi-step custom GPT in Pickaxe with results the same as or very similar to ChatGPT? It's not working for me at all in Pickaxe, even though I'm using the exact same instructions and the exact same prompt interactions on my side.
My custom GPTs work great in ChatGPT. They follow instructions, give detailed outputs, and go step by step, including asking the user to review the content and whether they need any revisions before proceeding to the next step.
When I paste the exact custom GPT instructions into Pickaxe and then engage with the chatbot, the output is terrible. Beyond bad. It gives short, generic answers without deep diving, and it ignores the input the user provided to start the conversation. When asked if it remembers the original request, it keeps saying it is not able to recall any prior information I provided during the chat.
I have to literally start over from the beginning, then it gets stuck and gives bad answers again.
If anyone has successfully replicated GPTs, I would love to hear your experiences and advice.
It can be done, and you can get even better results with Pickaxe vs vanilla GPTs. Many users have done this knowing that you can monetize your GPTs when you port them over to Pickaxe. Essentially, you want to:
Port over the prompt (which you already did).
Tweak the settings (check the KB Config video below for details).
Add any connected actions to expand the use cases of your Pickaxe and call external APIs.
Test and tweak until you get satisfactory results.
Here are resources that can help you get the results you envision for your Pickaxe:
(this video also shows you how to tweak your Pickaxe settings to get the best outputs).
I’m having the exact same problem. Mine is a revision and exam-prep GPT for a professional qualification. It works perfectly as a CustomGPT but gives wrong answers as a Pickaxe, even though I have given it the same knowledge files.
Maddening, isn’t it? Is there any support available from Pickaxe, or is this community the support?
Hey @tcr, thank you for sharing your experience and for building such a valuable tool. Sometimes getting knowledge files and prompts just right takes a bit of tweaking, especially when switching between platforms. If you haven’t already, take a look at the helpful replies above as well.
You might want to try adjusting the Relevance Cutoff and Amount settings in the Knowledge tab. It can also help to make your prompt as clear as possible about how you want Pickaxe to use your files. One more thing to check is which back-end API or model (like OpenAI or Claude) your Pickaxe is set to use, just to make sure it matches your CustomGPT setup.
We really appreciate the idea you’re working on. Everyone here is happy to help, so if you have more questions or want to share details, just let us know.
I can relate to your struggle — replicating a true multi-step custom GPT experience outside of OpenAI’s native ChatGPT can be really hit or miss, especially with tools like Pickaxe that may not handle conversation memory or context persistence the same way.
In my experience, the biggest bottleneck is that these platforms don’t always maintain the same token window or session continuity — so instructions that rely on remembering previous steps or user edits tend to break down.
One workaround I’ve seen is splitting the workflow: instead of one monolithic prompt, break your GPT into smaller, clearly modular instructions with explicit re-prompting at each step. Also, some devs handle part of the “memory” externally through backend workflows or APIs that stitch user inputs together.
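To make that workaround concrete, here is a minimal sketch of handling "memory" outside the platform: the app stores every turn itself and re-sends the full history (system prompt plus the current step's instruction plus all prior turns) on each model call, so earlier context is never lost between steps. All names here (`ConversationState`, `STEP_PROMPTS`, the message format) are illustrative assumptions, not a real Pickaxe or vendor SDK API.

```python
# Hypothetical sketch: keep conversation memory in your own backend and
# stitch it back together on every request. The dict-with-role message
# shape mirrors the common chat-completion payload style, but no real
# API is called here.

SYSTEM_PROMPT = (
    "You are a step-by-step assistant. Complete one step, then ask the "
    "user to review before continuing."
)

# Small, modular per-step instructions instead of one monolithic prompt.
STEP_PROMPTS = [
    "Step 1: Ask the user for their topic and draft an outline.",
    "Step 2: Expand the approved outline into full content.",
    "Step 3: Revise based on the user's feedback.",
]

class ConversationState:
    """Stitches user and assistant turns together across steps."""

    def __init__(self):
        self.history = []  # list of {"role": ..., "content": ...}
        self.step = 0

    def build_request(self, user_input):
        """Record the user's turn and return the full message payload."""
        self.history.append({"role": "user", "content": user_input})
        return (
            [{"role": "system", "content": SYSTEM_PROMPT}]
            + [{"role": "system", "content": STEP_PROMPTS[self.step]}]
            + self.history
        )

    def record_reply(self, assistant_text, advance=False):
        """Store the model's answer; optionally move to the next step."""
        self.history.append({"role": "assistant", "content": assistant_text})
        if advance and self.step < len(STEP_PROMPTS) - 1:
            self.step += 1

state = ConversationState()
payload = state.build_request("My topic is exam prep for tax law.")
state.record_reply("Here is a draft outline...", advance=True)

# The next request includes the step-2 instruction *and* the earlier
# turns, so "do you remember my original request?" can be answered.
payload2 = state.build_request("Looks good, please continue.")
```

The explicit re-prompting lives in `STEP_PROMPTS`: each call re-states what the current step requires, rather than trusting the model to remember a long instruction block from turn one.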
If you’re experimenting with this seriously, I’d recommend digging deeper into how GPT context windows and system prompts actually work under the hood — we recently covered some basics on how to build your own GPT in this blog: https://www.solulab.com/build-gpt-model/