Pickaxe not following prompt

Hi there,

My pickaxes have recently stopped following the prompts as they used to.

I’ve not changed how I go about creating the prompts, but a couple have recently been behaving erratically.

Here’s an example of what I mean:

Have you tried adding your copywriting brief to a PDF and uploading it as a knowledge base, as opposed to a Google Docs link?

Try troubleshooting your main prompt with ChatGPT-4o. Check out this example. You can always take screenshots and attach them to the conversation with ChatGPT for added context.

If that doesn’t work, try adding a Code Interpreter action and use ChatGPT to help you generate a trigger prompt. The job of the Code Interpreter action would be to handle the IF-statement logic in the background (see the sketch after the example below).

How-to video:

Example of an IF-statement in a core prompt:

Here’s how it might look conceptually:

IF the user question includes "how to," provide a step-by-step guide.
IF the user asks about pricing, explain the tiers clearly and suggest the most relevant option.
IF no specific query is provided, ask the user clarifying questions to better understand their needs.
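
To make that concrete, here is a minimal sketch of what the Code Interpreter action could compute behind the scenes. The function name and keyword checks are illustrative assumptions on my part, not Pickaxe internals:

```python
# Hypothetical sketch of the routing logic a Code Interpreter action could run.
# Names and keyword checks are illustrative, not Pickaxe-specific.

def route_user_question(question: str) -> str:
    """Return an instruction for the LLM based on simple keyword checks."""
    q = question.lower().strip()

    if "how to" in q:
        return "Provide a step-by-step guide."
    if "pricing" in q or "price" in q or "cost" in q:
        return "Explain the tiers clearly and suggest the most relevant option."
    if not q:
        return "Ask the user clarifying questions to better understand their needs."

    return "Answer the question directly."


# Example: the returned instruction could then be appended to the core prompt.
print(route_user_question("How to set up my first Pickaxe?"))
# -> Provide a step-by-step guide.
```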

Notes & considerations:

  1. Use GPT-4o or GPT-4o-mini as the LLM model. Either should be enough.

  2. You mentioned you tried pushing the token lengths to the limit. Although I don’t believe that’s the culprit, here is some useful info:
    keep token lengths within controlled ranges to manage the size of inputs and outputs, the intended length of each chat session, and the overall LLM API cost to your business.
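
For reference, the token limit maps to the max output tokens setting on the underlying model call. Pickaxe handles this for you in its settings, so the snippet below is only a rough sketch of the knob itself, assuming the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY environment variable:

```python
# Minimal sketch, not Pickaxe-specific: shows how an output-token cap
# bounds the length (and cost) of each reply at the API level.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How do your pricing tiers compare?"}],
    max_tokens=300,  # caps the size of the generated reply
)
print(response.choices[0].message.content)
```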

Let me know!

~Ned