Prompting best practices? Template example?

Hey,

I’d like some guidance on best practices for prompting with Pickaxe, as I get the impression it’s different from prompting for ChatGPT (or is it?)

To start with, should I structure the Pickaxe prompt in a certain way, for example with prompt headings like:

Persona, Greeting, Questions, Knowledge content, Rules, Multilanguage capabilities, Privacy, etc.

Does it matter what order I put things in?

Also, does it matter how I format things? Should I use markup, for example: bullet points, bold, italics, separators, etc.? What effect, if any, do these choices have? Is there a benefit or drawback to how things are formatted and ordered?

How detailed should my prompt be?
How long should it be?
What are the effects of detailed versus concise, and longer versus shorter, prompts? Is there an ideal word length for a prompt, and what are the implications of going beyond it?

I could really use some best practices/guidelines for creating prompts that can handle a wide range of questions and behave consistently (but with personality).

If I use the prompt injection feature, do I still need to include that instruction in the main prompt?

Also how should I format the syntax of the prompt?

For example, I could say to my pickaxe:

Ask Q1
Ask Q2
Ask Q3

Based on the answers, make a plan.

Or I guess I could say:

Tell me what you want to work on, A or B?

Conditional Logic:

  • If “A” is selected: “Hey, {user_name}. Give instruction”
  • If “B” is selected: “Give other instruction”

Sorry for the rambling question… Hoping I can get some clarification. Ideally, we’d have some kind of universal template that includes best practices for formatting, structure, and prompting syntax, with a few examples of content at its best, to take some of the guesswork out of things.

Cheers
Terry


Every LLM has a different prompting scheme. For example, ChatGPT works well with \n (newlines) as a delimiter, while Claude uses XML-style tags to indicate prompt structure.
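
To make that concrete, here’s roughly the same instruction block written both ways; the persona and rules below are placeholder content I made up, not anything official:

```python
# Illustrative sketch only: one system prompt, two delimiter styles.

# Newline-delimited sections, a style that tends to work well with ChatGPT:
gpt_style = (
    "ROLE:\n"
    "You are a friendly fitness coach.\n"
    "\n"
    "RULES:\n"
    "- Ask one question at a time.\n"
    "- Keep answers under 100 words.\n"
)

# XML-style tags, the structure Anthropic recommends for Claude:
claude_style = (
    "<role>You are a friendly fitness coach.</role>\n"
    "<rules>\n"
    "Ask one question at a time.\n"
    "Keep answers under 100 words.\n"
    "</rules>\n"
)
```

The content is identical; only the delimiters signalling the structure change.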

The same prompt has to be tuned differently for each model to get the best performance. For example, you can be more abstract with GPT-4o but need to be more explicit with GPT-4o mini.

I find keeping system instructions to a moderate length, paired with good training dialogue examples, to be the best approach.
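
By “training dialogue examples” I mean short sample exchanges that show the bot the tone and behaviour you want. Something like this (the wording is invented for illustration):

```python
# A hypothetical training exchange; content made up for illustration.
training_example = (
    "User: I want to get fitter but I only have 20 minutes a day.\n"
    "Assistant: 20 minutes is plenty to start with! Quick question first: "
    "would you rather do A) bodyweight workouts at home, or B) short runs outside?"
)
```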

Regardless, if you write long prompts, a common problem with LLMs is that they prioritize the start and end portions of the prompt. For your use case (choose A or B), I feel the form template works better than the chat template, because it reduces the number of characters in the prompt, leaving more space for examples.

I would highly recommend going through OpenAI’s and Anthropic’s prompt cookbooks, along with Pickaxe’s blog post on semantic prompting. You can also check learnprompting.org. Just google them and you’ll find them.


Hi zerodot! What would you consider a moderate length of prompt?

Thanks for that, Zerodot,

So what you’re saying is that it’s model-dependent. Does that mean Pickaxe doesn’t affect the prompt, but only feeds its content to ChatGPT, Claude, etc. for processing and then receives the output back?

So I can simply concentrate on learning best practices for each model and I’ll be good?

Regarding the dialogue examples, should these be in the prompt or in the separate training fields below? Is there a difference?

I’m also curious what ‘moderate’ means in terms of the system instructions (or the entire prompt?).

Thanks again for taking the time to reply and those links, very much appreciated!

Cheers.

That is correct. Learning about prompting LLMs is what you need.

If you use ChatGPT or any other LLM directly, you generally include the training dialogue in the prompt itself (this is actually called few-shot prompting). Pickaxe nudges the user more clearly by separating it out into its own fields in the structure.
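
For instance, if you were calling the model directly instead of through Pickaxe, the few-shot structure might look roughly like this (a sketch using the OpenAI Python client; the model name and dialogue content are placeholders):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Few-shot prompting: the "training dialogue" is passed as prior
# user/assistant turns ahead of the real user message.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You are a friendly fitness coach. Ask one question at a time."},
        # The example exchange (the few-shot "training dialogue"):
        {"role": "user",
         "content": "I want to get fitter but only have 20 minutes a day."},
        {"role": "assistant",
         "content": "20 minutes is plenty! Do you prefer A) home workouts or B) short runs?"},
        # The real conversation starts here:
        {"role": "user", "content": "I'd like to build some muscle at home."},
    ],
)
print(response.choices[0].message.content)
```

Pickaxe’s separate training fields are effectively filling in those example turns for you.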

Generally, input prompts are capped at roughly 7,000 characters. A moderate prompt would be around 2,000 to 3,000 characters, followed by training examples (the combined total has to stay below 7,000 characters). Ideally, the shorter the prompt, the better the LLM will be at following it (at the cost of detail). This is where human thinking and judgment become very important.
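
If you want a quick sanity check before pasting everything in, something like this works; note the 7,000-character ceiling is just the rough figure above, not an official constant:

```python
# Rough length check against the ~7,000-character budget discussed above.
MAX_CHARS = 7000  # assumed ceiling from this thread, not a documented limit

system_prompt = "..."      # your 2,000-3,000 character instructions
training_examples = "..."  # your few-shot dialogue examples

total = len(system_prompt) + len(training_examples)
print(f"{total} / {MAX_CHARS} characters used")
assert total < MAX_CHARS, "Trim the instructions or the examples"
```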

Thank you!!!

That will help me a lot. I had a feeling mine were running too long; this confirms it.

It also occurs to me that the prompt chaining actions for GPT and Pickaxe are going to come in handy! :fire::fire: