You can also instruct your Pickaxes to produce outputs in Markdown or HTML for consistent formatting. Additionally, consider using the Model Reminder section to reinforce formatting rules and keep your outputs clear and well-structured.
This is also a problem I often face, but I am more inclined to think it is an issue with Pickaxe's Markdown rendering. You can see it also happens with code blocks and math blocks, and regular Markdown renderers usually handle these fine.
I have reported this issue to our engineering team for further investigation. In the meantime, I reviewed the role prompt from your screenshot, rewrote it, and confirmed that the updated version works as expected.
It is unlikely that the problem lies in the Markdown renderer itself, as we use the same technology found across other platforms. The key difference with Pickaxe is that the AI model is responsible for generating and formatting the Markdown. Since AI models can occasionally produce formatting errors, it is important to provide clear and detailed instructions. Using the Model Reminder section can help reinforce formatting rules. If your chatbot outputs properly structured Markdown as plain text, the renderer should display it correctly.
Hey @leoreche I ran into similar issues for a client build a month ago.
Here is a two-part solution to enforce consistent formatting for both numbered lists and mathematical equations.
Part 1: Fix Inconsistent List Numbering
The root cause of a numbered list restarting (e.g., 1, 2, 3, 1, 2) is typically the AI model losing track of the sequence across multiple generation steps. We can enforce this with a direct rule in your system prompt.
The Solution: Add a Formatting Mandate to Your System Prompt
Add the following XML block inside the <Rules> section of your main system prompt. This creates a critical reminder for the AI to maintain list integrity in every single response.
<rule>
**Numbered List Integrity:** When generating a numbered list, you MUST ensure the numbering is sequential and continuous. Never restart the numbering mid-list. Before outputting your final response, double-check all numbered lists to verify their sequence is correct.
</rule>
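Prompt rules like this reduce drift but don't guarantee it. If you already post-process the bot's output (e.g., via a webhook or a wrapper script), a small validator can flag broken lists before they reach the user. A minimal sketch, assuming plain Markdown text as input (the function name and regex are my own, not a Pickaxe API):

```python
import re

def check_ordered_lists(markdown: str) -> list[str]:
    """Flag ordered-list items whose numbering restarts or skips.

    Scans line by line; any non-blank, non-list line ends the
    current list, so a fresh "1." after prose is not an error.
    """
    problems = []
    expected = None  # the number the next list item should carry
    for lineno, line in enumerate(markdown.splitlines(), start=1):
        m = re.match(r"^\s*(\d+)\.\s", line)
        if m:
            n = int(m.group(1))
            if expected is not None and n != expected:
                problems.append(f"line {lineno}: expected {expected}, got {n}")
            expected = n + 1
        elif line.strip() == "":
            continue  # blank lines may separate items in a loose list
        else:
            expected = None  # non-list content ends the list
    return problems
```

A correct list returns an empty report; a mid-list restart like `1, 2, 1` is flagged with the line number, so you can decide whether to retry the generation or patch the numbering.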
As a complementary step, a more advanced model (like GPT-4 Turbo or Claude 3 Opus) is better at following these kinds of complex instructions over long conversations.
Part 2: Fix Unrendered LaTeX Math & Code Blocks
To solve the issue of raw LaTeX or code appearing in the output, the correct architecture is to use the Code Interpreter action. This forces the AI to process the code/math and render it correctly before delivering the final text to the user.
Step 1: Connect the Code Interpreter Action
If you haven’t already, go to the Actions tab in your Pickaxe editor, search for the “Code Interpreter” action, and connect it.
Step 2: Set the Trigger Prompt for the Code Interpreter Action
In the “Trigger prompt” field for the Code Interpreter action, paste the following instructions. This prompt is engineered to be very specific about when and how the action should be used.
Trigger this action when your planned response contains mathematical formulas, LaTeX syntax (e.g., text enclosed in $$...$$ or $...$), or any block of code.
Your primary function is to act as a self-contained Python environment. You must:
1. **Process Calculations:** Execute any mathematical computations. Autonomously import any necessary Python libraries (e.g., NumPy, SciPy) to handle the request.
2. **Render Text:** Convert all raw LaTeX or code into a properly formatted response that the Pickaxe markdown renderer can display correctly.
3. **Generate Visuals:** If the user's request requires a visual output like a chart or graph, use libraries such as Matplotlib or Seaborn to generate the visual and output it as a complete PNG image file.
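To illustrate step 3, the kind of script the Code Interpreter would run for a chart request looks roughly like this. This is a sketch, not Pickaxe's internal implementation; the data and filename are illustrative:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend: render straight to a file
import matplotlib.pyplot as plt
import numpy as np

# Illustrative data; in practice this comes from the user's request.
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)

fig, ax = plt.subplots(figsize=(6, 4))
ax.plot(x, y, label="sin(x)")
ax.set_xlabel("x")
ax.set_ylabel("sin(x)")
ax.legend()
fig.savefig("chart.png", dpi=150)  # output as a complete PNG image file
```

The key point is the last line: the action delivers a finished PNG rather than raw plotting code, which is exactly the behavior the trigger prompt asks for.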
Step 3: Synchronize Your Main System Prompt
This is the most critical step. You must inform your primary agent that it now has this tool and is required to use it. Add the following rule to the <Rules> section of your main system prompt.
<rule>
**Code & Mathematical Rendering Mandate:** When your response requires mathematical equations or code blocks, you MUST use the appropriate format (e.g., LaTeX, Python) and call the connected Code Interpreter action to render it. You are forbidden from outputting raw, unrendered LaTeX or code directly in your response.
</rule>
By implementing these structured rules, you are moving from simple prompting to more robust prompt engineering, which should significantly reduce the formatting inconsistencies you are experiencing.
Let me know how it goes!
Here’s a how-to tutorial video I made on generating charts that might be relevant to your use case:
Thank you so much for the prompting suggestions. I play around with my prompts extensively, and I always like to try new ideas. I have a few things to report back:
Prompting only in the system prompt is very fragile (even using XML tags as suggested by @Ned.Malki and the rules suggested by @danny_support): they barely had any effect on the output, which still rendered incorrectly most of the time.
When prompting via the Prompt Injector, the effect was more durable. Here are the best results I got so far:
This is still not a permanent solution, as the rendering remains fragile and fails about a third of the time, especially in long conversations or complicated tasks.
With the exact same prompt (and on the same conversation):
All things considered, prompting doesn’t seem like the best solution. Working with such big prompts is inefficient, more expensive for Pickaxe, and causes the bot to divert part of its attention and reasoning effort to formatting.
Not to mention that it destroys the plug-and-play experience for inexperienced users on GPT-5.
Do you really think this is not a problem that can be solved by software somehow?
As you can see in the content below (copied from 3.2), the numbering in the bot’s output is correct and continuous. It does feel to me like something a better renderer could handle.
Hey @leoreche appreciate the detailed repros. I agree this isn’t just “use XML and pray.” What you’re seeing is a mix of model drift across long turns and Markdown list merge rules. Two fixes in parallel usually make it stable:
1) Structure the list so the renderer never restarts it
Most Markdown engines continue an ordered list only if the items stay contiguous and sub-bullets are indented correctly. Small things break continuity:
- A blank line between items.
- Inconsistent indentation on sub-bullets.
- Mixing `*` bullets without 2–3 spaces of indent.
Minimal pattern that holds up:
1. First item
   - Sub point
2. Second item
   - Sub point
3. Third item
No blank lines between items. Sub bullets are indented under the number.
If you must break the list into sections or need countdowns, use HTML so the numbering is explicit:
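For example, something like this (the list content is hypothetical; `start` and `reversed` are standard HTML attributes of `<ol>`):

```html
<ol start="4">
  <li>Fourth item continues the numbering explicitly</li>
  <li>Fifth item</li>
</ol>

<ol reversed>
  <li>Countdown item three</li>
  <li>Countdown item two</li>
  <li>Countdown item one</li>
</ol>
```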
Pickaxe can render HTML, so this avoids auto-renumbering.
P.S. Treat it as a standards-based fallback that should work given HTML rendering; best practice is to double-check with a tiny paste test.
2) Lock formatting behavior in the prompt where it belongs
Instead of telling the model “you are GPT-5,” give it a role and add a specific formatting rule. Example:
<Role>
You are a clear, concise technical assistant.
</Role>
<Rules>
<rule>For ordered lists, produce one contiguous list with no blank lines. Indent sub-bullets under the number. Before finalizing, verify numbering is continuous and correct.</rule>
</Rules>
If you prefer to keep your system prompt lighter, put the exact rule above into Model Reminder so it persists. This is the same idea Danny mentioned, but with stricter, testable constraints.
Bonus: math and code rendering
For math and code blocks, route through Code Interpreter so raw LaTeX or code isn’t dumped as plaintext. I outlined that workflow earlier in this thread if you want to reuse it.
If you try the contiguous Markdown pattern or the <ol start=… reversed> fallback and it still flips 1 out of 3 times, ping me here with the raw model output that preceded the render. That will let us separate “model produced a broken list” from “renderer merged it wrong,” and I can help you tighten the guardrails.