It seems like my Pickaxes are running slower lately. I use Claude 3.5 often, and I have about 10-15 knowledge docs on each Pickaxe. I also think my system prompts are getting kind of long, around 10,000 characters.
Any advice? Best practices? Resources to learn more about streamlining?
Hey @dougfresh, this is a common challenge as AI tools become more complex, and you’re asking the right questions.
Here are a few best practices that should help you optimize your Pickaxes for better performance and reliability.
1. On LLM Choice (GPT vs. Claude vs. Grok)
You mentioned using Claude 3.5 Sonnet. While it’s a powerful model, especially for tasks requiring nuanced prose, you might want to experiment with others if you’re experiencing performance lags.
- Try the Default: The default model in Pickaxe, GPT-4.1 mini, is generally an excellent choice as it’s designed to be smart, fast, and cost-effective. It’s a great baseline for most use cases.
- Personal Observation: In my agency’s work, we’ve sometimes observed that OpenAI’s models (like the GPT series) and Grok’s models can offer more consistent uptime compared to others. This can vary, but it’s worth testing to see if a simple model swap improves your experience.
2. Streamline Your Knowledge Base with a Multi-Agent Approach
This is likely the biggest area for improvement. Instead of loading 10-15 knowledge documents onto a single Pickaxe, you can create a more efficient system by delegating tasks to specialized “sub-agents.”
The core idea is to build a “manager” agent that calls on other, more focused agents to do specific jobs.
Here’s how to do it:
- Create Specialized “Worker” Agents: Instead of one Pickaxe with all your documents, create multiple, simpler agents. For instance:
  - Law Agent: This Pickaxe only has your law-related documents in its Knowledge Base.
  - Marketing Agent: This one only has your marketing documents.

  Each agent is now an expert at one thing, and its knowledge base is smaller and more relevant, which helps prevent the model from getting confused or “diluting” its responses.
- Build a “Manager” Agent: This will be your main chatbot, the one you actually interact with. Its prompt won’t contain detailed knowledge; instead, it will instruct the agent on how to delegate tasks.
- Connect Them with Actions: In your “Manager” agent, go to the Actions tab and add your other Pickaxes (the “worker” agents) as connected tools. Now your Manager can call the Law Agent when it gets a legal question and the Marketing Agent for a marketing question.
This modular approach makes your system faster, more accurate, and much easier to debug.
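To make the delegation idea concrete, here’s a tiny Python sketch of the pattern. It’s purely illustrative: Pickaxe wires all of this up for you through the Actions tab with no code, and the agent functions and keyword routing below are hypothetical stand-ins.

```python
# Conceptual sketch only -- in Pickaxe you set this up in the Actions tab.
# The function names and keyword routing here are hypothetical.

def law_agent(question: str) -> str:
    # Stands in for a Pickaxe whose Knowledge Base holds only your legal documents.
    return f"[Law Agent] answering: {question}"

def marketing_agent(question: str) -> str:
    # Stands in for a Pickaxe holding only your marketing documents.
    return f"[Marketing Agent] answering: {question}"

# The "manager" holds no knowledge itself; it only decides which specialist to call.
WORKERS = {
    "law": law_agent,
    "marketing": marketing_agent,
}

def manager_agent(question: str) -> str:
    """Delegate the question to the right specialist instead of one giant knowledge base."""
    legal_terms = ("contract", "liability", "compliance", "law")
    topic = "law" if any(term in question.lower() for term in legal_terms) else "marketing"
    return WORKERS[topic](question)

print(manager_agent("Is this clause enforceable under contract law?"))
print(manager_agent("Draft a tagline for our spring campaign."))
```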
3. Optimize Your System Prompt
You’re right, a 10,000-character prompt is quite long and can definitely slow things down. The best prompts are clear and explicit without being overly verbose.
Here are two tips to shorten and strengthen your prompt:
- Distill the Core Instructions: Copy your entire prompt and paste it into a powerful LLM (like GPT-4o or Claude 3 Opus) with a simple instruction: “Refine and shorten this system prompt for an AI agent. Retain all key instructions but remove redundancies and make it as clear and concise as possible.” This is a quick way to get a much tighter version; there’s a scripted version of this idea sketched out below.
- Add Structure with Markdown: Structure your prompt with clear headers. This helps the AI understand its core function, rules, and constraints more effectively. A simple, powerful structure is:
# Role
State the agent's primary purpose in one or two sentences.
## Rules
Use a bulleted or numbered list for key instructions, constraints, and operational steps. This is where you'd tell a "Manager" agent when to call a specific sub-agent/action.
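Filled in for the “Manager” agent described above, that structure might look something like this (the agent names are just the ones from the earlier example):

```
# Role
You are a coordinator. You never answer legal or marketing questions yourself; you route them to the right specialist.

## Rules
- If the question involves contracts, compliance, or other legal topics, call the Law Agent action.
- If the question involves campaigns, copy, or branding, call the Marketing Agent action.
- If neither applies, ask the user a clarifying question before doing anything else.
```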
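And if you’d rather script the distillation step from the first tip instead of pasting into a chat window, here’s a minimal sketch using the OpenAI Python SDK. It assumes the openai package is installed and an OPENAI_API_KEY is set in your environment; the file name and model choice are just placeholders.

```python
# Minimal sketch: ask a strong model to compress a long system prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("system_prompt.txt") as f:  # your current ~10,000-character prompt
    long_prompt = f.read()

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": (
                "Refine and shorten this system prompt for an AI agent. "
                "Retain all key instructions but remove redundancies and make it "
                "as clear and concise as possible.\n\n" + long_prompt
            ),
        },
    ],
)

# Always review the distilled version before pasting it back into your Pickaxe.
print(response.choices[0].message.content)
```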
Hope this helps!
Thanks! Great tips. I will try some of these.
My Studio is actually multi-agent, but I got nervous and gave most agents access to the same knowledge files. I will trust the process, try scaling that back, and set up clear Actions to call the specialized agents.
Thanks again