I’m building an AI-powered educational assistant using Pickaxe and I would like to optimize costs by dynamically selecting the AI model based on question complexity.
My goal is to:
Use GPT-3.5 Turbo for simple questions.
Use GPT-4 Turbo for more complex questions.
I’d like to know:
Where in Pickaxe should I insert the logic to classify a question as “easy” or “difficult” before choosing the model?
Is there a way to integrate multiple Pickaxes (one for GPT-3.5 and one for GPT-4) and route questions dynamically?
Can this be done directly within Pickaxe’s workflow, or do I need an external function (e.g., API call before sending the query to Pickaxe)?
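For reference, the routing described above can be sketched as an external pre-processing function: classify the question first, then pick the model (or the Pickaxe) to send it to. This is only an illustration under assumptions; the keyword/length heuristic and the model names are placeholders, not Pickaxe APIs, and in practice the classifier could itself be a cheap LLM call:

```python
# Hypothetical pre-routing step: label a question "easy" or "difficult"
# before deciding which model (or which Pickaxe) should answer it.
# The heuristic (length + keyword cues) is purely illustrative.

DIFFICULT_CUES = (
    "why", "explain", "compare", "derive", "prove", "analyze", "step by step",
)

def classify_question(question: str) -> str:
    """Return 'difficult' if the question looks complex, else 'easy'."""
    q = question.lower()
    long_enough = len(q.split()) > 20          # long questions lean complex
    has_cue = any(cue in q for cue in DIFFICULT_CUES)
    return "difficult" if (long_enough or has_cue) else "easy"

def pick_model(question: str) -> str:
    """Map the difficulty label to a model name (names are assumptions)."""
    if classify_question(question) == "difficult":
        return "gpt-4-turbo"
    return "gpt-3.5-turbo"

if __name__ == "__main__":
    print(pick_model("What is 2 + 2?"))                        # gpt-3.5-turbo
    print(pick_model("Explain why transformers scale well."))  # gpt-4-turbo
```

The same `pick_model` output could then drive which of two Pickaxes receives the query (e.g. via a Make/Zapier branch or a small proxy service), which is the external-function approach asked about above.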
Hi @mcagigas, that’s a great question and idea. Although it can be done in drag-and-drop AI-agent builder apps, I don’t think it is possible natively within Pickaxe. @admin_mike can confirm.
@mcagigas the optimal solution depends on the specific requirements of your use case, including the need for consistent performance (with Make, you might need an expert on a monthly retainer), your budget, and the nature of the tasks.
A quick and elegant solution might be to use a single very intelligent model natively within Pickaxe that can handle both simple and complex tasks, without having to build (or hire someone to build) a complicated Make/Zapier workflow that could be overkill for your project. OpenAI’s o3-mini might be a decent middle ground for you.
Suggestion for next steps:
A) Try using o3-mini natively in Pickaxe to see if it does the trick.
B) Tweak your Knowledgebase, which is essential for a robust and highly functional Pickaxe, then test and iterate until you get satisfactory results.
C) Design your core prompt to recall specific information of varying complexity from your KB. Think about when and why each piece should be retrieved.
Here is a video tutorial I made about tweaking your Pickaxe KB: