Error handling for LLMs

I would like to suggest a way to automatically switch to another LLM when an error occurs. For example, we know that Claude occasionally experiences “overload” issues. When this happens, I would like the same prompt to be sent to GPT-4 automatically, so that my user is never left without a response. Thank you.

You may be able to create two identical Pickaxes, with the only difference being the LLM used (e.g., one using Claude and the other using GPT-4). Then, set up an action trigger to link them. For instance, if Claude encounters an “overload” issue or returns a fallback response like “I Don’t Know,” the action trigger can automatically forward the same prompt to the second Pickaxe using GPT-4. This ensures the user always receives a response seamlessly, even if one LLM fails.
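For anyone wiring up this kind of fallback outside of Pickaxe, here is a minimal sketch of the same pattern using the anthropic and openai Python SDKs directly. The model names, the error class caught, and the prompt are illustrative assumptions, not a confirmed Pickaxe mechanism:

```python
# Sketch: try Claude first; on a server-side error, resend the same
# prompt to a backup OpenAI model. Model names are assumptions.
import anthropic
from openai import OpenAI

claude = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
gpt = OpenAI()                  # reads OPENAI_API_KEY from the env

def ask_with_fallback(prompt: str) -> str:
    try:
        msg = claude.messages.create(
            model="claude-3-5-sonnet-latest",  # assumed primary model
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text
    except anthropic.APIStatusError:
        # Covers "overloaded" and other non-success responses;
        # fall through to the backup model with the same prompt.
        resp = gpt.chat.completions.create(
            model="gpt-4o",                    # assumed backup model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

print(ask_with_fallback("Summarize the plot of Hamlet in two sentences."))
```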

I’m not sure it would work exactly as described, though, and reliably detecting the “I don’t know” response to use as a trigger can be a bit difficult.
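For what it’s worth, one crude way to approximate that trigger is a string heuristic over the reply. The phrase list below is a guess and would need tuning to your Pickaxe’s actual fallback text:

```python
# Treat a short reply matching a refusal phrase as a failure to re-route.
import re

REFUSAL_PATTERNS = [
    r"\bi don'?t know\b",
    r"\bi'?m not sure\b",
    r"\bno information\b",
]

def looks_like_refusal(reply: str) -> bool:
    text = reply.strip().lower()
    # Only short replies count; a long answer that merely mentions
    # "I don't know" mid-explanation is probably a real answer.
    if len(text) > 200:
        return False
    return any(re.search(p, text) for p in REFUSAL_PATTERNS)

assert looks_like_refusal("I don't know.")
assert not looks_like_refusal("Paris is the capital of France.")
```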

Hi @exosconsultoria, I get what you’re saying. That would be a nice feature to have. I don’t know whether it will be included in this month’s upcoming update, but in the meantime I suggest sticking with GPT-4o. In my experience, it’s currently the most balanced model for most Pickaxe projects.

It would indeed be a good feature, but there are a couple of hiccups. A big one is context windows: they differ from model to model, so you may end up switching to a backup model that can’t actually run the request because the prompt exceeds its token limit. One workaround would be to always use the model with the largest context window as the backup.
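As a rough sketch of that workaround, you could estimate the prompt’s token count and skip any backup whose window it exceeds. The window sizes and the chars-per-token estimate here are assumptions; use a real tokenizer (e.g. tiktoken) for anything serious:

```python
# Pick the backup model with the largest context window that still fits.
BACKUPS = [
    ("gpt-4o", 128_000),  # assumed window sizes
    ("gpt-4", 8_192),
]

def estimate_tokens(prompt: str) -> int:
    return len(prompt) // 4 + 1  # crude heuristic: ~4 chars per token

def pick_backup(prompt: str, reserved_for_output: int = 1024) -> str | None:
    needed = estimate_tokens(prompt) + reserved_for_output
    # Prefer the largest window so the fallback never fails on length.
    for model, window in sorted(BACKUPS, key=lambda m: -m[1]):
        if needed <= window:
            return model
    return None  # prompt too large for every backup
```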