Hi Pickaxe Community,
I’m working on developing a Pickaxe studio focused on supporting mental health. The platform will allow individuals to share their feelings and experiences, and I aim to make it completely free for users.
I’m exploring the possibility of integrating an open-source LLM (Large Language Model) to power this tool. Can you please guide me on:
- Whether it’s possible to use an open-source LLM (like LLaMA, Falcon, or others) instead of the LLMs currently listed in the dropdown options in Pickaxe?
- If open-source integration is feasible, how can I connect such a model with the Pickaxe platform while ensuring smooth functioning?
- Any recommendations for free or low-cost open-source LLMs that would work well for this purpose?
Your insights and suggestions would be greatly appreciated!
Thank you,
Yash
Hi @yashreddy, the Llama model is free to use. Just switch from the default LLM over to Llama to start testing.
Alternatively, you can create a custom Make.com webhook, then build a workflow with OpenRouter and use whichever LLM you want!
Here are some threads on using OpenRouter to connect other LLMs:
More info on the Llama model provided by Pickaxe in this thread: What are the specs on the Llama 3 Model option?
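To make the webhook route more concrete: inside Make.com this is all configured visually, but behind the scenes the scenario just forwards the user's message to OpenRouter's OpenAI-compatible chat completions endpoint. Here is a minimal Python sketch of that request, assuming you have an OpenRouter API key; the model slug and system prompt are illustrative examples, not anything Pickaxe or Make.com prescribes.

```python
import json
import urllib.request

# OpenRouter exposes an OpenAI-compatible chat completions endpoint.
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_openrouter_payload(model: str, user_message: str) -> dict:
    """Build the JSON body OpenRouter expects (OpenAI-style chat format)."""
    return {
        "model": model,  # e.g. "meta-llama/llama-3-8b-instruct" (example slug)
        "messages": [
            # Hypothetical system prompt for a supportive-listening bot.
            {"role": "system", "content": "You are a supportive listener."},
            {"role": "user", "content": user_message},
        ],
    }


def call_openrouter(api_key: str, payload: dict) -> dict:
    """POST the payload to OpenRouter and return the parsed JSON response."""
    req = urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

In a Make.com scenario, the webhook module receives the message from Pickaxe, an HTTP module performs the equivalent of `call_openrouter`, and a response module passes the model's reply back.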
Hi @ned_rvth
I did change it with an existing Pickaxe and tried to publish it. Then I got a message, “Your pickaxe exceeds the model’s maximum token context limit”, in red at the top.
Then I tried with a new Pickaxe (the image above is the reference). Even though I hadn’t given it any context, it had already exceeded the limit.
What can we do now?
As for the alternative method you’ve suggested: could we get any video references on how to do it? I’m pretty new to all of this.
We currently offer one open-source model, Llama. While Llama was developed by Meta, it is a truly open-source model that does not touch Meta’s infrastructure at all. It’s the only open-source model we provide, and we don’t have immediate plans to add more.
As @ned_rvth expertly points out, you can connect to pretty much any model through a Make.com integration.
Hi @yashreddy, you have to configure the token lengths under the ‘Configure’ tab!
Slide the toggles to the left to reduce the values until the context counter in the bottom right corner is within the limit (equal to or less than 6,048 tokens).
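The arithmetic behind that counter can be sketched in a few lines of Python. This is a rough illustration only: the ~4-characters-per-token heuristic is a common approximation for English text, and the helper names are mine, not Pickaxe's. Pickaxe's actual tokenizer may count differently, so leave yourself some headroom below the limit.

```python
# Rough token-budget check for a prompt configuration.
# Assumption: ~4 characters per token (a common English-text heuristic);
# the platform's real tokenizer may count differently.

MAX_CONTEXT_TOKENS = 6048  # the limit mentioned for the Llama model option


def estimate_tokens(text: str) -> int:
    """Very rough estimate: about one token per 4 characters."""
    return max(1, len(text) // 4)


def fits_in_context(*sections: str, limit: int = MAX_CONTEXT_TOKENS) -> bool:
    """Check whether the combined prompt sections (role, directions,
    knowledge, etc.) stay within the model's context limit."""
    total = sum(estimate_tokens(s) for s in sections)
    return total <= limit
```

If `fits_in_context` returns `False`, that corresponds to the red warning in the editor: shorten one or more sections (or lower the token toggles) until the total fits.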
Things to consider when choosing an LLM for your Pickaxe:
- What is the intended use case? (customer service chatbot, course companion/learning accelerator, etc.)
- Each LLM’s context window varies. Pickaxe conveniently added brief explanations of each LLM’s top quality.
- If you want your Pickaxe to comprehend images, go with GPT-4o, since it’s vision-enabled.
- For customer service chatbots, GPT-4o-mini is enough for most businesses.
- If you want your Pickaxe to process code, you can use Claude 3.5 Sonnet.
- If you plan on building a creative (yet terse) Pickaxe, you can test out Llama 3.
If you have a complex project that you want to launch to market, I can help you build and ship quickly! Send me a DM with details whenever you’re ready.
Thank you very much for the detailed response.
Appreciate it. Let me try this out.