Can we now, or will we be able to, use the DeepSeek API? The model is widely reviewed as very capable and inexpensive. Thanks.
We do not currently provide the famed Chinese AI model DeepSeek on our platform. We offer models from OpenAI, Anthropic, and the French LLM provider Mistral. We also recently added an open-source Llama model from Meta which is, shall we say, very open-minded.
We have no immediate plans to add DeepSeek, though we have nothing against it. I suspect the next model we add will be Google Gemini.
All that being said, you can probably connect DeepSeek via an Action.
Thanks, Mike. The main reason I’m interested in DeepSeek is that it is astonishingly inexpensive and powerful. I’ll poke into adding it via an Action.
John
You can set up a Make action that calls a webhook, which then runs an OpenRouter module with DeepSeek.
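If it helps, the HTTP call the scenario ends up making looks roughly like the sketch below. OpenRouter exposes an OpenAI-compatible endpoint; this is just an illustrative example (the deepseek/deepseek-chat slug and the prompt are placeholders), not a drop-in Make module:

```python
# Rough sketch of the OpenRouter call a Make.com scenario would make.
# Requires: pip install requests. Set OPENROUTER_API_KEY in your environment.
import os
import requests

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def ask_deepseek(prompt: str) -> str:
    """Forward a prompt to DeepSeek via OpenRouter and return the reply text."""
    response = requests.post(
        OPENROUTER_URL,
        headers={
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
        json={
            "model": "deepseek/deepseek-chat",  # example DeepSeek slug on OpenRouter
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=60,
    )
    response.raise_for_status()
    # OpenRouter returns an OpenAI-style chat completion payload.
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask_deepseek("Say hello in one sentence."))
```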
Regards
Great to know, ihmunro! Thanks for taking the time to reply.
John
Would really appreciate it if you share any code or Action you make. I’d also love to use DeepSeek instead of Sonnet haha
Will do, although with o3-mini coming out soon, we may have a really capable, inexpensive OpenAI model to use (assuming Pickaxe supports it).
Just create an Action pointing to a Make.com scenario where you can call out to OpenRouter with DeepSeek.
That’s a great idea. Have you been using DeepSeek via OpenRouter?
Yes - I am now swapping out my ChatGPT modules for OpenRouter.
If you’re up for it, please share your code etc.
Wow! Gemini sounds fantastic! We are using it a lot internally due to its context window length. It would be a great addition!
Nice!
Agreed! I tried out aistudio.google.com recently and was blown away by how far they have come. They’re serious competitors right now
Google is really catching up. They move more slowly than the other AI companies due to their size, but we’re starting to see the dividends from their DeepMind lab, large datasets (Search + YouTube), and giant GPU hoard.
We will need to integrate Gemini into the platform sooner rather than later!
A request: the real utility of Gemini lies in its context window and multimodality. Even if it’s integrated, if file uploads are not fed into Gemini’s context window (and are sent to a vector DB instead), the utility would be lost. Similarly, if one can’t send it audio/video files, it pretty much becomes a GPT-4o mini.
At least for user-uploaded files (not the knowledge base files), could there be a way to send audio/video and text/PDF files directly to Gemini along with the prompt, so one can benefit from Gemini’s context window and multimodal functions?
Yes, there is actually a way to do this @zerodot. Let me explain.
The way end-user uploads function actually allows this.
Here’s a rundown:
- The end-user upload process always looks at the maximum input length setting of a Pickaxe. This can be 1,000 tokens or 100,000 tokens.
- Then it looks at the size of the document. This could be 1,000 tokens, 30,000 tokens, or 3,000,000 tokens.
- If the document fits within the maximum input length, the system dumps the entire document contents into the conversation context, with no vector embeddings. If it does not fit, it’s turned into vector embeddings.
The takeaway for Pickaxe users is that you can select which process you prefer for your use case by increasing the size of the maximum input length.
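To make that decision concrete, here’s a rough sketch in Python. The function names and the 4-characters-per-token estimate are mine for illustration, not the actual Pickaxe internals:

```python
# Illustrative sketch of the upload-routing decision described above.
# All names here are hypothetical, not actual Pickaxe internals.

def count_tokens(text: str) -> int:
    # Crude approximation (~4 characters per token); a real system
    # would use the model's tokenizer instead.
    return len(text) // 4

def route_user_upload(document_text: str, max_input_tokens: int) -> str:
    """Decide whether an end-user upload goes into context or a vector DB."""
    if count_tokens(document_text) <= max_input_tokens:
        # Fits: dump the entire document into the conversation context.
        return "context"
    # Too large: chunk and store as vector embeddings for retrieval.
    return "vector_db"

# Example: a ~30,000-token document with a 100,000-token max input length
# goes straight into context; lower the setting and it flips to embeddings.
print(route_user_upload("word " * 24000, max_input_tokens=100_000))  # "context"
```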
Oh great! Didn’t realize that. Thank you!