Are the Pickaxe generator and Semantic Forge already incorporated in the builder?

Can anyone share their experience using the Semantic Forge? Did it improve your bot's performance, and if so, how?

Hey Nancy,

Thanks for your question. The builder integrated into Pickaxe uses a different paradigm for constructing prompts. In essence, it defines a role, explains a task, and then provides a few additional parameters to further tune the behavior.

The Pickaxe Generator and Semantic Forge were built on a more defined framework I called Semantic Prompt Design at the time, the foundations of which are now integrated into a variety of prompt frameworks.

I have not extensively tested the Pickaxe Generator or Semantic Forge against the new builder. I also haven't used them with Claude, as they were designed for GPT-4.


@intellibotique thanks champ U R A legend, but you already knew that. Keep up the world class work!

So, IMHO it might be worth investigating the "semantic prime," which is "the difference that makes all the difference." By that I mean: what is the one thing that MUST be in the prompt in order for it to get the results you want? If that one thing is there, then you get what you want, no matter what else is there. If that one thing is NOT there, then no matter what else is in the prompt, you will NOT get the results you want.


Thanks for the kind words. Flattery will get you everywhere.

I think you raise a very good thesis about identifying what must and must not be in the prompt. I'm not sure there's a consistent answer, and I'm pretty sure it's going to vary across LLMs and their makers.

When it comes to prompts that will be run hundreds of thousands of times in succession, it becomes very important to trim any excess instructions from a prompt, because the cost adds up over time.
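To make that concrete, here is a rough back-of-the-envelope sketch. The token count, price, and run count below are all assumptions for illustration, not real provider pricing:

```python
# Hypothetical illustration: cost of excess prompt tokens at scale.
# All figures below are assumptions, not actual provider pricing.
EXCESS_TOKENS = 300         # unnecessary instruction tokens in the prompt
PRICE_PER_1K_TOKENS = 0.01  # assumed input-token price in USD
RUNS = 500_000              # prompt executed hundreds of thousands of times

# Extra spend attributable purely to the excess instructions
extra_cost = EXCESS_TOKENS / 1000 * PRICE_PER_1K_TOKENS * RUNS
print(f"Extra cost from excess instructions: ${extra_cost:,.2f}")
```

Even a few hundred surplus tokens per call can translate into four figures once the prompt is executed at that volume.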

For most of my purposes, I don't worry too much about making the prompt as short as possible while still yielding the necessary results, because doing so doesn't actually offer any benefit to my current projects, either for the end users or for the client themselves.


OH! Can you please explain a bit more: are you saying that the more excess instructions I have, the more credits it will cost me if I run them through Pickaxe, or the more tokens if I use the API?

Let me know and have a great day


As best I understand it, the initial prompt (AKA the Pickaxe or GPT), along with the entire conversation up to that point, is sent to the LLM with each subsequent interaction. This means a long prompt drives up API usage at a significantly accelerated rate compared to a short prompt, as do longer responses on either side of the conversation.
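The compounding effect above can be sketched in a few lines. This is a simplified model, not any provider's actual accounting, and the token figures are made-up round numbers:

```python
# Sketch of why long prompts compound: each turn resends the system
# prompt plus the whole conversation history. Figures are assumptions.
def total_input_tokens(system_tokens: int, turns: int, tokens_per_message: int) -> int:
    """Total input tokens billed across a conversation of `turns` exchanges."""
    total = 0
    history = 0
    for _ in range(turns):
        # Each request carries the system prompt, all prior messages,
        # and the new user message.
        total += system_tokens + history + tokens_per_message
        # The user message and the model's reply both join the history.
        history += 2 * tokens_per_message
    return total

short = total_input_tokens(system_tokens=200, turns=10, tokens_per_message=100)
long_ = total_input_tokens(system_tokens=2000, turns=10, tokens_per_message=100)
print(short, long_)  # the longer system prompt is re-billed on every turn
```

Note that the system prompt's cost is multiplied by the number of turns, which is why trimming it matters far more for high-volume, multi-turn bots than for one-off calls.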

When using Pickaxe credits, this isn't a factor, because Pickaxe tabulates one credit per engagement and doesn't take into account the length of the string being sent to the LLM API.

At the same time, with these tiny models available from most providers at rather low cost, these issues seem unlikely to be a major factor relative to what you could presumably charge for the use of your Pickaxe.

A celebrity just entered the chat. @intellibotique 'wrote the book' on Semantic Prompt Design, as the saying goes. He also builds and sells studios to customers.


@admin_mike I know, he is amazing, eh! It is for sure worth reading his posts on the Pickaxe blog; they are next generation, eh?