Your Pickaxe is only as smart as the data it pulls from! Properly configuring your Knowledge Base settings ensures your AI delivers accurate, relevant, and efficient responses every time. Whether you’re fine-tuning relevancy cutoff, managing data retrieval, or setting up context instructions, these settings shape how your Pickaxe interprets and responds to user queries.
In this tutorial, I break down everything you need to know—from optimizing your knowledge base to refining prompt injection and core prompts (think of it as your AI’s employee handbook). If you want your Pickaxe to work smarter, not harder, this is the guide you don’t want to miss! Watch the full breakdown and start building better, more responsive AI tools today!
I just want a legal chatbot that retrieves the knowledge it has. I’m playing with the knowledge base settings but it just doesn’t work. It doesn’t retrieve info provided in many of the files, whether the relevancy is 20, 60, or 90… or the amount is 2,400 or 15,000…
Hey @tristanroth, send me screenshots of your project on here or DM me. I need to see the core prompt, the KB overview (a screenshot of your files, specifically the file types and chunk counts), and the other settings you mentioned above.
Legal is one of the most difficult use cases. Even NotebookLM bungles it. In my experience it requires a custom RAG pipeline, which takes significant work.
I have several systems out there that work well; I was just testing here. Yes, legal is difficult, and I would expect hallucinations. The issue here is that even if I feed a list of “100 things, numbered 1 to 100” into the KB and ask “tell me about thing number 57”, it tells me it’s unaware of any “thing”.
It’s as if the RAG were just not working. I use o1, and made sure the total sum of tokens was under the limit. The temperature is low, around 0.4, to avoid hallucinations, but changing it doesn’t change much here.
It would be helpful to know what kind of KB settings work. A relevancy of 20? 90? etc.
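To make the relevancy-cutoff discussion concrete, here is a hypothetical sketch of how a cutoff typically works in a RAG pipeline: each knowledge chunk is scored against the query by embedding similarity, and only chunks above the cutoff ever reach the model. The vectors, chunk texts, and `retrieve` function below are invented for illustration; Pickaxe’s actual internals and score scale may differ. The point is that a cutoff set too high silently drops relevant chunks, which looks exactly like “the RAG is not working”.

```python
# Toy illustration of a relevancy cutoff in RAG retrieval.
# All embeddings and names here are made up for the example.
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# (chunk_text, pretend_embedding)
chunks = [
    ("Thing 57: the statute of limitations is two years.", [0.9, 0.1, 0.2]),
    ("General overview of the firm's services.",           [0.1, 0.8, 0.3]),
]
query_vec = [0.85, 0.15, 0.25]  # pretend embedding of "tell me about thing number 57"

def retrieve(query_vec, chunks, cutoff):
    # Only chunks scoring at or above the cutoff are passed to the model.
    scored = [(cosine(query_vec, vec), text) for text, vec in chunks]
    return [text for score, text in scored if score >= cutoff]

print(retrieve(query_vec, chunks, cutoff=0.5))   # returns only the "Thing 57" chunk
print(retrieve(query_vec, chunks, cutoff=0.999))  # cutoff too high: nothing retrieved
```

Note that exact-number lookups (“thing number 57”) are also a known weak spot for embedding search, since the number contributes little to the vector; that can make retrieval fail even at a sane cutoff.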
Hey @tristanroth, I’ve developed several legal GPT projects for local law firms before. With legal GPTs it is not necessarily a case of tweaking token lengths, but more of using the Knowledge Explorer to feed the system sample queries and then fine-tune the knowledge chunks. I’ve added this to my to-do list; I’ll record a Loom video example and update on here either later tonight or tomorrow.
Hi @simonh, can you post your issue on here? Include screenshots and a Loom video if possible, along with as much detail as possible (project scope, goals, vision, and current state).
Just the same as Tristan. I’m trying to get it to drill down on very specific procedures, manuals, and guides, but it’s just giving general overviews. I’ve been testing the sliders etc. for a couple of weeks and can’t see any discernible difference, so I’ll just wait for your Loom video, watch that, and try what you recommend.
The other thing is that I’ve got about 2,500 documents to sort through, including technical operational manuals. If it’s a case of fine-tuning knowledge chunks, I’ll probably need to start looking at Azure for smarter document handling and retrieval, and just hope that system develops faster.
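For a library of manuals that size, the fine-tuning usually starts with how the documents are split before upload. Below is a minimal, hypothetical sketch of fixed-size chunking with overlap, the pre-processing step a large manual set typically goes through; the sizes (500 characters, 50 overlap) are purely illustrative, and Pickaxe or Azure defaults may differ.

```python
# Hypothetical fixed-size chunker with overlap. Overlap keeps a procedure
# step from being cut in half at a chunk boundary, which is one common
# reason retrieval returns only "general overviews".
def chunk_text(text, size=500, overlap=50):
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunk = text[start:start + size]
        if chunk:
            chunks.append(chunk)
        if start + size >= len(text):
            break
    return chunks

manual = "Step 1: isolate the valve. " * 100  # stand-in for one manual's text
pieces = chunk_text(manual)
print(len(pieces), len(pieces[0]))  # number of chunks, size of the first one
```

Smaller chunks tend to make specific procedures easier to hit with a query, at the cost of more chunks to index; the overlap is the knob that trades duplication for continuity.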