We are getting some super strange responses. When we ask if a specific place is in the zone it will answer happily, but when asked for the full list it kicks back and says it doesn’t know.
CHAT HERE
SOP entire here
Any ideas on what’s going on?
It’s hard to say without knowing more. Is this information you put into the prompt or the knowledge base, are you getting it through an action, or is it just supposed to know it?
The information is all in the KB. I can see that it is pulling the relevant chunk out of the document, but for some reason it doesn’t provide it after that.
If the Pickaxe is indeed pulling the correct information out of the Knowledge Base, I would add more instruction within the prompt about how to use that information.
Is there anything you can suggest adding to the role? Whenever I try adding more detailed info, my hallucination rate goes up by a ton!!
It’s hard to give generic rules because it really depends on the case.
Try to narrow down the source of hallucination. Are they near-hallucinations or total hallucinations? If they are near-hallucinations, you need to be more explicit in the knowledge base. If they are total hallucinations, you need to incorporate a ‘no known answer’ strategy.
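As a rough example of a ‘no known answer’ strategy, you could add something like this to the role (the wording is just a starting point, not official Pickaxe guidance; tune it to your use case):

    Only answer using information retrieved from the knowledge base.
    If the knowledge base does not contain the answer, say "I don't
    have that information in my documents" instead of guessing.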
Here are some other tips.
It has all come down to how the knowledge base has broken the documents up. That’s why it’s not classing anything as relevant: chunks are cut halfway through sentences, email templates start halfway through, etc.
Is this something I can do with a Pro plan? Does it give full control over what it sees as a chunk?
Have you tried different models? Some work better than others for specific tasks.
I’ve done testing with Sonnet, GPT-3.5, and GPT-4o, and I have tried almost 40 different iterations of the documents with different metadata, layouts, and formats, with no real luck.
I understand I’m asking it to go through a lot of information, but for the life of me I cannot understand how the chunks work.
For text documents, chunking is done on a token basis, so information does get orphaned. Unfortunately, there’s no single way to chunk information that works for everyone. There is intelligent chunking, but it’s hard to offer as a self-serve product.
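For a rough picture of why that happens, here is a minimal sketch of fixed-window token chunking in Python (this is not Pickaxe’s actual implementation, and the chunk size and overlap are illustrative guesses):

    # Minimal sketch of token-window chunking, assuming a tiktoken encoding.
    import tiktoken

    def chunk_by_tokens(text, chunk_size=400, overlap=50):
        enc = tiktoken.get_encoding("cl100k_base")
        tokens = enc.encode(text)
        chunks = []
        step = chunk_size - overlap
        for start in range(0, len(tokens), step):
            window = tokens[start:start + chunk_size]
            # Decoding a raw token window can cut mid-sentence or
            # mid-template, which is how information gets orphaned.
            chunks.append(enc.decode(window))
        return chunks

A window boundary falls wherever the token count says it falls, so a sentence or an email template that straddles the boundary gets split across two chunks, and neither half may score as relevant on its own.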
If you upload the information as a spreadsheet, there are strict rules about how spreadsheets get chunked into the Knowledge Base.
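As a hypothetical illustration of row-based chunking (the exact rules Pickaxe applies aren’t spelled out here), turning each row into a self-contained chunk with its column headers attached keeps records from being split mid-way:

    # Hypothetical row-wise chunking: each row becomes one chunk with its
    # column headers attached, so no record is cut in half.
    import csv

    def chunk_spreadsheet(path):
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                # e.g. "Place: Acme Cafe | Zone: A | Included: yes"
                yield " | ".join(f"{k}: {v}" for k, v in row.items())

For a list like yours (places in a zone), one row per place would mean every entry arrives in the Knowledge Base intact.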