I’ve been having this problem all weekend, and no amount of prompt engineering seems to fix it.
When I’m in the testing environment, where I can see how it engages with the knowledge base, it seems to be working fairly well. But once I publish the bot, it consistently returns the same results, which don’t relate to the query.
Anyone have any idea what’s going on or how to fix it? Pictures provided below to demonstrate the issue.
Yeah. The documents are in the knowledge base, and it’s pulling from the knowledge base in both cases. It’s just pulling the same four entries from the knowledge base in the live environment.
That happened to me while working with a “contract assistant” pickaxe. When you upload a contract, it can provide summary info, and users can ask questions about the details. I wanted the AI to also comment on the contracts regarding compliance and so on, so I loaded some reference material about contracts and legal matters into the knowledge base, but the AI started returning generic answers instead of ones specific to the uploaded contract. In the end, I disabled all the docs in the knowledge base, and it went back to responding to contract specifics.
This was a form named Contract Assistant. I haven’t tried different scenarios; I disabled the knowledge base docs and left it to examine later.
A more important issue is that this form does not recognize when a document has been uploaded. Pickaxe doesn’t acknowledge the uploaded document and asks me to upload a contract. Then, when I go into the chat and start asking questions, it recognizes the contract and responds.