Pickaxe not performing the same in testing environment as live

I’ve been having this problem all weekend, and no amount of prompt engineering seems to fix it.

When I’m in the testing environment — where I can see how it engages with the knowledge base — it seems to work fairly well. But once I publish the bot, it consistently returns the same results, which don’t relate to the query.

Anyone have any idea what’s going on or how to fix it? Pictures provided below to demonstrate the issue.


Hmm. The results on the published Pickaxe do seem noticeably more generic than the testing results. Did you put documents into the Knowledge Base?

Yeah. The documents are in the knowledge base, and it’s pulling from the knowledge base in both cases. It’s just pulling the same four entries from the knowledge base in the live environment.

That happened to me while working with a “contract assistant” pickaxe. When you upload a contract, it can provide summary info, and users can ask questions about the details. I wanted the AI to comment on contracts regarding compliance, etc., so I loaded some reference material about contracts, legalities, and so on — but the AI started returning generic answers instead of addressing the uploaded contract. In the end, I disabled all the docs in the knowledge base, and it went back to responding to contract specifics.

I’ve seen a couple of similar issues across the site over the last couple of days. I will look into it.