Has anyone used the bot for a survey by using forms or other methods?

I am trying to run a short survey via the bot and hopefully have the bot generate an immediate short report for the user, then follow up in the conversation to recommend a solution. Has anyone done this before?

This is pretty straightforward to do using the smart form function in Pickaxe. It’s not immediately obvious to the user how to start a conversation after the initial output, but you could include instructions for the user within the prompt itself to make it easier to spot.

Alternatively, you could put it inside a studio, provide the initial report for free, and then charge for access to dive further into the custom report.


Thank you for responding. I will give it a try, see if it works, and explore a few more approaches.

Yes, you can “enable chat” in your Smart Forms, which adds a button at the bottom of the response that the end-user can click to pop open a chat window. You can control the chatbot’s behavior through the role section of the Smart Form builder.


Thanks Mike for the reply…
I will give that a try and see how it goes.
Another thing I would like to know: is the bot’s AI able to quickly tabulate the results of the user’s responses and give a simple one- or two-line summary? E.g., “Your health score is XX compared to the common score of YY,”
where XX is the average of the user’s survey scores. Not sure if this makes sense?
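For what it's worth, the tabulation itself is just simple arithmetic, which is why an LLM can often do it via prompting alone. A minimal Python sketch of the calculation being described (the scores and the "common score" here are made-up numbers, and this runs outside Pickaxe):

```python
# Hypothetical survey answers on a 1-5 scale (made-up example data).
user_scores = [4, 3, 5, 4]

# Assumed population-average "common score" for comparison.
common_score = 3.2

# Tabulate: average the user's answers, rounded to one decimal place.
health_score = round(sum(user_scores) / len(user_scores), 1)

# The one-liner response the bot would produce.
print(f"Your health score is {health_score} compared to the common score of {common_score}.")
```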

I don’t entirely understand. That sounds like you should be able to achieve this through prompting.

What exactly is not working currently when you try to do this?

If I understand correctly, you need the bot to ask the user some questions and then calculate a score (XX). Then you want that score compared to an average score based on what other users have scored (YY), and finally a report showing this along with a chart/bar graph and feedback.

If this is the case, I think it can be done with a combination of prompting and actions.

Actions you will need:

  1. Code interpreter / Wolfram Mathematica
  2. Charts
  3. PDF generator

If the questions are simple, an LLM can do the computation itself; otherwise you’ll need to run the code interpreter for the calculation. The more complex part is the average scores: you could feed them in manually (and update them every day), but I feel the simpler way would be to keep the score in a Google Doc and put it in the knowledge base with a daily refresh.

The most important part is the prompt which will possibly use this flow:

  1. Ask user X number of questions
  2. Use code interpreter/mathematical operations to calculate a number based on it.
  3. Once calculated, check knowledge base for the updated score.
  4. Compare the two and create a bar chart showing the comparison (use the chart generator action)
  5. Create a PDF based on the chart and feedback (check the recent video on PDF creation by the admins on the Pickaxe YouTube channel, or search the forums)
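To make the flow concrete, here is a rough Python sketch of steps 2–4. None of these helpers are real Pickaxe APIs; in Pickaxe the equivalent logic would be driven by the prompt and actions, so this is purely illustrative:

```python
# Hypothetical sketch of the scoring flow; the function and its
# arguments are stand-ins for what prompt-driven actions would do.

def run_survey_flow(answers, average_from_kb):
    # Step 2: calculate the user's score from their answers.
    user_score = sum(answers) / len(answers)
    # Step 3: average_from_kb stands in for the updated score
    # looked up in the knowledge base.
    # Step 4: compare the two (a chart action would visualize this).
    comparison = "above" if user_score > average_from_kb else "at or below"
    # Step 5: the report text a PDF-generator action would render.
    return (f"Your score is {user_score:.1f}, {comparison} the "
            f"average of {average_from_kb:.1f}.")

print(run_survey_flow([4, 3, 5, 4], 3.2))
```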

Since Pickaxe can run at most two actions, it’s preferable not to use the code interpreter and instead prompt the model to do the calculations correctly.

Hope this helps!

Thanks for the thoughtful reply. I will run some experiments and see how the whole thing goes.