As the title suggests, I’m curious about how the conversation history is used and processed.
When I downloaded the entire conversation history, it came to over 100,000 tokens, which makes it hard to ask an AI to process it directly.
Is there an efficient way to process it?
@avakero what are you trying to achieve?
When the conversation history gets that long, you’re right that it can be tricky to process all of it at once, especially with model token limits.
I’d love to better understand what you’re aiming to do with the full history.
Are you looking to:
- Summarize the entire conversation?
- Extract certain kinds of insights (like decisions, tasks, or sentiment)?
- Feed parts of the conversation back into a new AI prompt?
Let me know what your end goal is, and I’d be happy to suggest a more efficient way to handle it!
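In the meantime, the pattern that usually works at this size is map-reduce summarization: split the export into chunks that fit the model's context window, summarize each chunk, then summarize the summaries. Here's a minimal Python sketch of that shape. To be clear, the chunk size, the field names, and the `summarize()` function are all placeholders, not any specific product's API; swap in whatever model call you actually use:

```python
# Map-reduce summarization sketch. Hypothetical assumptions: the export
# is a JSON array of {"role": ..., "content": ...} objects, and
# summarize() stands in for whatever LLM call you use.
import json

CHUNK_CHARS = 12_000  # rough stand-in for a token budget; tune for your model


def load_messages(path: str) -> list[str]:
    """Flatten the exported history into "role: content" lines.

    Adjust the field names to match your actual export's schema.
    """
    with open(path, encoding="utf-8") as f:
        history = json.load(f)
    return [f'{m.get("role", "?")}: {m.get("content", "")}' for m in history]


def chunk(lines: list[str], limit: int = CHUNK_CHARS) -> list[str]:
    """Greedily pack whole messages into chunks under the size limit."""
    chunks: list[str] = []
    current: list[str] = []
    size = 0
    for line in lines:
        if current and size + len(line) > limit:
            chunks.append("\n".join(current))
            current, size = [], 0
        current.append(line)
        size += len(line)
    if current:
        chunks.append("\n".join(current))
    return chunks


def summarize(text: str) -> str:
    """Placeholder: call whatever model/API you actually use here."""
    raise NotImplementedError


def summarize_history(path: str) -> str:
    # Map: summarize each chunk independently, so no single call
    # exceeds the context window.
    partials = [summarize(c) for c in chunk(load_messages(path))]
    # Reduce: summarize the concatenated partial summaries.
    return summarize("\n\n".join(partials))
```

The same chunking loop works for extraction tasks too: just swap the summarization prompt for an extraction prompt and merge the per-chunk results instead of re-summarizing them.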
@ab2308 @danny_support
Thank you for your reply.
I was having trouble dealing with a very long JSON file.
Here is a JSON file of about 90,000 characters. As expected, it was difficult to handle with an LLM, so I was wondering how you all parse such a long file when it comes up.
I have been working on this for a while now, and I think I finally have a workflow for extracting data.
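In case it helps anyone else, the general idea is to parse the JSON in ordinary code first and hand the model only the small slice it actually needs. A simplified sketch of that idea (not my exact pipeline; the field names are placeholders for whatever your export actually contains):

```python
# Extract-first sketch. Hypothetical schema: a JSON array of
# {"role": ..., "content": ...} objects -- adjust to your export.
import json


def extract(path: str, keyword: str, role: str | None = None) -> str:
    """Return only the messages that matter, as a compact excerpt.

    Filtering in plain Python costs nothing; only the result has to
    fit in the model's context window.
    """
    with open(path, encoding="utf-8") as f:
        history = json.load(f)

    hits = [
        m for m in history
        if keyword.lower() in str(m.get("content", "")).lower()
        and (role is None or m.get("role") == role)
    ]
    return "\n".join(f'{m.get("role")}: {m.get("content")}' for m in hits)


# Usage: hand the model a short excerpt instead of the whole
# 90,000-character file.
# excerpt = extract("conversations.json", keyword="deadline", role="user")
```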