Enhancing Pickaxe with Semantic Chunking for Context-Aware Retrieval

It would be incredibly valuable if the data could be semantically chunked. In its current form, information is sent to the LLM in disconnected fragments, which increases the risk of hallucinations. I was genuinely impressed to see how NotebookLM performs real-time source retrieval and displays references directly in its answers (the hallucination rate is nearly zero). I would love to see a powerful framework like Pickaxe adopt this approach. Introducing semantic chunking, along with improvements to how chunks are classified and retrieved, could be a game-changer for Pickaxe.
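To make the idea concrete, here is a minimal sketch of the kind of semantic chunking I mean: instead of splitting text at fixed character counts, adjacent sentences are kept together while they remain similar and a new chunk starts when the topic shifts. This is only an illustration, not Pickaxe's actual pipeline; the sentence splitter is naive, and the word-overlap (Jaccard) similarity stands in for the embedding-based cosine similarity a real system would use.

```python
import re

def sentences(text):
    # Naive sentence splitter; a real system would use spaCy or NLTK.
    return [s.strip() for s in re.split(r'(?<=[.!?])\s+', text) if s.strip()]

def similarity(a, b):
    # Jaccard overlap of lowercased word sets; a stand-in for
    # cosine similarity between sentence embeddings.
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def semantic_chunks(text, threshold=0.1):
    # Group consecutive sentences into a chunk until similarity
    # to the previous sentence drops below the threshold.
    sents = sentences(text)
    if not sents:
        return []
    chunks, current = [], [sents[0]]
    for prev, sent in zip(sents, sents[1:]):
        if similarity(prev, sent) < threshold:
            chunks.append(" ".join(current))
            current = [sent]
        else:
            current.append(sent)
    chunks.append(" ".join(current))
    return chunks
```

With real embeddings in place of the overlap score, each retrieved chunk would carry a coherent, self-contained idea, which is exactly what lets a system like NotebookLM attach precise source references to its answers.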