It would be nice to have an option to deselect all of the scraped webpages when adding them to the knowledge base. Sometimes it scrapes 50+ URLs when I only need a couple, and deselecting them one by one is tedious, so it would be much easier to deselect all and then select the few I need.
Another thing that could be handy is the option to scrape just a single URL instead of everything. This should also reduce the load on your server, since it would only need to fetch one page rather than crawl the whole site.
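As a stopgap until something like that exists, grabbing one page yourself is pretty simple. Here is a minimal sketch using `requests` and `BeautifulSoup` (the function name and URL are placeholders, not anything from the tool itself):

```python
import requests
from bs4 import BeautifulSoup

def scrape_single_url(url: str) -> str:
    """Fetch one page and return its visible text for pasting into the knowledge base."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    # Drop script/style tags so only readable content remains.
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()
    return soup.get_text(separator="\n", strip=True)

if __name__ == "__main__":
    # Placeholder URL; substitute the single page you actually need.
    print(scrape_single_url("https://example.com/docs/getting-started"))
```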
It would be really helpful to be able to deselect every collected webpage at once. Manually deselecting each URL is difficult and time-consuming when a scrape returns more than 50 results.
Great suggestion, and amazing that it is implemented already. I am finding the opposite issue with GitBook websites (guides for platforms): the tool is only selecting one or two pages out of a hundred.
Does anyone recommend a tool for scraping GitBook sites?
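In case it helps: GitBook sites typically publish a `sitemap.xml`, so one workaround is to walk the sitemap and collect every page URL yourself instead of relying on the crawler's link discovery. A rough sketch (the base URL is a placeholder, and some sites may use a different sitemap path, so check the actual site first):

```python
import requests
from xml.etree import ElementTree

# Standard sitemap XML namespace used by findall() below.
SITEMAP_NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def list_gitbook_pages(base_url: str) -> list[str]:
    """Read the site's sitemap.xml and return every page URL it lists."""
    response = requests.get(f"{base_url.rstrip('/')}/sitemap.xml", timeout=30)
    response.raise_for_status()
    root = ElementTree.fromstring(response.content)
    return [loc.text for loc in root.findall(".//sm:loc", SITEMAP_NS)]

if __name__ == "__main__":
    # Placeholder site; substitute the GitBook guide you want to index.
    for page in list_gitbook_pages("https://docs.example.com"):
        print(page)
```

You can then feed each URL to whatever single-page fetcher you use, which sidesteps the one-or-two-pages problem entirely.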