Adding more URLs
In the previous lesson you built a very simple crawler that downloads the HTML of a single page, reads its title, and prints it to the console. This is the original source code:
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ $, request }) {
        const title = $('title').text();
        console.log(`The title of "${request.url}" is: ${title}.`);
    },
});

await crawler.run(['https://crawlee.dev']);
In this lesson you'll build on the example from the previous section. You'll add more URLs to the queue, and thanks to that the crawler will keep going: finding new links, enqueuing them into the RequestQueue, and then scraping them.
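Before diving into the Crawlee API, it helps to see the crawl loop in miniature. The toy model below is not Crawlee's implementation — it's a sketch of the idea: a FIFO queue of pending URLs plus a set of already-visited URLs, which is what keeps the crawler from revisiting the same page forever. The `fakePages` map is a hypothetical stand-in for actually downloading pages and extracting their links.

```javascript
// Toy model of a crawl loop — a sketch, not Crawlee's implementation.
const seen = new Set();
const queue = ['https://crawlee.dev'];

// Hypothetical link map standing in for real page downloads:
// each URL maps to the links "found" on that page. Note the
// homepage links back to itself, so deduplication matters.
const fakePages = {
    'https://crawlee.dev': ['https://crawlee.dev/docs', 'https://crawlee.dev'],
    'https://crawlee.dev/docs': [],
};

const visited = [];
while (queue.length > 0) {
    const url = queue.shift();
    // Skip URLs we've already scraped, so the loop terminates.
    if (seen.has(url)) continue;
    seen.add(url);
    visited.push(url);
    // "Scrape" the page and enqueue newly discovered links.
    for (const link of fakePages[url] ?? []) queue.push(link);
}

console.log(visited); // ['https://crawlee.dev', 'https://crawlee.dev/docs']
```

In the real crawler, the queue and the seen-set are managed for you by the RequestQueue, and link discovery happens against the actual downloaded HTML — but the overall shape of the loop is the same.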