Provides a simple framework for parallel crawling of web pages using headless browsers with Puppeteer and Playwright. The URLs to crawl are fed either from a static list of URLs or from a dynamic queue of URLs enabling recursive crawling of websites.
Since BrowserCrawler uses headless (or even headful) browsers to download web pages and extract data, it is useful for crawling websites that require JavaScript execution. If the target website doesn't need JavaScript, consider using CheerioCrawler, which downloads the pages using raw HTTP requests and is about 10x faster.
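BrowserCrawler is typically used through one of its concrete subclasses. A minimal sketch, assuming the crawlee package and its PlaywrightCrawler subclass (the URL is arbitrary):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Called for every page the browser opens.
    async requestHandler({ request, page, enqueueLinks, log }) {
        const title = await page.title();
        log.info(`${request.url}: ${title}`);
        // Enqueue links found on the page for recursive crawling.
        await enqueueLinks();
    },
});

await crawler.run(['https://example.com']);
```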
The source URLs are represented by the Request objects that are fed from the RequestList or RequestQueue instances provided by the requestList or requestQueue constructor options, respectively. If neither the requestList nor the requestQueue option is provided, the crawler will open the default request queue either when the crawler.addRequests() function is called, or when the requests parameter (representing the initial requests) of the crawler.run() function is provided.
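To illustrate the two ways of seeding the default request queue, a short sketch (the crawler instance from the example above and arbitrary URLs are assumed; the two options are alternatives, not steps):

```ts
// Option 1: enqueue the initial requests explicitly, then run.
await crawler.addRequests(['https://example.com/a', 'https://example.com/b']);
await crawler.run();

// Option 2: pass the initial requests directly to run().
await crawler.run(['https://example.com/a', 'https://example.com/b']);
```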
If both the requestList and the requestQueue options are used, the instance first processes URLs from the RequestList and automatically enqueues all of them to the RequestQueue before it starts processing them. This ensures that a single URL is not crawled multiple times.
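A sketch of this combined setup, assuming RequestList.open() and RequestQueue.open() from the crawlee package:

```ts
import { PlaywrightCrawler, RequestList, RequestQueue } from 'crawlee';

// A static list of start URLs plus a dynamic queue for discovered links.
const requestList = await RequestList.open('start-urls', ['https://example.com']);
const requestQueue = await RequestQueue.open();

const crawler = new PlaywrightCrawler({
    requestList,
    requestQueue,
    async requestHandler({ enqueueLinks }) {
        // Newly discovered links land in the RequestQueue.
        await enqueueLinks();
    },
});

await crawler.run();
```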
The crawler finishes when there are no more Request objects to crawl.
New pages are only opened when there is enough free CPU and memory available, using the functionality provided by the AutoscaledPool class. All AutoscaledPool configuration options can be passed to the autoscaledPoolOptions parameter of the BrowserCrawler constructor. For user convenience, the minConcurrency and maxConcurrency options of the underlying AutoscaledPool constructor are available directly in the BrowserCrawler constructor options.
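For example, the concurrency limits can be set directly on the crawler, while other pool options go through autoscaledPoolOptions (a sketch; the values are arbitrary):

```ts
const crawler = new PlaywrightCrawler({
    minConcurrency: 5,   // shortcut for autoscaledPoolOptions.minConcurrency
    maxConcurrency: 50,  // shortcut for autoscaledPoolOptions.maxConcurrency
    autoscaledPoolOptions: {
        desiredConcurrencyRatio: 0.9,
    },
    async requestHandler({ page }) {
        /* ... */
    },
});
```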
NOTE: the pool of browser instances is internally managed by the BrowserPool class.
A reference to the underlying AutoscaledPool class that manages the concurrency of the crawler.
NOTE: This property is only initialized after calling the crawler.run() function. We can use it to change the concurrency settings on the fly, to pause the crawler by calling autoscaledPool.pause(), or to abort it by calling autoscaledPool.abort().
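A sketch of such on-the-fly control, assuming the crawl is already running (the delay and URL are arbitrary):

```ts
const runPromise = crawler.run(['https://example.com']);

// autoscaledPool is undefined until run() has started.
setTimeout(async () => {
    const pool = crawler.autoscaledPool;
    if (!pool) return;
    pool.maxConcurrency = 5; // adjust concurrency while crawling
    await pool.pause();      // temporarily stop processing new requests
    pool.resume();           // ...and pick up again
    // await pool.abort();   // or stop the crawl entirely
}, 10_000);

await runPromise;
```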