PuppeteerCrawlerOptions
Hierarchy
- BrowserCrawlerOptions<PuppeteerCrawlingContext, { browserPlugins: [PuppeteerPlugin] }>
- PuppeteerCrawlerOptions
Index
Properties
- autoscaledPoolOptions
- browserPoolOptions
- errorHandler
- failedRequestHandler
- headless
- keepAlive
- launchContext
- maxConcurrency
- maxRequestRetries
- maxRequestsPerCrawl
- maxRequestsPerMinute
- minConcurrency
- navigationTimeoutSecs
- persistCookiesPerSession
- postNavigationHooks
- preNavigationHooks
- proxyConfiguration
- requestHandler
- requestHandlerTimeoutSecs
- requestList
- requestQueue
- sessionPoolOptions
- useSessionPool
Properties
optional autoscaledPoolOptions
Custom options passed to the underlying AutoscaledPool constructor.
optional browserPoolOptions
Custom options passed to the underlying BrowserPool constructor. We can tweak those to fine-tune browser management.
optional errorHandler
User-provided function that allows modifying the request object before it gets retried by the crawler. It is executed before each retry for requests that have failed fewer than maxRequestRetries times.
The function receives the BrowserCrawlingContext (the actual context will be enhanced with crawler-specific properties) as the first argument, where request corresponds to the request to be retried. The second argument is the Error instance that represents the last error thrown during processing of the request.
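For illustration, a minimal sketch of an errorHandler; the x-retry-count header is purely hypothetical:
import { PuppeteerCrawler } from 'crawlee';

const crawler = new PuppeteerCrawler({
    errorHandler: async ({ request, log }, error) => {
        log.warning(`Retrying ${request.url} (attempt ${request.retryCount + 1}): ${error.message}`);
        // Hypothetical: tag the retried request with a custom header.
        request.headers = { ...request.headers, 'x-retry-count': String(request.retryCount + 1) };
    },
    async requestHandler({ page }) {
        // ... process the page
    },
});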
optional failedRequestHandler
A function to handle requests that failed more than option.maxRequestRetries times.
The function receives the BrowserCrawlingContext (the actual context will be enhanced with crawler-specific properties) as the first argument, where request corresponds to the failed request. The second argument is the Error instance that represents the last error thrown during processing of the request.
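A minimal sketch of a handler that simply records the terminal failure:
failedRequestHandler: async ({ request, log }, error) => {
    // The request has exhausted its retries; record the terminal failure.
    log.error(`Request ${request.url} failed too many times: ${error.message}`);
},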
optional headless
Whether to run the browser in headless mode. Defaults to true. Can also be set via Configuration.
optional keepAlive
Allows keeping the crawler alive even if the RequestQueue gets empty. By default, crawler.run() will resolve once the queue is empty. With keepAlive: true, it will keep running and wait for more requests to come. Use crawler.teardown() to exit the crawler.
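A sketch of the lifecycle this implies (the URL is a placeholder):
const crawler = new PuppeteerCrawler({
    keepAlive: true,
    async requestHandler({ page }) { /* ... */ },
});

// With keepAlive, run() resolves only after teardown().
const finished = crawler.run();
await crawler.addRequests(['https://example.com']);
// ...later, once no more requests are expected:
await crawler.teardown();
await finished;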
optional launchContext
Options used by launchPuppeteer to start new Puppeteer instances.
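For example, a sketch passing Puppeteer launch options through launchContext; the specific flags are illustrative:
launchContext: {
    // Options forwarded to the underlying Puppeteer launch function.
    launchOptions: {
        args: ['--disable-gpu'],
        slowMo: 50, // slow down Puppeteer operations by 50 ms
    },
},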
optional maxConcurrency
Sets the maximum concurrency (parallelism) for the crawl. Shortcut for the AutoscaledPool maxConcurrency option.
optional maxRequestRetries
Indicates how many times the request is retried if requestHandler fails.
optional maxRequestsPerCrawl
Maximum number of pages that the crawler will open. The crawl will stop when this limit is reached. This value should always be set in order to prevent infinite loops in misconfigured crawlers.
NOTE: In cases of parallel crawling, the actual number of pages visited might be slightly higher than this value.
optional maxRequestsPerMinute
The maximum number of requests per minute the crawler should run. By default, this is set to Infinity, but we can pass any positive, non-zero integer. Shortcut for the AutoscaledPool maxTasksPerMinute option.
optional minConcurrency
Sets the minimum concurrency (parallelism) for the crawl. Shortcut for the AutoscaledPool minConcurrency option.
WARNING: If we set this value too high with respect to the available system memory and CPU, our crawler will run extremely slowly or crash. If not sure, it's better to keep the default value and let the concurrency scale up automatically.
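To illustrate how these scaling options fit together, a sketch with purely illustrative values:
const crawler = new PuppeteerCrawler({
    minConcurrency: 1,         // conservative floor; see the warning above
    maxConcurrency: 10,        // upper bound for the autoscaled pool
    maxRequestsPerCrawl: 100,  // stop after roughly 100 opened pages
    maxRequestsPerMinute: 60,  // throttle to about one request per second
    async requestHandler({ page }) { /* ... */ },
});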
optional navigationTimeoutSecs
Timeout in which page navigation needs to finish, in seconds.
optional persistCookiesPerSession
Defines whether the cookies should be persisted for sessions. This can only be used when useSessionPool is set to true.
optional postNavigationHooks
Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts crawlingContext as the only parameter.
Example:
postNavigationHooks: [
    async (crawlingContext) => {
        const { page } = crawlingContext;
        if (hasCaptcha(page)) {
            await solveCaptcha(page);
        }
    },
]
optional preNavigationHooks
Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, crawlingContext and gotoOptions, which are passed to the page.goto() function the crawler calls to navigate.
Example:
preNavigationHooks: [
    async (crawlingContext, gotoOptions) => {
        const { page } = crawlingContext;
        await page.evaluate((attr) => { window.foo = attr; }, 'bar');
    },
]
Modifying pageOptions is supported only in Playwright incognito. See PrePageCreateHook.
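The example above sets a page property; gotoOptions can be adjusted in the same hook. A minimal sketch with illustrative values:
preNavigationHooks: [
    async (crawlingContext, gotoOptions) => {
        // Wait until the network is mostly idle and give slow pages more time.
        gotoOptions.waitUntil = 'networkidle2';
        gotoOptions.timeout = 60_000;
    },
]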
optional proxyConfiguration
If set, the crawler will use the provided proxy URLs for all connections, rotating them according to the configuration.
optional requestHandler
Function that is called to process each request.
The function receives the BrowserCrawlingContext (the actual context will be enhanced with crawler-specific properties) as an argument, where:
- request is an instance of the Request object with details about the URL to open, HTTP method etc.;
- page is an instance of the Puppeteer Page or Playwright Page;
- browserController is an instance of the BrowserController;
- response is an instance of the Puppeteer Response or Playwright Response, which is the main resource response as returned by the respective page.goto() function.
The function must return a promise, which is then awaited by the crawler.
If the function throws an exception, the crawler will try to re-crawl the request later, up to maxRequestRetries times. If all the retries fail, the crawler calls the function provided to the failedRequestHandler parameter. To make this work, we should always let our function throw exceptions rather than catch them. The exceptions are logged to the request using the Request.pushErrorMessage() function.
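A minimal sketch of a complete handler; the URL and dataset fields are illustrative:
import { PuppeteerCrawler, Dataset } from 'crawlee';

const crawler = new PuppeteerCrawler({
    async requestHandler({ request, page, enqueueLinks, log }) {
        const title = await page.title();
        log.info(`Processing ${request.loadedUrl}: ${title}`);
        // Store the result and follow links discovered on the page.
        await Dataset.pushData({ url: request.loadedUrl, title });
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);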
optional requestHandlerTimeoutSecs
Timeout in which the function passed as requestHandler needs to finish, in seconds.
optional requestList
Static list of URLs to be processed. If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.
Alternatively, the requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before crawler.run().
optional requestQueue
Dynamic queue of URLs to be processed. This is useful for recursive crawling of websites. If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.
Alternatively, the requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before crawler.run().
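That shortcut, sketched with a placeholder URL:
// These two snippets are equivalent ways of seeding the crawl:
await crawler.run(['https://example.com/start']);

// ...is a shortcut for:
await crawler.addRequests(['https://example.com/start']);
await crawler.run();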
optional sessionPoolOptions
The configuration options for SessionPool to use.
optional useSessionPool
Basic crawler will initialize the SessionPool with the corresponding sessionPoolOptions. The session instance will then be available in the requestHandler.
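A sketch combining the session-related options; the pool size is illustrative:
const crawler = new PuppeteerCrawler({
    useSessionPool: true,
    persistCookiesPerSession: true, // requires useSessionPool: true
    sessionPoolOptions: { maxPoolSize: 20 },
    async requestHandler({ session, page }) {
        // The session instance is available on the context; for example,
        // a blocked session could be retired so that it is not reused:
        // session?.retire();
    },
});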