StagehandCrawlerOptions
Hierarchy
- BrowserCrawlerOptions<StagehandCrawlingContext, { browserPlugins: [StagehandPlugin] }>
- StagehandCrawlerOptions
Index
Properties
- autoscaledPoolOptions
- browserPoolOptions
- errorHandler
- experiments
- failedRequestHandler
- headless
- httpClient
- ignoreIframes
- ignoreShadowRoots
- keepAlive
- launchContext
- maxConcurrency
- maxCrawlDepth
- maxRequestRetries
- maxRequestsPerCrawl
- maxRequestsPerMinute
- maxSessionRotations
- minConcurrency
- navigationTimeoutSecs
- onSkippedRequest
- persistCookiesPerSession
- postNavigationHooks
- preNavigationHooks
- proxyConfiguration
- requestHandler
- requestHandlerTimeoutSecs
- requestList
- requestManager
- requestQueue
- respectRobotsTxtFile
- retryOnBlocked
- sameDomainDelaySecs
- sessionPoolOptions
- stagehandOptions
- statisticsOptions
- statusMessageCallback
- statusMessageLoggingInterval
- useSessionPool
Properties
optional inherited autoscaledPoolOptions
Custom options passed to the underlying AutoscaledPool constructor.
NOTE: The runTaskFunction option is provided by the crawler and cannot be overridden. However, we can provide custom implementations of isFinishedFunction and isTaskReadyFunction.
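As a minimal sketch, a custom isFinishedFunction can keep the pool alive until an external condition is met. The import path for StagehandCrawler and the externalWorkPending flag are assumptions for illustration; adjust them to your setup.

```typescript
// Sketch: overriding isFinishedFunction via autoscaledPoolOptions.
// The import path for StagehandCrawler may differ in your project.
import { StagehandCrawler } from 'crawlee';

let externalWorkPending = true; // hypothetical external condition

const crawler = new StagehandCrawler({
    autoscaledPoolOptions: {
        // The pool is considered finished only once our external
        // producer has stopped adding work.
        isFinishedFunction: async () => !externalWorkPending,
    },
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});
```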
optional inherited browserPoolOptions
Custom options passed to the underlying BrowserPool constructor. We can tweak those to fine-tune browser management.
optional inherited errorHandler
User-provided function that allows modifying the request object before it gets retried by the crawler.
It's executed before each retry for requests that have failed fewer than maxRequestRetries times.
The function receives the BrowserCrawlingContext (the actual context will be enhanced with crawler-specific properties) as the first argument, where request corresponds to the request to be retried.
The second argument is the Error instance that represents the last error thrown during processing of the request.
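A sketch of an errorHandler that logs and tags failing requests before each retry (the import path for StagehandCrawler is an assumption):

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    maxRequestRetries: 3,
    // Runs before each retry of a failed request; receives the
    // crawling context and the last Error thrown.
    errorHandler: async ({ request, log }, error) => {
        log.warning(`Retrying ${request.url} after error: ${error.message}`);
        // E.g. mark the request so the handler can take a different path:
        request.userData.retried = true;
    },
    async requestHandler({ request }) {
        // ...
    },
});
```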
optional inherited experiments
Enables experimental features of Crawlee, which can alter the behavior of the crawler. WARNING: these options are not guaranteed to be stable and may change or be removed at any time.
optional failedRequestHandler
Function called when request handling fails after all retries.
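For example, a failedRequestHandler can record permanently failed URLs to a Dataset (the import path for StagehandCrawler is an assumption; Dataset is Crawlee's built-in result store):

```typescript
import { StagehandCrawler, Dataset } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    // Called once per request, after all retries are exhausted.
    failedRequestHandler: async ({ request }, error) => {
        await Dataset.pushData({ url: request.url, failedWith: error.message });
    },
    async requestHandler({ request }) {
        // ...
    },
});
```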
optional inherited headless
Whether to run the browser in headless mode. Defaults to true.
It can also be set via Configuration.
optional inherited httpClient
HTTP client implementation for the sendRequest context helper and for plain HTTP crawling.
Defaults to a new instance of GotScrapingHttpClient.
optional inherited ignoreIframes
Whether to ignore iframes when processing the page content via parseWithCheerio helper.
By default, iframes are expanded automatically. Use this option to disable this behavior.
optional inherited ignoreShadowRoots
Whether to ignore custom elements (and their #shadow-roots) when processing the page content via parseWithCheerio helper.
By default, they are expanded automatically. Use this option to disable this behavior.
optional inherited keepAlive
Allows keeping the crawler alive even if the RequestQueue gets empty.
By default, crawler.run() will resolve once the queue is empty. With keepAlive: true it will keep running,
waiting for more requests to come. Use crawler.stop() to exit the crawler gracefully, or crawler.teardown() to stop it immediately.
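A sketch of the keepAlive pattern described above (the import path for StagehandCrawler is an assumption):

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    keepAlive: true, // run() will not resolve when the queue drains
    async requestHandler({ request, log }) {
        log.info(`Processed ${request.url}`);
    },
});

const run = crawler.run(); // keeps waiting for more requests

// Later, from elsewhere in the application:
await crawler.addRequests(['https://example.com']);

// ...and when we are done:
await crawler.stop(); // graceful exit; run() then resolves
await run;
```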
optional launchContext
Launch context with Stagehand-specific options.
optional inherited maxConcurrency
Sets the maximum concurrency (parallelism) for the crawl. Shortcut for the
AutoscaledPool maxConcurrency option.
optional inherited maxCrawlDepth
Maximum depth of the crawl. If not set, the crawl will continue until all requests are processed.
Setting this to 0 will only process the initial requests, skipping all links enqueued by crawlingContext.enqueueLinks and crawlingContext.addRequests.
Passing 1 will process the initial requests and all links enqueued by crawlingContext.enqueueLinks and crawlingContext.addRequests in the handler for initial requests.
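To illustrate the depth semantics above (the import path for StagehandCrawler is an assumption):

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    // Initial requests are the starting depth; with maxCrawlDepth: 1,
    // links found while handling the initial requests are processed,
    // but links found on those pages are not followed further.
    maxCrawlDepth: 1,
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks(); // links beyond the depth limit are not enqueued
    },
});
```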
optional inherited maxRequestRetries
Specifies the maximum number of retries allowed for a request if its processing fails.
This includes retries due to navigation errors or errors thrown from user-supplied functions
(requestHandler, preNavigationHooks, postNavigationHooks).
This limit does not apply to retries triggered by session rotation
(see maxSessionRotations).
optional inherited maxRequestsPerCrawl
Maximum number of pages that the crawler will open. The crawl will stop when this limit is reached. This value should always be set in order to prevent infinite loops in misconfigured crawlers.
NOTE: In cases of parallel crawling, the actual number of pages visited might be slightly higher than this value.
optional inherited maxRequestsPerMinute
The maximum number of requests per minute the crawler should run.
By default, this is set to Infinity, but we can pass any positive, non-zero integer.
Shortcut for the AutoscaledPool maxTasksPerMinute option.
optional inherited maxSessionRotations
Maximum number of session rotations per request. The crawler will automatically rotate the session in case of a proxy error or if it gets blocked by the website.
The session rotations are not counted towards the maxRequestRetries limit.
optional inherited minConcurrency
Sets the minimum concurrency (parallelism) for the crawl. Shortcut for the
AutoscaledPool minConcurrency option.
WARNING: If we set this value too high with respect to the available system memory and CPU, our crawler will run extremely slowly or crash. If unsure, it's better to keep the default value, and the concurrency will scale up automatically.
optional inherited navigationTimeoutSecs
Timeout in which page navigation needs to finish, in seconds.
optional inherited onSkippedRequest
When a request is skipped for some reason, you can use this callback to act on it. This is currently fired for requests skipped:
- based on the robots.txt file,
- because they don't match enqueueLinks filters,
- because they are redirected to a URL that doesn't match the enqueueLinks strategy,
- or because the maxRequestsPerCrawl limit has been reached.
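A sketch of logging skipped requests; the import path for StagehandCrawler and the exact shape of the callback argument ({ url, reason }) are assumptions to verify against your Crawlee version:

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    respectRobotsTxtFile: true,
    // Assumed callback shape: an object with the skipped URL and a reason.
    onSkippedRequest: ({ url, reason }) => {
        console.log(`Skipped ${url} (reason: ${reason})`);
    },
    async requestHandler({ request }) {
        // ...
    },
});
```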
optional inherited persistCookiesPerSession
Defines whether the cookies should be persisted for sessions.
This can only be used when useSessionPool is set to true.
optional postNavigationHooks
Async functions that are sequentially evaluated after the navigation.
optional preNavigationHooks
Async functions that are sequentially evaluated before the navigation.
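A sketch of both hook types; the import path for StagehandCrawler is an assumption, while setExtraHTTPHeaders and waitForLoadState are standard Playwright Page methods:

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    preNavigationHooks: [
        async ({ page }) => {
            // Runs before page.goto(request.url)
            await page.setExtraHTTPHeaders({ 'accept-language': 'en-US' });
        },
    ],
    postNavigationHooks: [
        async ({ page }) => {
            // Runs after navigation, before the requestHandler
            await page.waitForLoadState('networkidle');
        },
    ],
    async requestHandler({ request }) {
        // ...
    },
});
```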
optional inherited proxyConfiguration
If set, the crawler will be configured for all connections to use the Proxy URLs provided and rotated according to the configuration.
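A sketch using Crawlee's ProxyConfiguration with a static URL list (the import path for StagehandCrawler and the proxy URLs are placeholders):

```typescript
import { StagehandCrawler, ProxyConfiguration } from 'crawlee'; // adjust import to your setup

const proxyConfiguration = new ProxyConfiguration({
    proxyUrls: [
        'http://proxy-1.example.com:8000',
        'http://proxy-2.example.com:8000',
    ],
});

const crawler = new StagehandCrawler({
    proxyConfiguration, // URLs are rotated according to the configuration
    async requestHandler({ request }) {
        // ...
    },
});
```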
optional requestHandler
Function that is called to process each request.
The function receives the StagehandCrawlingContext as an argument, where:
- request is an instance of the Request object with details about the URL to open, HTTP method, etc.
- page is an enhanced Playwright Page with AI methods
- browserController is an instance of StagehandController
- response is the main resource response as returned by page.goto(request.url)
- stagehand is the Stagehand instance for advanced control
The page object is enhanced with AI-powered methods:
- page.act(instruction) - Perform actions using natural language
- page.extract(instruction, schema) - Extract structured data
- page.observe() - Get AI-suggested actions
- page.agent(config) - Create autonomous agents
The function must return a promise, which is then awaited by the crawler.
If the function throws an exception, the crawler will try to re-crawl the request later, up to maxRequestRetries times.
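A sketch of a handler combining the AI methods listed above; the import path for StagehandCrawler, the example URL, and the use of a zod object as the extraction schema are assumptions:

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup
import { z } from 'zod';

const crawler = new StagehandCrawler({
    async requestHandler({ request, page, log }) {
        log.info(`Processing ${request.url}`);

        // Natural-language action via the AI-enhanced page:
        await page.act('click the "Accept cookies" button');

        // Structured extraction against a schema:
        const data = await page.extract(
            'extract the article title and author',
            z.object({ title: z.string(), author: z.string() }),
        );
        log.info(`Extracted: ${JSON.stringify(data)}`);
    },
});

await crawler.run(['https://example.com/article']);
```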
optional inherited requestHandlerTimeoutSecs
Timeout in which the function passed as requestHandler needs to finish, in seconds.
optional inherited requestList
Static list of URLs to be processed.
If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.
Alternatively, the requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before crawler.run().
optional inherited requestManager
Allows explicitly configuring a request manager. Mutually exclusive with the requestQueue and requestList options.
This enables explicitly configuring the crawler to use RequestManagerTandem, for instance.
If using this, the type of BasicCrawler.requestQueue may not be fully compatible with the RequestProvider class.
optional inherited requestQueue
Dynamic queue of URLs to be processed. This is useful for recursive crawling of websites.
If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.
Alternatively, the requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before crawler.run().
optional inherited respectRobotsTxtFile
If set to true, the crawler will automatically try to fetch the robots.txt file for each domain
and skip URLs that are not allowed. This also prevents disallowed URLs from being added via enqueueLinks.
If an object is provided, it may contain a userAgent property to specify which user-agent
should be used when checking the robots.txt file. If not provided, the default user-agent * will be used.
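A sketch of both accepted forms (the import path for StagehandCrawler and the user-agent string are assumptions):

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    // Boolean form: check robots.txt with the default '*' user-agent.
    // respectRobotsTxtFile: true,

    // Object form: check rules for a specific user-agent instead.
    respectRobotsTxtFile: { userAgent: 'MyCrawler' },
    async requestHandler({ request }) {
        // ...
    },
});
```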
optional inherited retryOnBlocked
If set to true, the crawler will automatically try to bypass any detected bot protection.
Currently supports:
optional inherited sameDomainDelaySecs
Indicates how much time (in seconds) to wait before crawling another same domain request.
optional inherited sessionPoolOptions
The configuration options for SessionPool to use.
optional stagehandOptions
Stagehand-specific configuration options. These options configure the AI behavior and Browserbase integration.
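A sketch of passing Stagehand configuration through this option. The specific option names below (env, apiKey, modelName) mirror common Stagehand constructor options and are assumptions; verify them against your Stagehand version:

```typescript
import { StagehandCrawler } from 'crawlee'; // adjust import to your setup

const crawler = new StagehandCrawler({
    stagehandOptions: {
        // Assumed Stagehand options - check your Stagehand version:
        env: 'BROWSERBASE',
        apiKey: process.env.BROWSERBASE_API_KEY,
        modelName: 'gpt-4o',
    },
    async requestHandler({ page }) {
        // ...
    },
});
```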
optional inherited statisticsOptions
Customize the way statistics collecting works, such as logging interval or whether to output them to the Key-Value store.
optional inherited statusMessageCallback
Allows overriding the default status message. The callback needs to call crawler.setStatusMessage() explicitly. The default status message is provided in the parameters.

const crawler = new CheerioCrawler({
    statusMessageCallback: async (ctx) => {
        return ctx.crawler.setStatusMessage(`this is status message from ${new Date().toISOString()}`, { level: 'INFO' }); // log level defaults to 'DEBUG'
    },
    statusMessageLoggingInterval: 1, // defaults to 10s
    async requestHandler({ $, enqueueLinks, request, log }) {
        // ...
    },
});
optional inherited statusMessageLoggingInterval
Defines the length of the interval for calling the setStatusMessage in seconds.
optional inherited useSessionPool
The basic crawler will initialize the SessionPool with the corresponding sessionPoolOptions.
The session instance will then be available in the requestHandler.
Options for StagehandCrawler.