Version: 3.4

JSDOMCrawlerOptions<UserData, JSONData>

Properties

optional additionalMimeTypes

additionalMimeTypes?: string[]

An array of MIME types you want the crawler to load and process. By default, only text/html and application/xhtml+xml MIME types are supported.
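
For illustration, a minimal sketch of a crawler that also accepts JSON responses (the handler body is just a placeholder):

import { JSDOMCrawler } from 'crawlee';

const crawler = new JSDOMCrawler({
    // Accept JSON in addition to the default text/html and application/xhtml+xml.
    additionalMimeTypes: ['application/json'],
    async requestHandler({ request, window }) {
        // ...
    },
});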

optional autoscaledPoolOptions

autoscaledPoolOptions?: AutoscaledPoolOptions

Custom options passed to the underlying AutoscaledPool constructor.

NOTE: The runTaskFunction and isTaskReadyFunction options are provided by the crawler and cannot be overridden. However, we can provide a custom implementation of isFinishedFunction.
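
For example, a hedged sketch of providing a custom isFinishedFunction; the externalWorkRemaining flag is a hypothetical condition maintained elsewhere in your code:

autoscaledPoolOptions: {
    // Keep the pool running while an external producer still has work (hypothetical flag).
    isFinishedFunction: async () => !externalWorkRemaining,
},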

optional errorHandler

errorHandler?: ErrorHandler<JSDOMCrawlingContext<UserData, JSONData>>

User-provided function that allows modifying the request object before it gets retried by the crawler. It is executed before each retry for requests that have failed fewer than maxRequestRetries times.

The function receives the BasicCrawlingContext as the first argument, where the request corresponds to the request to be retried. The second argument is the Error instance that represents the last error thrown during processing of the request.
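
A minimal sketch that only tags the request before its next retry; the lastErrorMessage key on userData is an illustrative name, not part of the API:

errorHandler: async ({ request }, error) => {
    // Remember the last error so the retried handler can react to it.
    request.userData.lastErrorMessage = error.message;
},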

optional failedRequestHandler

failedRequestHandler?: ErrorHandler<JSDOMCrawlingContext<UserData, JSONData>>

A function to handle requests that failed more than maxRequestRetries times.

The function receives the BasicCrawlingContext as the first argument, where the request corresponds to the failed request. The second argument is the Error instance that represents the last error thrown during processing of the request.
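
A minimal sketch that only logs requests which have exhausted their retries:

failedRequestHandler: async ({ request }, error) => {
    // At this point the request will not be retried again.
    console.error(`Request ${request.url} failed too many times: ${error.message}`);
},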

optional forceResponseEncoding

forceResponseEncoding?: string

By default, this crawler extracts the correct encoding from the HTTP response headers. Use forceResponseEncoding to force a certain encoding, disregarding the response headers. To only provide a default for missing encodings, use HttpCrawlerOptions.suggestResponseEncoding.

// Will force windows-1250 encoding even if headers say otherwise
forceResponseEncoding: 'windows-1250'

optional handlePageFunction

handlePageFunction?: RequestHandler<JSDOMCrawlingContext<UserData, JSONData>>

An alias for HttpCrawlerOptions.requestHandler. Soon to be removed, use requestHandler instead.

@deprecated

optional hideInternalConsole

hideInternalConsole?: boolean

Suppresses the logs from the internal JSDOM console.

optional ignoreSslErrors

ignoreSslErrors?: boolean

If set to true, SSL certificate errors will be ignored.

optional keepAlive

keepAlive?: boolean

Allows keeping the crawler alive even if the RequestQueue gets empty. By default, crawler.run() will resolve once the queue is empty. With keepAlive: true it will keep running, waiting for more requests to come. Use crawler.teardown() to exit the crawler.
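
For illustration, a hedged sketch where the crawler stays alive and is shut down externally; the 60-second timer is arbitrary:

const crawler = new JSDOMCrawler({
    keepAlive: true,
    async requestHandler({ request }) {
        // ...
    },
});

// run() will not resolve on an empty queue, so stop the crawler explicitly.
setTimeout(() => crawler.teardown(), 60_000);
await crawler.run();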

optional maxConcurrency

maxConcurrency?: number

Sets the maximum concurrency (parallelism) for the crawl. Shortcut for the AutoscaledPool maxConcurrency option.

optional maxRequestRetries

maxRequestRetries?: number = 3

Indicates how many times the request is retried if requestHandler fails.

optional maxRequestsPerCrawl

maxRequestsPerCrawl?: number

Maximum number of pages that the crawler will open. The crawl will stop when this limit is reached. This value should always be set in order to prevent infinite loops in misconfigured crawlers.

NOTE: In cases of parallel crawling, the actual number of pages visited might be slightly higher than this value.

optional maxRequestsPerMinute

maxRequestsPerMinute?: number

The maximum number of requests per minute the crawler should run. By default, this is set to Infinity, but we can pass any positive, non-zero integer. Shortcut for the AutoscaledPool maxTasksPerMinute option.

optional minConcurrency

minConcurrency?: number

Sets the minimum concurrency (parallelism) for the crawl. Shortcut for the AutoscaledPool minConcurrency option.

WARNING: If we set this value too high with respect to the available system memory and CPU, our crawler will run extremely slowly or crash. If unsure, it's better to keep the default value, and the concurrency will scale up automatically.
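
Putting the limits above together, a hedged configuration sketch (the numbers are arbitrary examples):

const crawler = new JSDOMCrawler({
    minConcurrency: 5,
    maxConcurrency: 50,
    maxRequestsPerMinute: 120,
    maxRequestsPerCrawl: 1000,
    async requestHandler({ request }) {
        // ...
    },
});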

optional navigationTimeoutSecs

navigationTimeoutSecs?: number

Timeout in which the HTTP request to the resource needs to finish, given in seconds.

optional persistCookiesPerSession

persistCookiesPerSession?: boolean

Automatically saves cookies to Session. Works only if Session Pool is used.

It parses cookies from the response "set-cookie" header and saves or updates the cookies for the session. The next time the session is used for a request, the stored cookies are passed to that request in the "Cookie" header.
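
A minimal sketch combining the session pool with cookie persistence:

const crawler = new JSDOMCrawler({
    useSessionPool: true,
    // Cookies from "set-cookie" response headers are stored on the session
    // and sent back in the "Cookie" header on subsequent requests.
    persistCookiesPerSession: true,
    async requestHandler({ request }) {
        // ...
    },
});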

optional postNavigationHooks

postNavigationHooks?: InternalHttpHook<JSDOMCrawlingContext<UserData, JSONData>>[]

Async functions that are sequentially evaluated after the navigation. Good for checking if the navigation was successful. The function accepts crawlingContext as the only parameter. Example:

postNavigationHooks: [
    async (crawlingContext) => {
        // ...
    },
]

optional preNavigationHooks

preNavigationHooks?: InternalHttpHook<JSDOMCrawlingContext<UserData, JSONData>>[]

Async functions that are sequentially evaluated before the navigation. Good for setting additional cookies or browser properties before navigation. The function accepts two parameters, crawlingContext and gotOptions, which are passed to the requestAsBrowser() function the crawler calls to navigate. Example:

preNavigationHooks: [
    async (crawlingContext, gotOptions) => {
        // ...
    },
]

Modifying pageOptions is supported only in Playwright incognito. See PrePageCreateHook.

optional proxyConfiguration

proxyConfiguration?: ProxyConfiguration

If set, the crawler will be configured to use Apify Proxy or your own proxy URLs for all connections, provided and rotated according to the configuration. For more information, see the documentation.
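
For illustration, a hedged sketch using the ProxyConfiguration class with custom proxy URLs (the URLs are placeholders):

import { JSDOMCrawler, ProxyConfiguration } from 'crawlee';

const proxyConfiguration = new ProxyConfiguration({
    proxyUrls: ['http://proxy-1.example.com:8000', 'http://proxy-2.example.com:8000'],
});

const crawler = new JSDOMCrawler({
    proxyConfiguration,
    async requestHandler({ request }) {
        // ...
    },
});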

optional requestHandler

requestHandler?: RequestHandler<JSDOMCrawlingContext<UserData, JSONData>>

User-provided function that performs the logic of the crawler. It is called for each URL to crawl.

The function receives the BasicCrawlingContext as an argument, where the request represents the URL to crawl.

The function must return a promise, which is then awaited by the crawler.

If the function throws an exception, the crawler will try to re-crawl the request later, up to maxRequestRetries times. If all the retries fail, the crawler calls the function provided to the failedRequestHandler parameter. To make this work, we should always let our function throw exceptions rather than catch them. The exceptions are logged to the request using the Request.pushErrorMessage() function.
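
For example, a hedged sketch of a handler that reads the page title from the JSDOM window and enqueues further links; it assumes the context exposes window and enqueueLinks:

const crawler = new JSDOMCrawler({
    async requestHandler({ request, window, enqueueLinks }) {
        // The JSDOM window gives access to the parsed document.
        const title = window.document.title;
        console.log(`Title of ${request.url}: ${title}`);

        // Add links found on the page to the request queue.
        await enqueueLinks();

        // Note: letting errors propagate (not catching them) allows the crawler to retry.
    },
});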

optional requestHandlerTimeoutSecs

requestHandlerTimeoutSecs?: number = 60

Timeout in which the function passed as requestHandler needs to finish, in seconds.

optional requestList

requestList?: RequestList

Static list of URLs to be processed. If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.

Alternatively, requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before the crawler.run().

optional requestQueue

requestQueue?: RequestQueue

Dynamic queue of URLs to be processed. This is useful for recursive crawling of websites. If not provided, the crawler will open the default request queue when the crawler.addRequests() function is called.

Alternatively, requests parameter of crawler.run() could be used to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before the crawler.run().
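
For illustration, a minimal sketch that seeds the default request queue through crawler.run(); the URL is a placeholder:

const crawler = new JSDOMCrawler({
    async requestHandler({ request }) {
        // ...
    },
});

// Shortcut for calling crawler.addRequests() before crawler.run().
await crawler.run(['https://crawlee.dev']);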

optional runScripts

runScripts?: boolean

Whether to download and run scripts found on the page, so that client-side JavaScript is executed by JSDOM.
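
A hedged sketch: enabling script execution so that content generated by client-side JavaScript can be queried from the JSDOM document (whether a given page needs this depends on the site):

const crawler = new JSDOMCrawler({
    // Let JSDOM execute the page's scripts.
    runScripts: true,
    async requestHandler({ window }) {
        // Content injected by scripts should now be queryable.
        const heading = window.document.querySelector('h1')?.textContent;
        console.log(heading);
    },
});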

optional sessionPoolOptions

sessionPoolOptions?: SessionPoolOptions

The configuration options for SessionPool to use.

optional statusMessageLoggingInterval

statusMessageLoggingInterval?: number

Defines the interval, in seconds, at which setStatusMessage is called.

optional suggestResponseEncoding

suggestResponseEncoding?: string

By default, this crawler extracts the correct encoding from the HTTP response headers. Sadly, some websites use invalid headers; their responses are then decoded as UTF-8. If those sites actually use a different encoding, the response will be corrupted. You can use suggestResponseEncoding to fall back to a certain encoding if you know that your target website uses it. To force a certain encoding, disregarding the response headers, use HttpCrawlerOptions.forceResponseEncoding.

// Will fall back to windows-1250 encoding if none found
suggestResponseEncoding: 'windows-1250'

optional useSessionPool

useSessionPool?: boolean

The crawler will initialize the SessionPool with the corresponding sessionPoolOptions. The session instance will then be available in the requestHandler.
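
Putting the two session options together, a hedged sketch with a custom pool size and access to the session inside the handler (the pool size is an arbitrary example):

const crawler = new JSDOMCrawler({
    useSessionPool: true,
    sessionPoolOptions: {
        // Keep at most 20 sessions alive at the same time.
        maxPoolSize: 20,
    },
    async requestHandler({ request, session }) {
        // The session backing this request, taken from the pool.
        console.log(`Processing ${request.url} with session ${session?.id}`);
    },
});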