Version: 3.6

LinkeDOMCrawler

Provides a framework for the parallel crawling of web pages using plain HTTP requests and the LinkeDOM implementation of the DOM. The URLs to crawl are fed either from a static list of URLs or from a dynamic queue of URLs, enabling recursive crawling of websites.

Since LinkeDOMCrawler uses raw HTTP requests to download web pages, it is very fast and efficient on data bandwidth. However, if the target website requires JavaScript to display its content, you might need to use PuppeteerCrawler or PlaywrightCrawler instead, because those load the pages in a full-featured headless Chrome browser.

Limitation: This crawler does not support proxies and cookies yet (each open page starts with an empty cookie store), and the user agent is always set to Chrome.

LinkeDOMCrawler downloads each URL using a plain HTTP request, parses the HTML content using LinkeDOM and then invokes the user-provided LinkeDOMCrawlerOptions.requestHandler to extract page data using the window object.

The source URLs are represented using Request objects that are fed from RequestList or RequestQueue instances provided by the LinkeDOMCrawlerOptions.requestList or LinkeDOMCrawlerOptions.requestQueue constructor options, respectively.

If both LinkeDOMCrawlerOptions.requestList and LinkeDOMCrawlerOptions.requestQueue are used, the instance first processes URLs from the RequestList and automatically enqueues all of them to RequestQueue before it starts their processing. This ensures that a single URL is not crawled multiple times.

The crawler finishes when there are no more Request objects to crawl.

We can use the preNavigationHooks to adjust gotOptions:

preNavigationHooks: [
    (crawlingContext, gotOptions) => {
        // ...
    },
]
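
For instance, a hook can adjust the outgoing HTTP request before it is made. A minimal sketch, assuming the got-scraping options shape; the timeout and header values below are illustrative, not defaults:

preNavigationHooks: [
    async (crawlingContext, gotOptions) => {
        // Illustrative tweaks to the outgoing request
        gotOptions.timeout = { request: 30_000 };
        gotOptions.headers = {
            ...gotOptions.headers,
            'Accept-Language': 'en-US,en;q=0.9',
        };
    },
]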

By default, LinkeDOMCrawler only processes web pages with the text/html and application/xhtml+xml MIME content types (as reported by the Content-Type HTTP header), and skips pages with other content types. If you want the crawler to process other content types, use the LinkeDOMCrawlerOptions.additionalMimeTypes constructor option. Beware that the parsing behavior differs for HTML, XML, JSON and other types of content. For more details, see LinkeDOMCrawlerOptions.requestHandler.
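
For example, to let the crawler also download JSON responses, the option could be passed as in the sketch below. The body property is assumed to carry the raw response, as with the other HTTP-based crawlers:

const crawler = new LinkeDOMCrawler({
    // Accept JSON in addition to text/html and application/xhtml+xml
    additionalMimeTypes: ['application/json'],
    async requestHandler({ request, body }) {
        // Non-HTML responses are not parsed into a window object,
        // so work with the raw body instead
        console.log(request.url, String(body).slice(0, 100));
    },
});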

New requests are only dispatched when there is enough free CPU and memory available, using the functionality provided by the AutoscaledPool class. All AutoscaledPool configuration options can be passed to the autoscaledPoolOptions parameter of the LinkeDOMCrawler constructor. For user convenience, the minConcurrency and maxConcurrency AutoscaledPool options are available directly in the LinkeDOMCrawler constructor.
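
A sketch of passing concurrency settings, both through the shortcut options and through autoscaledPoolOptions (the numbers are illustrative, not defaults):

const crawler = new LinkeDOMCrawler({
    // Shortcut AutoscaledPool options exposed directly on the constructor
    minConcurrency: 5,
    maxConcurrency: 50,
    // Any other AutoscaledPool option can be passed here
    autoscaledPoolOptions: {
        desiredConcurrencyRatio: 0.9,
    },
    async requestHandler({ request, window }) {
        // ...
    },
});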

Example usage:

import { LinkeDOMCrawler, Dataset } from 'crawlee';

const crawler = new LinkeDOMCrawler({
    async requestHandler({ request, window }) {
        await Dataset.pushData({
            url: request.url,
            title: window.document.title,
        });
    },
});

await crawler.run([
    'http://crawlee.dev',
]);

Hierarchy

Index

Constructors

constructor

Properties

optional autoscaledPool

autoscaledPool?: AutoscaledPool

A reference to the underlying AutoscaledPool class that manages the concurrency of the crawler.

NOTE: This property is only initialized after calling the crawler.run() function. We can use it to change the concurrency settings on the fly, to pause the crawler by calling autoscaledPool.pause() or to abort it by calling autoscaledPool.abort().
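
A sketch of adjusting the pool while a crawl is running (the timing and values are illustrative):

const runPromise = crawler.run(['http://crawlee.dev']);

// autoscaledPool is only defined once run() has been called
setTimeout(async () => {
    const pool = crawler.autoscaledPool;
    if (!pool) return;
    pool.maxConcurrency = 10;   // change concurrency on the fly
    await pool.pause();         // temporarily stop processing new requests
    pool.resume();              // ...and continue again
}, 60_000);

await runPromise;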

readonly config

config: Configuration = ...

optional proxyConfiguration

proxyConfiguration?: ProxyConfiguration

A reference to the underlying ProxyConfiguration class that manages the crawler's proxies. Only available if used by the crawler.

optional requestList

requestList?: RequestList

A reference to the underlying RequestList class that manages the crawler's requests. Only available if used by the crawler.

optional requestQueue

requestQueue?: RequestProvider

Dynamic queue of URLs to be processed. This is useful for recursive crawling of websites. A reference to the underlying RequestQueue class that manages the crawler's requests. Only available if used by the crawler.

readonly router

router: RouterHandler<LinkeDOMCrawlingContext<any, any>> = ...

Default Router instance that will be used if we don't specify any requestHandler. See router.addHandler() and router.addDefaultHandler().
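
A sketch of registering handlers on the default router instead of passing a single requestHandler (the 'DETAIL' label is an arbitrary example name):

const crawler = new LinkeDOMCrawler();

// Runs for requests enqueued with the label 'DETAIL'
crawler.router.addHandler('DETAIL', async ({ request, window }) => {
    await Dataset.pushData({ url: request.url, title: window.document.title });
});

// Fallback for requests without a matching label
crawler.router.addDefaultHandler(async ({ enqueueLinks }) => {
    await enqueueLinks({ label: 'DETAIL' });
});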

optional running

running?: boolean

optional sessionPool

sessionPool?: SessionPool

A reference to the underlying SessionPool class that manages the crawler's sessions. Only available if used by the crawler.

readonly stats

stats: Statistics

A reference to the underlying Statistics class that collects and logs run statistics for requests.

Methods

addRequests

  • Adds requests to the queue in batches. By default, it resolves after the initial batch is added and continues adding the rest in the background. You can configure the batch size via the batchSize option and the sleep time between batches via the waitBetweenBatchesMillis option. If you want to wait for all batches to be added to the queue, use the waitForAllRequestsToBeAdded promise returned in the result object, as sketched below.


    Parameters

    Returns Promise<CrawlerAddRequestsResult>
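
    A sketch of waiting for all batches to finish enqueuing (the urls variable and option values are illustrative):

    const { waitForAllRequestsToBeAdded } = await crawler.addRequests(urls, {
        batchSize: 500,                   // requests added per batch
        waitBetweenBatchesMillis: 1_000,  // pause between batches
    });

    // Resolves only once every batch has been enqueued
    await waitForAllRequestsToBeAdded;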

exportData

  • exportData<Data>(path: string, format?: 'json' | 'csv', options?: DatasetExportOptions): Promise<Data[]>
  • Retrieves all the data from the default crawler Dataset and exports it to the specified format. Supported formats are currently 'json' and 'csv'; the format is inferred from the path automatically, as sketched below.


    Type parameters

    • Data

    Parameters

    Returns Promise<Data[]>
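
    A sketch of exporting the collected data after a run (the file path is illustrative; the format is inferred from the extension):

    await crawler.run(['http://crawlee.dev']);

    const items = await crawler.exportData('./results.csv');
    console.log(`Exported ${items.length} items`);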

getData

getDataset

  • getDataset(): Promise<Dataset<Dictionary>>
  • Retrieves the default crawler Dataset.


    Returns Promise<Dataset<Dictionary>>

getRequestQueue

pushData

  • pushData(...args: [data: Dictionary | Dictionary[]]): Promise<void>
  • Pushes data to the default crawler Dataset by calling Dataset.pushData.


    Parameters

    • rest ...args: [data: Dictionary | Dictionary[]]

    Returns Promise<void>

run

  • Runs the crawler. Returns a promise that gets resolved once all the requests are processed. We can use the requests parameter to enqueue the initial requests - it is a shortcut for running crawler.addRequests() before the crawler.run().


    Parameters

    Returns Promise<FinalStatistics>

setStatusMessage

  • This method is periodically called by the crawler, every statusMessageLoggingInterval seconds.


    Parameters

    Returns Promise<void>

use

  • use(extension: CrawlerExtension): void
  • EXPERIMENTAL Function for attaching CrawlerExtensions such as the Unblockers.


    Parameters

    • extension: CrawlerExtension

      Crawler extension that overrides the crawler configuration.

    Returns void

useState

  • useState<State>(defaultValue?: State): Promise<State>
  • Type parameters

    • State: Dictionary = Dictionary

    Parameters

    • defaultValue: State = ...

    Returns Promise<State>
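
    A sketch of sharing a small persisted state object between handler invocations, assuming the crawling context exposes the crawler instance (the pagesSeen counter is illustrative):

    const crawler = new LinkeDOMCrawler({
        async requestHandler({ request, crawler }) {
            // The same object is returned on every call; it is persisted
            // with the crawler state, so the count survives restarts
            const state = await crawler.useState({ pagesSeen: 0 });
            state.pagesSeen += 1;
        },
    });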