FileDownload


Provides a framework for downloading files in parallel using plain HTTP requests. The URLs to download are fed either from a static list or added on the fly from another crawler.

Since FileDownload uses raw HTTP requests to download the files, it is very fast and bandwidth-efficient. However, it doesn't parse the content. If you need to, for example, extract data from the downloaded files, you might need to use CheerioCrawler, PuppeteerCrawler or PlaywrightCrawler instead.

FileDownload downloads each URL using a plain HTTP request and then invokes the user-provided FileDownloadOptions.requestHandler, where the user can specify what to do with the downloaded data.

The source URLs are represented using Request objects that are fed from RequestList or RequestQueue instances provided by the FileDownloadOptions.requestList or FileDownloadOptions.requestQueue constructor options, respectively.

If both FileDownloadOptions.requestList and FileDownloadOptions.requestQueue are used, the instance first processes URLs from the RequestList and automatically enqueues all of them to RequestQueue before it starts their processing. This ensures that a single URL is not crawled multiple times.
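For example, both sources can be passed to the constructor. A minimal sketch (the list name and URL are illustrative):

import { FileDownload, RequestList, RequestQueue } from 'crawlee';

// A static list of URLs plus a dynamic queue that other code can add to.
const requestList = await RequestList.open('my-list', [
    'http://www.example.com/document.pdf',
]);
const requestQueue = await RequestQueue.open();

const crawler = new FileDownload({
    requestList,
    requestQueue,
    requestHandler: async ({ request }) => { /* ... */ },
});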

The crawler finishes when there are no more Request objects to crawl.

We can use the preNavigationHooks to adjust gotOptions:

preNavigationHooks: [
    (crawlingContext, gotOptions) => {
        // ...
    },
]

New requests are only dispatched when there is enough free CPU and memory available, using the functionality provided by the AutoscaledPool class. All AutoscaledPool configuration options can be passed to the autoscaledPoolOptions parameter of the FileDownload constructor. For user convenience, the minConcurrency and maxConcurrency AutoscaledPool options are available directly on the FileDownload constructor.
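For instance, a sketch with illustrative values; anything beyond the two convenience options goes into autoscaledPoolOptions:

const crawler = new FileDownload({
    minConcurrency: 5,
    maxConcurrency: 50,
    // Any other AutoscaledPool option can be passed here:
    autoscaledPoolOptions: { maybeRunIntervalSecs: 1 },
    requestHandler: async ({ body, request }) => { /* ... */ },
});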

Example usage

import { writeFileSync } from 'node:fs';
import { FileDownload } from 'crawlee';

const crawler = new FileDownload({
    requestHandler({ body, request }) {
        writeFileSync(request.url.replace(/[^a-z0-9\.]/gi, '_'), body);
    },
});

Properties


autoscaledPool?: AutoscaledPool

A reference to the underlying AutoscaledPool class that manages the concurrency of the crawler.

NOTE: This property is only initialized after calling the crawler.run() function. We can use it to change the concurrency settings on the fly, to pause the crawler by calling autoscaledPool.pause() or to abort it by calling autoscaledPool.abort().
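A sketch of pausing and resuming on the fly; the pool only exists while the crawler is running, hence the optional chaining:

const crawler = new FileDownload({
    requestHandler: async ({ request }) => {
        // e.g. back off when the server starts rate-limiting us:
        await crawler.autoscaledPool?.pause();
        // ... and later:
        crawler.autoscaledPool?.resume();
    },
});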


config: Configuration = ...


hasFinishedBefore: boolean = false


proxyConfiguration?: ProxyConfiguration

A reference to the underlying ProxyConfiguration class that manages the crawler's proxies. Only available if used by the crawler.


requestList?: RequestList

A reference to the underlying RequestList class that manages the crawler's requests. Only available if used by the crawler.


requestQueue?: RequestProvider

Dynamic queue of URLs to be processed. This is useful for recursive crawling of websites. A reference to the underlying RequestQueue class that manages the crawler's requests. Only available if used by the crawler.


router: RouterHandler = ...

Default Router instance that will be used if we don't specify any requestHandler. See router.addHandler() and router.addDefaultHandler().


running: boolean = false


sessionPool?: SessionPool

A reference to the underlying SessionPool class that manages the crawler's sessions. Only available if used by the crawler.


stats: Statistics

A reference to the underlying Statistics class that collects and logs run statistics for requests.



Methods


  • addRequests(requests, options?): Promise<CrawlerAddRequestsResult>
  • Adds requests to the queue in batches. By default, it resolves after the initial batch is added and continues adding the rest in the background. You can configure the batch size via the batchSize option and the sleep time between batches via waitBetweenBatchesMillis. If you want to wait for all batches to be added to the queue, use the waitForAllRequestsToBeAdded promise you get in the response object.


    Returns Promise<CrawlerAddRequestsResult>
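A sketch of enqueueing a larger batch (the option values are illustrative):

const urls = ['http://www.example.com/a.pdf', 'http://www.example.com/b.pdf'];

// Resolves once the initial batch is enqueued; the rest is added in the background.
const result = await crawler.addRequests(urls, {
    batchSize: 500,
    waitBetweenBatchesMillis: 1000,
});

// Optionally block until every batch has been enqueued:
await result.waitForAllRequestsToBeAdded;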


  • exportData<Data>(path: string, format?: 'json' | 'csv', options?: DatasetExportOptions): Promise<Data[]>
  • Retrieves all the data from the default crawler Dataset and exports it to the specified format. Supported formats are currently 'json' and 'csv'; the format is inferred from the path automatically.

    Type parameters

    • Data


    Returns Promise<Data[]>
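For example, assuming the crawler has already pushed some data, the target format can be picked via the file extension:

// Writes the default dataset to CSV; the format is inferred from '.csv'.
const rows = await crawler.exportData('./results.csv');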



  • getDataset(idOrName?: string): Promise<Dataset<Dictionary>>
  • Retrieves the specified Dataset, or the default crawler Dataset.


    • optional idOrName: string

    Returns Promise<Dataset<Dictionary>>



  • pushData(data: Dictionary | Dictionary[], datasetIdOrName?: string): Promise<void>
  • Pushes data to the specified Dataset, or the default crawler Dataset by calling Dataset.pushData.


    • data: Dictionary | Dictionary[]
    • optional datasetIdOrName: string

    Returns Promise<void>
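A sketch of storing metadata about each downloaded file (the field names are illustrative):

const crawler = new FileDownload({
    requestHandler: async ({ request, body }) => {
        await crawler.pushData({ url: request.url, bytes: body.length });
    },
});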


  • run(requests?, options?): Promise<FinalStatistics>
  • Runs the crawler. Returns a promise that gets resolved once all the requests are processed. We can use the requests parameter to enqueue the initial requests; it is a shortcut for running crawler.addRequests() before crawler.run().


    Returns Promise<FinalStatistics>
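For example (the URL is illustrative):

const stats = await crawler.run([
    'http://www.example.com/document.pdf',
]);
console.log(`Finished ${stats.requestsFinished} requests.`);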


  • This method is periodically called by the crawler, every statusMessageLoggingInterval seconds.


    Returns Promise<void>


  • use(extension: CrawlerExtension): void
  • EXPERIMENTAL. A function for attaching CrawlerExtensions such as the Unblockers.


    • extension: CrawlerExtension

      Crawler extension that overrides the crawler configuration.

    Returns void


  • useState<State>(defaultValue?: State): Promise<State>
  • Type parameters

    • State: Dictionary = Dictionary


    • optional defaultValue: State = ...

    Returns Promise<State>
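A sketch of keeping a persistent counter in the crawler state (the property name is illustrative):

// Gets (or lazily initializes) a state object persisted with the crawler run.
const state = await crawler.useState({ downloadedFiles: 0 });
// e.g. inside the requestHandler:
state.downloadedFiles += 1;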