
Changelog

All notable changes to this project will be documented in this file. See Conventional Commits for commit guidelines.

3.12.1 (2024-12-04)

Features

  • tieredProxyUrls accept null for switching the proxy off (#2743) (82f4ea9), closes #2740

3.12.0 (2024-11-04)

Bug Fixes

Features

3.11.5 (2024-10-04)

Bug Fixes

3.11.4 (2024-09-23)

Bug Fixes

3.11.3 (2024-09-03)

Bug Fixes

  • RequestQueueV2: reset recently handled cache too if the queue is pending for too long (#2656) (51a69bc)

3.11.2 (2024-08-28)

Bug Fixes

  • RequestQueueV2: remove inProgress cache, rely solely on locked states (#2601) (57fcb08)

Features

3.11.1 (2024-07-24)

Note: Version bump only for package @crawlee/core

3.11.0 (2024-07-09)

Features

  • Sitemap-based request list implementation (#2498) (7bf8f0b)

3.10.5 (2024-06-12)

Bug Fixes

  • mark context.request.loadedUrl and id as required inside the request handler (#2531) (2b54660)

3.10.4 (2024-06-11)

Bug Fixes

  • add waitForAllRequestsToBeAdded option to enqueueLinks helper (925546b), closes #2318
  • respect crawler.log when creating child logger for Statistics (0a0d75d), closes #2412

3.10.3 (2024-06-07)

Bug Fixes

  • respect implicit router when no requestHandler is provided in AdaptiveCrawler (#2518) (31083aa)
  • revert the scaling steps back to 5% (5bf32f8)

Features

  • add waitForSelector context helper + parseWithCheerio in adaptive crawler (#2522) (6f88e73)

3.10.2 (2024-06-03)

Note: Version bump only for package @crawlee/core

3.10.1 (2024-05-23)

Bug Fixes

3.10.0 (2024-05-16)

Bug Fixes

  • EnqueueStrategy.All erroring with links using unsupported protocols (#2389) (8db3908)
  • core: conversion between tough cookies and browser pool cookies (#2443) (74f73ab)
  • core: fire local SystemInfo events every second (#2454) (1fa9a66)
  • core: use createSessionFunction when loading Session from persisted state (#2444) (3c56b4c)
  • double tier decrement in tiered proxy (#2468) (3a8204b)

Features

Performance Improvements

  • improve scaling based on memory (#2459) (2d5d443)
  • optimize RequestList memory footprint (#2466) (12210bd)
  • optimize adding large amount of requests via crawler.addRequests() (#2456) (6da86a8)

3.9.2 (2024-04-17)

Bug Fixes

3.9.1 (2024-04-11)

Note: Version bump only for package @crawlee/core

3.9.0 (2024-04-10)

Bug Fixes

  • include actual key in error message of KVS' setValue (#2411) (9089bf1)
  • notify autoscaled pool about newly added requests (#2400) (a90177d)

Features

3.8.2 (2024-03-21)

Bug Fixes

  • core: solve possible dead locks in RequestQueueV2 (#2376) (ffba095)
  • use 0 (number) instead of false as default for sessionRotationCount (#2372) (667a3e7)

Features

  • implement global storage access checking and use it to prevent unwanted side effects in adaptive crawler (#2371) (fb3b7da), closes #2364

3.8.1 (2024-02-22)

Bug Fixes

  • fix crawling context type in router.addHandler() (#2355) (d73c202)

3.8.0 (2024-02-21)

Bug Fixes

  • createRequests works correctly with exclude (and nothing else) (#2321) (048db09)

Features

  • KeyValueStore.recordExists() (#2339) (8507a65)
  • accessing crawler state, key-value store and named datasets via crawling context (#2283) (58dd5fc)
  • adaptive playwright crawler (#2316) (8e4218a)

3.7.3 (2024-01-30)

Bug Fixes

  • enqueueLinks: filter out empty/nullish globs (#2286) (84319b3)

3.7.2 (2024-01-09)

Bug Fixes

  • RequestQueue: always clear locks when a request is reclaimed (#2263) (0fafe29), closes #2262

3.7.1 (2024-01-02)

Note: Version bump only for package @crawlee/core

3.7.0 (2023-12-21)

Bug Fixes

Features

3.6.2 (2023-11-26)

Bug Fixes

  • prevent race condition in KeyValueStore.getAutoSavedValue() (#2193) (e340e2b)

3.6.1 (2023-11-15)

Bug Fixes

  • ts: specify type explicitly for logger (aec3550)

3.6.0 (2023-11-15)

Bug Fixes

  • add skipNavigation option to enqueueLinks (#2153) (118515d)
  • core: respect some advanced options for RequestList.open() + improve docs (#2158) (c5a1b07)
  • declare missing dependency on got-scraping in the core package (cd2fd4d)
  • retry incorrect Content-Type when response has blocked status code (#2176) (b54fb8b), closes #1994

Features

3.5.8 (2023-10-17)

Note: Version bump only for package @crawlee/core

3.5.7 (2023-10-05)

Bug Fixes

3.5.6 (2023-10-04)

Bug Fixes

  • types: re-export RequestQueueOptions as an alias to RequestProviderOptions (#2109) (0900f76)

3.5.5 (2023-10-02)

Bug Fixes

  • session pool leaks memory on multiple crawler runs (#2083) (b96582a), closes #2074 #2031
  • types: make return type of RequestProvider.open and RequestQueue(v2).open strict and accurate (#2096) (dfaddb9)

Features

3.5.4 (2023-09-11)

Bug Fixes

  • core: allow explicit calls to purgeDefaultStorage to wipe the storage on each call (#2060) (4831f07)
  • various helpers opening KVS now respect Configuration (#2071) (59dbb16)

3.5.3 (2023-08-31)

Bug Fixes

  • browser-pool: improve error handling when browser is not found (#2050) (282527f), closes #1459
  • crawler instances with different StorageClients do not affect each other (#2056) (3f4c863)
  • pin all internal dependencies (#2041) (d6f2b17), closes #2040

Features

  • core: add default dataset helpers to BasicCrawler (#2057) (e2a7544)

3.5.2 (2023-08-21)

Bug Fixes

  • make the Request constructor options typesafe (#2034) (75e7d65)

3.5.1 (2023-08-16)

Bug Fixes

  • add Request.maxRetries to the RequestOptions interface (#2024) (6433821)
  • log original error message on session rotation (#2022) (8a11ffb)

3.5.0 (2023-07-31)

Bug Fixes

  • core: add requests from URL list (requestsFromUrl) to the queue in batches (418fbf8), closes #1995
  • core: support relative links in enqueueLinks explicitly provided via urls option (#2014) (cbd9d08), closes #2005

Features

  • core: use RequestQueue.addBatchedRequests() in enqueueLinks helper (4d61ca9), closes #1995
  • retire session on proxy error (#2002) (8c0928b), closes #1912

3.4.2 (2023-07-19)

Features

  • core: add RequestQueue.addRequestsBatched() that is non-blocking (#1996) (c85485d), closes #1995

3.4.1 (2023-07-13)

Bug Fixes

  • http-crawler: replace IncomingMessage with PlainResponse for context's response (#1973) (2a1cc7f), closes #1964

3.4.0 (2023-06-12)

Features

3.3.3 (2023-05-31)

Features

  • add support for requestsFromUrl to RequestQueue (#1917) (7f2557c)
  • core: add Request.maxRetries to allow overriding the maxRequestRetries (#1925) (c5592db)

3.3.2 (2023-05-11)

Bug Fixes

  • respect config object when creating SessionPool (#1881) (db069df)

Features

  • allow running single crawler instance multiple times (#1844) (9e6eb1e), closes #765
  • router: allow inline router definition (#1877) (2d241c9)
  • support alternate storage clients when opening storages (#1901) (661e550)

3.3.1 (2023-04-11)

Bug Fixes

  • Storage: queue up opening storages to prevent issues in concurrent calls (#1865) (044c740)
  • try to detect stuck request queue and fix its state (#1837) (95a9f94)

3.3.0 (2023-03-09)

Bug Fixes

  • ignore invalid URLs in enqueueLinks in browser crawlers (#1803) (5ac336c)

Features

3.2.2 (2023-02-08)

Note: Version bump only for package @crawlee/core

3.2.1 (2023-02-07)

Bug Fixes

  • add QueueOperationInfo export to the core package (5ec6c24)

3.2.0 (2023-02-07)

Bug Fixes

  • clone request.userData when creating new request object (#1728) (222ef59), closes #1725
  • declare missing dependency on tslib (27e96c8), closes #1747
  • ensure CrawlingContext interface is inferred correctly in route handlers (aa84633)
  • utils: add missing dependency on ow (bf0e03c), closes #1716

Features

  • enqueueLinks: add SameOrigin strategy and relax protocol matching for the other strategies (#1748) (4ba982a)

3.1.3 (2022-12-07)

Note: Version bump only for package @crawlee/core

3.1.2 (2022-11-15)

Bug Fixes

  • injectJQuery in context does not survive navs (#1661) (493a7cf)
  • make router error message more helpful for undefined routes (#1678) (ab359d8)
  • MemoryStorage: correctly respect the desc option (#1666) (b5f37f6)
  • requestHandlerTimeout timing (#1660) (493ea0c)
  • shallow clone browserPoolOptions before normalization (#1665) (22467ca)
  • support headfull mode in playwright js project template (ea2e61b)
  • support headfull mode in puppeteer js project template (e6aceb8)

Features

3.1.1 (2022-11-07)

Bug Fixes

Features

  • add static set and useStorageClient shortcuts to Configuration (2e66fa2)
  • enable migration testing (#1583) (ee3a68f)
  • playwright: disable animations when taking screenshots (#1601) (4e63034)

3.1.0 (2022-10-13)

Bug Fixes

  • add overload for KeyValueStore.getValue with defaultValue (#1541) (e3cb509)
  • add retry attempts to methods in CLI (#1588) (9142e59)
  • allow label in enqueueLinksByClickingElements options (#1525) (18b7c25)
  • basic-crawler: handle request.noRetry after errorHandler (#1542) (2a2040e)
  • build storage classes by using this instead of the class (#1596) (2b14eb7)
  • correct some typing exports (#1527) (4a136e5)
  • do not hide stack trace of (retried) Type/Syntax/ReferenceErrors (469b4b5)
  • enqueueLinks: ensure the enqueue strategy is respected alongside user patterns (#1509) (2b0eeed)
  • enqueueLinks: prevent useless request creations when filtering by user patterns (#1510) (cb8fe36)
  • export Cookie from crawlee metapackage (7b02ceb)
  • handle redirect cookies (#1521) (2f7fc7c)
  • http-crawler: do not hang on POST without payload (#1546) (8c87390)
  • remove undeclared dependency on core package from puppeteer utils (827ae60)
  • support TypeScript 4.8 (#1507) (4c3a504)
  • wait for persist state listeners to run when event manager closes (#1481) (aa550ed)

Features

3.0.4 (2022-08-22)

Features

  • bump puppeteer support to 15.1

Bug Fixes

  • key value stores emitting an error when multiple write promises ran in parallel (#1460) (f201cca)
  • fix dockerfiles in project templates

3.0.3 (2022-08-11)

Fixes

  • add missing configuration to CheerioCrawler constructor (#1432)
  • sendRequest types (#1445)
  • respect headless option in browser crawlers (#1455)
  • make CheerioCrawlerOptions type more loose (d871d8c)
  • improve dockerfiles and project templates (7c21a64)

Features

  • add utils.playwright.blockRequests() (#1447)
  • http-crawler (#1440)
  • prefer /INPUT.json files for KeyValueStore.getInput() (#1453)
  • jsdom-crawler (#1451)
  • add RetryRequestError + add error to the context for BC (#1443)
  • add keepAlive to crawler options (#1452)

3.0.2 (2022-07-28)

Fixes

  • regression in resolving the base URL for enqueue link filtering (#1422)
  • improve file saving on memory storage (#1421)
  • add UserData type argument to CheerioCrawlingContext and related interfaces (#1424)
  • always limit desiredConcurrency to the value of maxConcurrency (bcb689d)
  • wait for storage to finish before resolving crawler.run() (9d62d56)
  • using explicitly typed router with CheerioCrawler (07b7e69)
  • declare dependency on ow in @crawlee/cheerio package (be59f99)
  • use crawlee@^3.0.0 in the CLI templates (6426f22)
  • fix building projects with TS when puppeteer and playwright are not installed (#1404)
  • enqueueLinks should respect full URL of the current request for relative link resolution (#1427)
  • use desiredConcurrency: 10 as the default for CheerioCrawler (#1428)

Features

  • feat: allow configuring what status codes will cause session retirement (#1423)
  • feat: add support for middlewares to the Router via use method (#1431)

3.0.1 (2022-07-26)

Fixes

  • remove JSONData generic type arg from CheerioCrawler (#1402)
  • rename default storage folder to just storage (#1403)
  • remove trailing slash for proxyUrl (#1405)
  • run browser crawlers in headless mode by default (#1409)
  • rename interface FailedRequestHandler to ErrorHandler (#1410)
  • ensure default route is not ignored in CheerioCrawler (#1411)
  • add headless option to BrowserCrawlerOptions (#1412)
  • processing custom cookies (#1414)
  • enqueue link not finding relative links if the checked page is redirected (#1416)
  • fix building projects with TS when puppeteer and playwright are not installed (#1404)
  • calling enqueueLinks in browser crawler on page without any links (385ca27)
  • improve error message when no default route provided (04c3b6a)

Features

  • feat: add parseWithCheerio for puppeteer & playwright (#1418)

3.0.0 (2022-07-13)

This section summarizes most of the breaking changes between Crawlee (v3) and Apify SDK (v2). Crawlee is the spiritual successor to Apify SDK, so we decided to keep the versioning and release Crawlee as v3.

Crawlee vs Apify SDK

Up until version 3, the apify package contained both the scraping-related tools and the Apify platform-related helper methods. With v3 we are splitting the whole project into two main parts:

  • Crawlee, the new web-scraping library, available as crawlee package on NPM
  • Apify SDK, helpers for the Apify platform, available as apify package on NPM

Moreover, the Crawlee library is published as several packages under @crawlee namespace:

  • @crawlee/core: the base for all the crawler implementations, also contains things like Request, RequestQueue, RequestList or Dataset classes
  • @crawlee/basic: exports BasicCrawler
  • @crawlee/cheerio: exports CheerioCrawler
  • @crawlee/browser: exports BrowserCrawler (which is used for creating @crawlee/playwright and @crawlee/puppeteer)
  • @crawlee/playwright: exports PlaywrightCrawler
  • @crawlee/puppeteer: exports PuppeteerCrawler
  • @crawlee/memory-storage: @apify/storage-local alternative
  • @crawlee/browser-pool: previously browser-pool package
  • @crawlee/utils: utility methods
  • @crawlee/types: holds TS interfaces mainly about the StorageClient

Installing Crawlee

As Crawlee is not yet released as latest, we need to install from the next distribution tag!

Most of the Crawlee packages are extending and reexporting each other, so it's enough to install just the one you plan on using, e.g. @crawlee/playwright if you plan on using playwright - it already contains everything from the @crawlee/browser package, which includes everything from @crawlee/basic, which includes everything from @crawlee/core.

npm install crawlee@next

Or if all we need is cheerio support, we can install only @crawlee/cheerio

npm install @crawlee/cheerio@next

When using playwright or puppeteer, we still need to install those dependencies explicitly - this allows the users to be in control of which version will be used.

npm install crawlee@next playwright
# or npm install @crawlee/playwright@next playwright

Alternatively we can also use the crawlee meta-package which contains (re-exports) most of the @crawlee/* packages, and therefore contains all the crawler classes.
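For example, both of the following imports resolve to the same crawler class; choosing the scoped package only slims down the dependency tree (a minimal sketch):

// via the meta-package
import { CheerioCrawler } from 'crawlee';

// or via the scoped package only
// import { CheerioCrawler } from '@crawlee/cheerio';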

Sometimes you might want to use some utility methods from @crawlee/utils, so you can install that one as well. This package contains some utilities that were previously available under Apify.utils. Browser related utilities can also be found in the crawler packages (e.g. @crawlee/playwright).
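As an illustration (a sketch, assuming @crawlee/utils is installed), the sleep and downloadListOfUrls helpers that used to live under Apify.utils can be imported directly:

import { sleep, downloadListOfUrls } from '@crawlee/utils';

// download a plain-text list of URLs (one per line)
const urls = await downloadListOfUrls({ url: 'https://example.com/urls.txt' });

// pause for one second
await sleep(1000);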

Full TypeScript support

Both Crawlee and Apify SDK are full TypeScript rewrites, so they include up-to-date types in the package. For your TypeScript crawlers we recommend using our predefined TypeScript configuration from the @apify/tsconfig package. Don't forget to set the module and target to ES2022 or above to be able to use top level await.

The @apify/tsconfig config has noImplicitAny enabled; you might want to disable it during initial development, as it will cause build failures if you leave unused local variables in your code.

tsconfig.json
{
    "extends": "@apify/tsconfig",
    "compilerOptions": {
        "module": "ES2022",
        "target": "ES2022",
        "outDir": "dist",
        "lib": ["DOM"]
    },
    "include": [
        "./src/**/*"
    ]
}

Docker build

For the Dockerfile we recommend using a multi-stage build, so you don't install dev dependencies like TypeScript in your final image:

Dockerfile
# using multistage build, as we need dev deps to build the TS source code
FROM apify/actor-node:16 AS builder

# copy all files, install all dependencies (including dev deps) and build the project
COPY . ./
RUN npm install --include=dev \
&& npm run build

# create final image
FROM apify/actor-node:16
# copy only necessary files
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/README.md ./
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/apify.json ./apify.json
COPY --from=builder /usr/src/app/INPUT_SCHEMA.json ./INPUT_SCHEMA.json

# install only prod deps
RUN npm --quiet set progress=false \
&& npm install --only=prod --no-optional \
&& echo "Installed NPM packages:" \
&& (npm list --only=prod --no-optional --all || true) \
&& echo "Node.js version:" \
&& node --version \
&& echo "NPM version:" \
&& npm --version

# run compiled code
CMD npm run start:prod

Browser fingerprints

Previously we had a magical stealth option in the puppeteer crawler that enabled several tricks aiming to mimic the real users as much as possible. While this worked to a certain degree, we decided to replace it with generated browser fingerprints.

In case we don't want to have dynamic fingerprints, we can disable this behaviour via useFingerprints in browserPoolOptions:

const crawler = new PlaywrightCrawler({
    browserPoolOptions: {
        useFingerprints: false,
    },
});

Previously, if we wanted to get or add cookies for the session that would be used for the request, we had to call session.getPuppeteerCookies() or session.setPuppeteerCookies(). Since this method could be used for any of our crawlers, not just PuppeteerCrawler, the methods have been renamed to session.getCookies() and session.setCookies() respectively. Otherwise, their usage is exactly the same!
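A minimal before/after sketch inside a request handler (the cookie values themselves are untouched):

const crawler = new PlaywrightCrawler({
    async requestHandler({ session, request }) {
        // before: session.getPuppeteerCookies(request.url)
        const cookies = session.getCookies(request.url);
        // before: session.setPuppeteerCookies(cookies, request.url)
        session.setCookies(cookies, request.url);
    },
});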

Memory storage

When we store some data or intermediate state (like the one RequestQueue holds), we now use @crawlee/memory-storage by default. It is an alternative to @apify/storage-local that keeps the state in memory (as opposed to the SQLite database used by @apify/storage-local). While the state is stored in memory, it is also dumped to the file system so we can inspect it, and any existing data stored in the KeyValueStore (e.g. the INPUT.json file) is respected.

When we want to run the crawler on Apify platform, we need to use Actor.init or Actor.main, which will automatically switch the storage client to ApifyClient when on the Apify platform.

We can still use @apify/storage-local; to do so, first install it and then pass it to the Actor.init or Actor.main options:

@apify/storage-local v2.1.0+ is required for Crawlee

import { Actor } from 'apify';
import { ApifyStorageLocal } from '@apify/storage-local';

const storage = new ApifyStorageLocal(/* options like `enableWalMode` belong here */);
await Actor.init({ storage });

Purging of the default storage

Previously the state was preserved between local runs, and we had to use the --purge argument of the apify-cli. With Crawlee, this is now the default behaviour: we purge the storage automatically on the Actor.init/main call. We can opt out of it via purge: false in the Actor.init options.
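For example (a sketch, assuming the apify package is installed):

import { Actor } from 'apify';

// keep whatever the previous local run left in the default storage
await Actor.init({ purge: false });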

Renamed crawler options and interfaces

Some options were renamed to better reflect what they do. We still support all the old parameter names too, but not at the TS level.

  • handleRequestFunction -> requestHandler
  • handlePageFunction -> requestHandler
  • handleRequestTimeoutSecs -> requestHandlerTimeoutSecs
  • handlePageTimeoutSecs -> requestHandlerTimeoutSecs
  • requestTimeoutSecs -> navigationTimeoutSecs
  • handleFailedRequestFunction -> failedRequestHandler
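For illustration, a v2 crawler definition might be migrated like this (a sketch; the old names keep working at runtime, only the TS types changed):

// Before (v2):
// const crawler = new Apify.CheerioCrawler({
//     handlePageFunction: async (context) => { /* ... */ },
//     handlePageTimeoutSecs: 60,
//     handleFailedRequestFunction: async (context) => { /* ... */ },
// });

// After (v3):
const crawler = new CheerioCrawler({
    requestHandler: async (context) => { /* ... */ },
    requestHandlerTimeoutSecs: 60,
    failedRequestHandler: async (context) => { /* ... */ },
});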

We also renamed the crawling context interfaces, so they follow the same convention and are more meaningful:

  • CheerioHandlePageInputs -> CheerioCrawlingContext
  • PlaywrightHandlePageFunction -> PlaywrightCrawlingContext
  • PuppeteerHandlePageFunction -> PuppeteerCrawlingContext

Context aware helpers

Some utilities previously available under Apify.utils namespace are now moved to the crawling context and are context aware. This means they have some parameters automatically filled in from the context, like the current Request instance or current Page object, or the RequestQueue bound to the crawler.

One common helper that received more attention is enqueueLinks. As mentioned above, it is context aware - we no longer need to pass in the requestQueue or page arguments (or the cheerio handle $). In addition to that, it now offers 3 enqueuing strategies:

  • EnqueueStrategy.All ('all'): Matches any URLs found
  • EnqueueStrategy.SameHostname ('same-hostname'): Matches any URLs that have the same subdomain as the base URL (default)
  • EnqueueStrategy.SameDomain ('same-domain'): Matches any URLs that have the same domain name. For example, https://wow.an.example.com and https://example.com will both be matched for a base URL of https://example.com.

This means we can even call enqueueLinks() without any parameters. By default, it will go through all the links found on the current page and filter only those targeting the same subdomain.
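If the default is not what we want, the strategy can be passed explicitly (a sketch using the string form):

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            // equivalent to EnqueueStrategy.SameDomain
            strategy: 'same-domain',
        });
    },
});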

Moreover, we can specify patterns the URL should match via globs:

const crawler = new PlaywrightCrawler({
    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            globs: ['https://apify.com/*/*'],
            // we can also use `regexps` and `pseudoUrls` keys here
        });
    },
});

Implicit RequestQueue instance

All crawlers now have the RequestQueue instance automatically available via the crawler.getRequestQueue() method. It will create the instance for you if it does not exist yet. This means we no longer need to create the RequestQueue instance manually, and we can just use the crawler.addRequests() method described underneath.

We can still create the RequestQueue explicitly; the crawler.getRequestQueue() method will respect that and return the instance provided via crawler options.
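A small sketch of accessing the implicit queue directly, e.g. to seed it before the run:

const crawler = new CheerioCrawler({ /* ... */ });

// created lazily on first access
const requestQueue = await crawler.getRequestQueue();
await requestQueue.addRequest({ url: 'https://crawlee.dev' });

await crawler.run();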

crawler.addRequests()

We can now add multiple requests in batches. The newly added addRequests method will handle everything for us. It enqueues the first 1000 requests and resolves, while continuing with the rest in the background, again in smaller batches of 1000 items, so we don't hit any API rate limits. This means the crawling will start almost immediately (within a few seconds at most), something previously possible only with a combination of RequestQueue and RequestList.

// will resolve right after the initial batch of 1000 requests is added
const result = await crawler.addRequests([/* many requests, can be even millions */]);

// if we want to wait for all the requests to be added, we can await the `waitForAllRequestsToBeAdded` promise
await result.waitForAllRequestsToBeAdded;

Less verbose error logging

Previously an error thrown from inside the request handler resulted in the full error object being logged. With Crawlee, we log only the error message as a warning as long as we know the request will be retried. If you want to enable verbose logging like in v2, use the CRAWLEE_VERBOSE_LOG env var.

Removal of requestAsBrowser

In v1 we replaced the underlying implementation of requestAsBrowser to be just a proxy over calling got-scraping - our custom extension to got that tries to mimic the real browsers as much as possible. With v3, we are removing the requestAsBrowser, encouraging the use of got-scraping directly.

For easier migration, we also added context.sendRequest() helper that allows processing the context bound Request object through got-scraping:

const crawler = new BasicCrawler({
    async requestHandler({ sendRequest, log }) {
        // we can use the options parameter to override gotScraping options
        const res = await sendRequest({ responseType: 'json' });
        log.info('received body', res.body);
    },
});

How to use sendRequest()?

See the Got Scraping guide.

Removed options

The useInsecureHttpParser option has been removed. It's permanently set to true in order to better mimic browsers' behavior.

Got Scraping automatically performs protocol negotiation, hence we removed the useHttp2 option. It's set to true, as 100% of browsers nowadays are capable of HTTP/2 requests, and more and more of the web is using it too.

Renamed options

In the requestAsBrowser approach, some of the options were named differently. Here's a list of renamed options:

payload

This option represents the body to send. It could be a string or a Buffer. However, there is no payload option anymore; you need to use body instead, or json if you wish to send JSON. Here's an example:

// Before:
await Apify.utils.requestAsBrowser({ …, payload: 'Hello, world!' });
await Apify.utils.requestAsBrowser({ …, payload: Buffer.from('c0ffe', 'hex') });
await Apify.utils.requestAsBrowser({ …, json: { hello: 'world' } });

// After:
await gotScraping({ …, body: 'Hello, world!' });
await gotScraping({ …, body: Buffer.from('c0ffe', 'hex') });
await gotScraping({ …, json: { hello: 'world' } });

ignoreSslErrors

It has been renamed to https.rejectUnauthorized. By default, it's set to false for convenience. However, if you want to make sure the connection is secure, you can do the following:

// Before:
await Apify.utils.requestAsBrowser({ …, ignoreSslErrors: false });

// After:
await gotScraping({ …, https: { rejectUnauthorized: true } });

Please note: the meanings are opposite! So we needed to invert the values as well.

header-generator options

useMobileVersion, languageCode and countryCode no longer exist. Instead, you need to use headerGeneratorOptions directly:

// Before:
await Apify.utils.requestAsBrowser({
    …,
    useMobileVersion: true,
    languageCode: 'en',
    countryCode: 'US',
});

// After:
await gotScraping({
    …,
    headerGeneratorOptions: {
        devices: ['mobile'], // or ['desktop']
        locales: ['en-US'],
    },
});

timeoutSecs

In order to set a timeout, use timeout.request (which is in milliseconds now).

// Before:
await Apify.utils.requestAsBrowser({
    …,
    timeoutSecs: 30,
});

// After:
await gotScraping({
    …,
    timeout: {
        request: 30 * 1000,
    },
});

throwOnHttpErrors

throwOnHttpErrors -> throwHttpErrors. This option throws on unsuccessful HTTP status codes, for example 404. By default, it's set to false.

decodeBody

decodeBody -> decompress. This option decompresses the body. Defaults to true - please do not change this or websites will break (unless you know what you're doing!).

abortFunction

This function used to make the promise throw on specific responses, if it returned true. However, it wasn't that useful.

You probably want to cancel the request instead, which you can do in the following way:

const promise = gotScraping();

promise.on('request', request => {
    // Please note this is not a Got Request instance, but a ClientRequest one.
    // https://nodejs.org/api/http.html#class-httpclientrequest

    if (request.protocol !== 'https:') {
        // Unsecure request, abort.
        promise.cancel();

        // If you set `isStream` to `true`, please use `stream.destroy()` instead.
    }
});

const response = await promise;

Removal of browser pool plugin mixing

Previously, you were able to have a browser pool that would mix Puppeteer and Playwright plugins (or even your own custom plugins if you've built any). As of this version, that is no longer allowed, and creating such a browser pool will cause an error to be thrown (it's expected that all plugins that will be used are of the same type).

Handling requests outside of browser

One small feature worth mentioning is the ability to handle requests with browser crawlers outside the browser. To do that, we can use a combination of Request.skipNavigation and context.sendRequest().

Take a look at how to achieve this by checking out the Skipping navigation for certain requests example!
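A rough sketch of that pattern (the URLs and the JSON shape are placeholders):

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page, sendRequest, log }) {
        if (request.skipNavigation) {
            // the browser did not navigate - fetch the resource over plain HTTP instead
            const { body } = await sendRequest({ responseType: 'json' });
            log.info('Fetched outside the browser', { url: request.url });
            return;
        }
        log.info(`Title: ${await page.title()}`);
    },
});

await crawler.addRequests([
    { url: 'https://example.com/page' },
    { url: 'https://example.com/api/data.json', skipNavigation: true },
]);
await crawler.run();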

Logging

Crawlee exports the default log instance directly as a named export. We also have a scoped log instance provided in the crawling context - this one will log messages prefixed with the crawler name and should be preferred for logging inside the request handler.

const crawler = new CheerioCrawler({
    async requestHandler({ log, request }) {
        log.info(`Opened ${request.loadedUrl}`);
    },
});

Auto-saved crawler state

Every crawler instance now has a useState() method that will return a state object we can use. It will be automatically saved when the persistState event occurs. The value is cached, so we can freely call this method multiple times and get the exact same reference. No need to worry about saving the value either, as it will happen automatically.

const crawler = new CheerioCrawler({
    async requestHandler({ crawler }) {
        const state = await crawler.useState({ foo: [] as number[] });
        // just change the value, no need to care about saving it
        state.foo.push(123);
    },
});

Apify SDK

The Apify platform helpers can now be found in the Apify SDK (apify NPM package). It exports the Actor class that offers the following static helpers:

  • ApifyClient shortcuts: addWebhook(), call(), callTask(), metamorph()
  • helpers for running on Apify platform: init(), exit(), fail(), main(), isAtHome(), createProxyConfiguration()
  • storage support: getInput(), getValue(), openDataset(), openKeyValueStore(), openRequestQueue(), pushData(), setValue()
  • events support: on(), off()
  • other utilities: getEnv(), newClient(), reboot()
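For instance, the storage shortcuts can be used directly in an actor (a minimal sketch):

import { Actor } from 'apify';

await Actor.init();

const input = await Actor.getInput();
await Actor.pushData({ ok: true, input });
await Actor.setValue('OUTPUT', { finished: true });

await Actor.exit();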

Actor.main is now just syntax sugar around calling Actor.init() at the beginning and Actor.exit() at the end (plus wrapping the user function in a try/catch block). All those methods are async and should be awaited - with Node.js 16 we can use top level await for that. In other words, the following two snippets are equivalent:

import { Actor } from 'apify';

await Actor.init();
// your code
await Actor.exit('Crawling finished!');

import { Actor } from 'apify';

await Actor.main(async () => {
    // your code
}, { statusMessage: 'Crawling finished!' });

Actor.init() will conditionally set the storage implementation of Crawlee to the ApifyClient when running on the Apify platform, or keep the default (memory storage) implementation otherwise. It will also subscribe to the websocket events (or mimic them locally). Actor.exit() will handle the tear down and calls process.exit() to ensure our process won't hang indefinitely for some reason.

Events

Apify SDK (v2) exports Apify.events, which is an EventEmitter instance. With Crawlee, the events are managed by the EventManager class instead. We can either access it via the Actor.eventManager getter, or use the Actor.on and Actor.off shortcuts.

-Apify.events.on(...);
+Actor.on(...);

We can also get the EventManager instance via Configuration.getEventManager().

In addition to the existing events, we now have an exit event fired when calling Actor.exit() (which is called at the end of Actor.main()). This event allows you to gracefully shut down any resources when Actor.exit is called.
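For example (a sketch; the cleanup body is hypothetical):

import { Actor } from 'apify';

Actor.on('exit', () => {
    // hypothetical cleanup, e.g. closing database connections or flushing buffers
    console.log('Shutting down gracefully');
});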

Smaller/internal breaking changes

  • Apify.call() is now just a shortcut for running ApifyClient.actor(actorId).call(input, options), while also taking the token inside env vars into account
  • Apify.callTask() is now just a shortcut for running ApifyClient.task(taskId).call(input, options), while also taking the token inside env vars into account
  • Apify.metamorph() is now just a shortcut for running ApifyClient.task(taskId).metamorph(input, options), while also taking the ACTOR_RUN_ID inside env vars into account
  • Apify.waitForRunToFinish() has been removed, use ApifyClient.waitForFinish() instead
  • Actor.main/init purges the storage by default
  • remove purgeLocalStorage helper, move purging to the storage class directly
    • StorageClient interface now has optional purge method
    • purging happens automatically via Actor.init() (you can opt out via purge: false in the options of init/main methods)
  • QueueOperationInfo.request is no longer available
  • Request.handledAt is now a string date in ISO format
  • Request.inProgress and Request.reclaimed are now Sets instead of POJOs
  • injectUnderscore from puppeteer utils has been removed
  • APIFY_MEMORY_MBYTES is no longer taken into account, use CRAWLEE_AVAILABLE_MEMORY_RATIO instead
  • some AutoscaledPool options are no longer available:
    • cpuSnapshotIntervalSecs and memorySnapshotIntervalSecs have been replaced with the top level systemInfoIntervalMillis configuration
    • maxUsedCpuRatio has been moved to the top level configuration
  • ProxyConfiguration.newUrlFunction can be async. .newUrl() and .newProxyInfo() now return promises.
  • prepareRequestFunction and postResponseFunction options are removed, use navigation hooks instead
  • gotoFunction and gotoTimeoutSecs are removed
  • removed compatibility fix for old/broken request queues with null Request props
  • fingerprintsOptions renamed to fingerprintOptions (fingerprints -> fingerprint).
  • fingerprintOptions now accept useFingerprintCache and fingerprintCacheSize (instead of useFingerprintPerProxyCache and fingerprintPerProxyCacheSize, which are now no longer available). This is because the cached fingerprints are no longer connected to proxy URLs but to sessions.

2.3.2 (2022-05-05)

  • fix: use default user agent for playwright with chrome instead of the default "headless UA"
  • fix: always hide webdriver of chrome browsers

2.3.1 (2022-05-03)

  • fix: utils.apifyClient early instantiation (#1330)
  • feat: utils.playwright.injectJQuery() (#1337)
  • feat: add keyValueStore option to Statistics class (#1345)
  • fix: ensure failed req count is correct when using RequestList (#1347)
  • fix: random puppeteer crawler (running in headful mode) failure (#1348)

    This should help with the "We either navigate top level or have old version of the navigated frame" bug in puppeteer.

  • fix: allow returning falsy values in RequestTransform's return type

2.3.0 (2022-04-07)

  • feat: accept more social media patterns (#1286)
  • feat: add multiple click support to enqueueLinksByClickingElements (#1295)
  • feat: instance-scoped "global" configuration (#1315)
  • feat: requestList accepts proxyConfiguration for requestsFromUrls (#1317)
  • feat: update playwright to v1.20.2
  • feat: update puppeteer to v13.5.2

    We noticed that with this version of puppeteer the actor run could crash with the "We either navigate top level or have old version of the navigated frame" error (puppeteer issue here). It should not happen while running the browser in headless mode. In case you need to run the browser in headful mode (headless: false), we recommend pinning the puppeteer version to 10.4.0 in the actor package.json file.

  • feat: stealth deprecation (#1314)
  • feat: allow passing a stream to KeyValueStore.setRecord (#1325)
  • fix: use correct apify-client instance for snapshotting (#1308)
  • fix: automatically reset RequestQueue state after 5 minutes of inactivity, closes #997
  • fix: improve guessing of chrome executable path on windows (#1294)
  • fix: prune CPU snapshots locally (#1313)
  • fix: improve browser launcher types (#1318)

0 concurrency mitigation

This release should resolve the 0 concurrency bug by automatically resetting the internal RequestQueue state after 5 minutes of inactivity.

We now track last activity done on a RequestQueue instance:

  • added new request
  • started processing a request (added to inProgress cache)
  • marked request as handled
  • reclaimed request

If we don't detect one of those actions in the last 5 minutes, and we have some requests in the inProgress cache, we try to reset the state. We can override this limit via the CRAWLEE_INTERNAL_TIMEOUT env var.

This should finally resolve the 0 concurrency bug, as it was always about stuck requests in the inProgress cache.

2.2.2 (2022-02-14)

  • fix: ensure request.headers is set
  • fix: lower RequestQueue API timeout to 30 seconds
  • improve logging for fetching next request and timeouts

2.2.1 (2022-01-03)

  • fix: ignore requests that are no longer in progress (#1258)
  • fix: do not use tryCancel() from inside sync callback (#1265)
  • fix: revert to puppeteer 10.x (#1276)
  • fix: wait when body is not available in infiniteScroll() from Puppeteer utils (#1238)
  • fix: expose logger classes on the utils.log instance (#1278)

2.2.0 (2021-12-17)

Proxy per page

Up until now, browser crawlers used the same session (and therefore the same proxy) for all requests from a single browser; now each session gets a new proxy. This means that with incognito pages, each page will get a new proxy, aligning the behaviour with CheerioCrawler.

This feature is not enabled by default. To use it, we need to enable useIncognitoPages flag under launchContext:

new Apify.PlaywrightCrawler({
    launchContext: {
        useIncognitoPages: true,
    },
    // ...
})

Note that currently there is a performance overhead for using useIncognitoPages. Use this flag at your own will.

We are planning to enable this feature by default in SDK v3.0.

Abortable timeouts

Previously when a page function timed out, the task still kept running. This could lead to requests being processed multiple times. In v2.2 we now have abortable timeouts that will cancel the task as early as possible.

Mitigation of zero concurrency issue

Several new timeouts were added to the task function, which should help mitigate the zero concurrency bug. Namely fetching of next request information and reclaiming failed requests back to the queue are now executed with a timeout with 3 additional retries before the task fails. The timeout is always at least 300s (5 minutes), or requestHandlerTimeoutSecs if that value is higher.

Full list of changes

  • fix RequestError: URI malformed in cheerio crawler (#1205)
  • only provide Cookie header if cookies are present (#1218)
  • handle extra cases for diffCookie (#1217)
  • add timeout for task function (#1234)
  • implement proxy per page in browser crawlers (#1228)
  • add fingerprinting support (#1243)
  • implement abortable timeouts (#1245)
  • add timeouts with retries to runTaskFunction() (#1250)
  • automatically convert google spreadsheet URLs to CSV exports (#1255)

2.1.0 (2021-10-07)

  • automatically convert google docs share urls to csv download ones in request list (#1174)
  • use puppeteer emulating scrolls instead of window.scrollBy (#1170)
  • warn if apify proxy is used in proxyUrls (#1173)
  • fix YOUTUBE_REGEX_STRING being too greedy (#1171)
  • add purgeLocalStorage utility method (#1187)
  • catch errors inside request interceptors (#1188, #1190)
  • add support for cgroups v2 (#1177)
  • fix incorrect offset in fixUrl function (#1184)
  • support channel and user links in YouTube regex (#1178)
  • fix: allow passing requestsFromUrl to RequestListOptions in TS (#1191)
  • allow passing forceCloud down to the KV store (#1186), closes #752
  • merge cookies from session with user provided ones (#1201), closes #1197
  • use ApifyClient v2 (full rewrite to TS)

2.0.7 (2021-09-08)

  • Fix casting of int/bool environment variables (e.g. APIFY_LOCAL_STORAGE_ENABLE_WAL_MODE), closes #956
  • Fix incognito pages and user data dir (#1145)
  • Add @ts-ignore comments to imports of optional peer dependencies (#1152)
  • Use config instance in sdk.openSessionPool() (#1154)
  • Add a breaking callback to infiniteScroll (#1140)

2.0.6 (2021-08-27)

  • Fix deprecation messages logged from ProxyConfiguration and CheerioCrawler.
  • Update got-scraping to receive multiple improvements.

2.0.5 (2021-08-24)

  • Fix error handling in puppeteer crawler

2.0.4 (2021-08-23)

  • Use sessionToken with got-scraping

2.0.3 (2021-08-20)

  • BREAKING IN EDGE CASES: We removed forceUrlEncoding in requestAsBrowser because we found out that recent versions of the underlying HTTP client (got) already encode URLs, and forceUrlEncoding could lead to weird behavior. We think of this as fixing a bug, so we're not bumping the major version.
  • Limit handleRequestTimeoutMillis to max valid value to prevent Node.js fallback to 1.
  • Use got-scraping@^3.0.1
  • Disable SSL validation on MITM proxies

2.0.2 (2021-08-12)

  • Fix serialization issues in CheerioCrawler caused by parser conflicts in recent versions of cheerio.

2.0.1 (2021-08-06)

  • Use got-scraping 2.0.1 until fully compatible.

2.0.0 (2021-08-05)

  • BREAKING: Require Node.js >=15.10.0 because HTTP2 support on lower Node.js versions is very buggy.
  • BREAKING: Bump cheerio to 1.0.0-rc.10 from rc.3. There were breaking changes in cheerio between the versions so this bump might be breaking for you as well.
  • Remove LiveViewServer which was deprecated before release of SDK v1.