Crawlee is a web scraping and browser automation library
It helps you build reliable crawlers. Fast.
npx crawlee create my-crawler
Reliable crawling 🏗️
Crawlee won't fix broken selectors for you (yet), but it helps you build and maintain your crawlers faster.
When a website adds JavaScript rendering, you don't have to rewrite everything, only switch to one of the browser crawlers. When you later find a great API to speed up your crawls, flip the switch back.
It keeps your proxies healthy by rotating them smartly with good fingerprints that make your crawlers look human-like. It's not unblockable, but it will save you money in the long run.
Crawlee is built by people who scrape for a living and use it every day to scrape millions of pages. Meet our community on Discord.
JavaScript & TypeScript
We believe websites are best scraped in the language they're written in. Crawlee runs on Node.js and is built in TypeScript to improve code completion in your IDE, even if you don't use TypeScript yourself. Crawlee supports both TypeScript and JavaScript projects.
HTTP scraping
Crawlee makes HTTP requests that mimic browser headers and TLS fingerprints. It also rotates them automatically based on data about real-world traffic. Popular HTML parsers Cheerio and JSDOM are included.
Headless browsers
Switch your crawlers from HTTP to headless browsers in 3 lines of code. Crawlee builds on top of Puppeteer and Playwright and adds its own anti-blocking features and human-like fingerprints. Chrome, Firefox and more.
Automatic scaling and proxy management
Crawlee automatically manages concurrency based on available system resources and smartly rotates proxies. Proxies that often time-out, return network errors or bad HTTP codes like 401 or 403 are discarded.
Queue and Storage
You can save files, screenshots and JSON results to disk with one line of code or plug an adapter for your DB. Your URLs are kept in a queue that ensures their uniqueness and that you don't lose progress when something fails.
Helpful utils and configurability
Crawlee includes tools for extracting social handles or phone numbers, infinite scrolling, blocking unwanted assets and many more. It works great out of the box, but also provides rich configuration options.
Try Crawlee out 👾
The fastest way to try Crawlee out is to use the Crawlee CLI and choose the Getting started example. The CLI will install all the necessary dependencies and add boilerplate code for you to play with.
npx crawlee create my-crawler
If you prefer adding Crawlee into your own project, try the example below. Because it uses PlaywrightCrawler, we also need to install Playwright. It's not bundled with Crawlee to reduce install size.
npm install crawlee playwright
import { PlaywrightCrawler } from 'crawlee';

// PlaywrightCrawler crawls the web using a headless browser controlled by the Playwright library.
const crawler = new PlaywrightCrawler({
    // Use the requestHandler to process each of the crawled pages.
    async requestHandler({ request, page, enqueueLinks, pushData, log }) {
        const title = await page.title();
        log.info(`Title of ${request.loadedUrl} is '${title}'`);

        // Save results as JSON to the `./storage/datasets/default` directory.
        await pushData({ title, url: request.loadedUrl });

        // Extract links from the current page and add them to the crawling queue.
        await enqueueLinks();
    },

    // Uncomment this option to see the browser window.
    // headless: false,

    // Comment this option to scrape the full website.
    maxRequestsPerCrawl: 20,
});

// Add first URL to the queue and start the crawl.
await crawler.run(['https://crawlee.dev']);

// Export the whole dataset to a single file in `./result.csv`.
await crawler.exportData('./result.csv');

// Or work with the data directly.
const data = await crawler.getData();
console.table(data.items);
Deploy to the cloud ☁️
Crawlee is developed by Apify, the web scraping and automation platform. You can deploy a Crawlee project wherever you want (see our deployment guides for AWS Lambda and Google Cloud), but using the Apify platform will give you the best experience. With a few simple steps, you can convert your Crawlee project into a so-called Actor. Actors are serverless micro-apps that are easy to develop, run, share, and integrate. The infrastructure, proxies, and storages are ready to go. Learn more about Actors.
1️⃣ First, install the Apify SDK into your project, as well as the Apify CLI. The SDK will help with the Apify integration, while the CLI will help you with initialization and deployment.
npm install apify
npm install -g apify-cli
2️⃣ The next step is to add Actor.init() to the beginning of your main script and Actor.exit() to the end of it. This enables the integration with the Apify platform, so that its cloud storages (e.g. RequestQueue) will be used. The code should look like this:
import { PlaywrightCrawler } from 'crawlee';
// Import the `Actor` class from the Apify SDK.
import { Actor } from 'apify';
// Set up the integration to Apify.
await Actor.init();
// Crawler setup from the previous example.
const crawler = new PlaywrightCrawler({
    // ...
});
await crawler.run(['https://crawlee.dev']);
// Once finished, clean up the environment.
await Actor.exit();
3️⃣ Then you will need to sign up for an Apify account. Once you have it, use the Apify CLI to log in via apify login. The last two steps also involve the Apify CLI. Call apify init first, which will add the Apify config to your project, and finally run apify push to deploy it.
apify login # so the CLI knows you
apify init # and the Apify platform understands your project
apify push # time to ship it!