
Running your crawler in the Cloud

Apify Platform

Crawlee is developed by Apify, the web scraping and automation platform. You could say it is the home of Crawlee projects. In this section, you'll see how to deploy your crawler there with just a few simple steps. You can deploy a Crawlee project wherever you want, but using the Apify Platform will give you the best experience.

In case you want to deploy your Crawlee project to other platforms, check out the Deployment section.

With a few simple steps, you can convert your Crawlee project into a so-called Actor. Actors are serverless micro-apps that are easy to develop, run, share, and integrate. The infra, proxies, and storages are ready to go. Learn more about Actors.

Choosing between Crawlee CLI and Apify CLI for project setup

We started this guide by using the Crawlee CLI to bootstrap the project - it offers the basic Crawlee templates, including a ready-made Dockerfile. If you know you will be deploying your project to the Apify Platform, you might want to start with the Apify CLI instead. It also offers several project templates, all of which are set up to be used on the Apify Platform right away - see the commands below.
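
For comparison, bootstrapping a new project with either tool looks roughly like this (the project name is just a placeholder):

npx crawlee create my-crawler
apify create my-crawler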

Dependencies

The first step will be installing two new dependencies:

  • Apify SDK, a toolkit for working with the Apify Platform. This will allow us to wire the storages (e.g. RequestQueue and Dataset) to the Apify cloud products - see the short sketch after this list. This will be a dependency of our Node.js project.

    npm install apify
  • Apify CLI, a command-line tool that will help us with authentication and deployment. This will be a globally installed tool; you install it only once and use it in all your Crawlee/Apify projects.

    npm install -g apify-cli
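
To sketch what that storage wiring means in practice, here is a minimal, illustrative snippet (not part of our project yet - the real adjustment follows below): once Actor.init() has been called, the usual Crawlee storage helpers write to the Apify Platform storages when the code runs there.

import { Actor } from 'apify';
import { Dataset } from 'crawlee';

await Actor.init();

// On the Apify Platform this pushes the item into the run's default Dataset;
// when run locally, it still goes to the default local storage.
await Dataset.pushData({ url: 'https://example.com', title: 'Example' });

await Actor.exit();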

Logging in to the Apify Platform

The next step will be creating your Apify account. Don't worry, we have a free tier, so you can try things out before you buy in! Once you have that, it's time to log in with the Apify CLI you just installed. You will need your personal access token, which you can find at https://console.apify.com/account#/integrations.

apify login
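
If you prefer not to enter the token interactively (for example in CI), the CLI also accepts it via a flag; the token value below is a placeholder:

apify login -t <YOUR_APIFY_TOKEN>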

Adjusting the code

Now that you have your account set up, you will need to adjust the code a tiny bit. We will use the Apify SDK, which will help us to wire the Crawlee storages (like the RequestQueue) to their Apify Platform counterparts - otherwise Crawlee would keep things only in memory.

Open your src/main.js file (or src/main.ts if you used a TypeScript template), and add Actor.init() to the beginning of your main script and Actor.exit() to the end of it. Don't forget to await those calls, as both functions are async. Your code should look like this:

src/main.js
import { Actor } from 'apify';
import { PlaywrightCrawler, log } from 'crawlee';
import { router } from './routes.mjs';

await Actor.init();

// This is better set with CRAWLEE_LOG_LEVEL env var
// or a configuration option. This is just for show 😈
log.setLevel(log.LEVELS.DEBUG);

log.debug('Setting up crawler.');
const crawler = new PlaywrightCrawler({
    // Instead of the long requestHandler with
    // if clauses we provide a router instance.
    requestHandler: router,
});

await crawler.run(['https://warehouse-theme-metal.myshopify.com/collections']);

await Actor.exit();

The Actor.init() call will configure Crawlee to use the Apify API instead of its default memory storage interface. It also sets up a few other things, like listening to the platform events via websockets. The Actor.exit() call then handles graceful shutdown - it will close the open handles created by the Actor.init() call; without that, the Node.js process would hang.
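
If you prefer, the Apify SDK also provides the Actor.main() helper, which calls Actor.init() and Actor.exit() around your code for you. A minimal sketch of the same script written that way could look like this:

import { Actor } from 'apify';
import { PlaywrightCrawler } from 'crawlee';
import { router } from './routes.mjs';

// Actor.main() runs Actor.init(), awaits the callback,
// and then calls Actor.exit() for us.
await Actor.main(async () => {
    const crawler = new PlaywrightCrawler({
        requestHandler: router,
    });
    await crawler.run(['https://warehouse-theme-metal.myshopify.com/collections']);
});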

Understanding Actor.init() behavior with environment variables

The Actor.init() call behaves conditionally based on environment variables, namely the APIFY_IS_AT_HOME env var, which is set to true on the Apify Platform. This means that your project will keep working the same way locally, but will use the Apify API when deployed to the Apify Platform.
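
If you ever need to branch on this yourself, the SDK exposes a small helper that reads the same information; a short sketch:

import { Actor } from 'apify';

// Actor.isAtHome() returns true when the code runs on the Apify Platform
// (i.e. when APIFY_IS_AT_HOME is set), and false when it runs locally.
if (Actor.isAtHome()) {
    console.log('Running on the Apify Platform, storages are backed by the Apify API.');
} else {
    console.log('Running locally, storages are kept in local storage.');
}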

Initializing the project

You will also need to initialize the project for Apify. To do that, use the Apify CLI again:

apify init

This will create a folder called .actor with an actor.json file inside it - this file contains the configuration relevant to the Apify Platform, namely the Actor name, version, build tag, and a few other things. Check out the relevant documentation to see all the different things you can set up there.
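
For reference, a freshly generated .actor/actor.json looks roughly like this (the name and version are placeholders, and the exact fields may differ slightly depending on your template):

{
    "actorSpecification": 1,
    "name": "my-crawler",
    "version": "0.0",
    "buildTag": "latest",
    "environmentVariables": {}
}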

Ship it!

And that's all - your project is now ready to be published on the Apify Platform. You can use the Apify CLI once more to do that:

apify push

This command will create an archive from your project, upload it to the Apify Platform, and initiate a Docker build. Once the build finishes, you will get a link to your new Actor on the platform.

Learning more about web scraping

Explore Apify Academy Resources

If you want to learn more about web scraping and browser automation, check out the Apify Academy. It's full of courses and tutorials on the topic, from beginner to advanced. And the best thing: it's free and open source ❤️

If you want to do one more project, check out our tutorial on building a HackerNews scraper using Crawlee.

Thank you! 🎉

That's it! Thanks for reading the whole introduction and if there's anything wrong, please 🙏 let us know on GitHub or in our Discord community. Happy scraping! 👋