Build your Python web crawlers using Crawlee
It helps you build reliable Python web crawlers. Fast.
pipx run crawlee create my-crawler
Build reliable Python web crawlers 🏗️
Crawlee won't fix broken selectors for you (yet), but it helps you build and maintain your Python web crawlers faster.
When a website adds JavaScript rendering, you don't have to rewrite everything; just switch to a browser crawler. When you later find a great API to speed up your crawls, flip the switch back.
Crawlee is built by people who scrape for a living and use it every day to scrape millions of pages. Meet our community on Discord.
Python with type hints
Crawlee for Python is written in a modern way using type hints, providing code completion in your IDE and helping you catch bugs early, before your crawler ever runs.
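As a minimal sketch (assuming the BeautifulSoupCrawler that ships with Crawlee; the handler body is illustrative), a fully typed request handler lets your editor complete attributes like context.soup and lets a type checker flag mistakes:

from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext

crawler = BeautifulSoupCrawler()

@crawler.router.default_handler
async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
    # The type hint tells the IDE that `context.soup` is a parsed
    # BeautifulSoup document, so attribute typos are caught up front.
    context.log.info(f'Title tag: {context.soup.title}')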
Headless browsers
Switch your crawlers from HTTP to a headless browser in 3 lines of code. Crawlee builds on top of Playwright and adds its own features, supporting Chrome, Firefox, and more.
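For illustration, here is a sketch of that switch (reusing the crawler classes shown elsewhere on this page): only the import, the crawler class, and the context type hint change, while the handler logic stays the same.

from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext

# Previously, the HTTP version:
# from crawlee.beautifulsoup_crawler import BeautifulSoupCrawler, BeautifulSoupCrawlingContext
# crawler = BeautifulSoupCrawler()

crawler = PlaywrightCrawler()

@crawler.router.default_handler
async def request_handler(context: PlaywrightCrawlingContext) -> None:
    # Same handler shape as before, but the page is now rendered
    # by a real browser before your code sees it.
    context.log.info(f'Processing {context.request.url} ...')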
Automatic scaling and proxy management
Crawlee automatically manages concurrency based on available system resources and smartly rotates proxies. Proxies that often time out, return network errors, or return bad HTTP codes like 401 or 403 are discarded.
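As a sketch, assuming Crawlee's ProxyConfiguration class and placeholder proxy URLs, you hand your proxy list to a crawler and the rotation and error-based discarding happen automatically:

from crawlee.playwright_crawler import PlaywrightCrawler
from crawlee.proxy_configuration import ProxyConfiguration

# The proxy URLs below are placeholders; supply your own.
proxy_configuration = ProxyConfiguration(
    proxy_urls=[
        'http://proxy-1.example.com:8000',
        'http://proxy-2.example.com:8000',
    ],
)

crawler = PlaywrightCrawler(
    proxy_configuration=proxy_configuration,
    # No explicit concurrency settings needed: Crawlee scales the
    # number of parallel tasks based on available system resources.
)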
Try Crawlee out 👾
The fastest way to try Crawlee out is to use the Crawlee CLI and choose one of the provided templates. The CLI will prepare a new project and add boilerplate code for you to play with.
pipx run crawlee create my-crawler
If you prefer to integrate Crawlee into your own project, you can follow the example below. Crawlee is available on PyPI, so you can install it using pip. Since the example uses PlaywrightCrawler, you will also need to install the crawlee package with the playwright extra. It is not included with Crawlee by default to keep the installation size minimal.
pip install 'crawlee[playwright]'
Currently we have the crawlee and playwright Python packages installed. There is one more essential requirement: the Playwright browser binaries. You can install them by running:
playwright install
Now we are ready to execute our first Crawlee project:
import asyncio

from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext


async def main() -> None:
    crawler = PlaywrightCrawler(
        max_requests_per_crawl=5,  # Limit the crawl to 5 requests.
        headless=False,  # Show the browser window.
        browser_type='firefox',  # Use the Firefox browser.
    )

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: PlaywrightCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Enqueue all links found on the page.
        await context.enqueue_links()

        # Extract data from the page using the Playwright API.
        data = {
            'url': context.request.url,
            'title': await context.page.title(),
            'content': (await context.page.content())[:100],
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

    # Run the crawler with the initial list of URLs.
    await crawler.run(['https://crawlee.dev'])

    # Export the entire dataset to a JSON file.
    await crawler.export_data('results.json')

    # Or work with the data directly.
    data = await crawler.get_data()
    crawler.log.info(f'Extracted data: {data.items}')


if __name__ == '__main__':
    asyncio.run(main())