Request Storage
How to store the requests your crawler will process
Result Storage
Where are you going to store all of that juicy scraped data?!
HTTP clients
Learn about Crawlee's HTTP client architecture, how to switch between different implementations, and create custom HTTP clients for specialized web scraping needs.
Configuration
Configuring Crawlee parameters
CheerioCrawler
Your first steps into the world of scraping with Crawlee
JavaScript rendering
How to render client-side JavaScript with headless browsers before scraping
Proxy Management
Using proxies to get around those annoying IP-blocks
Session Management
How to manage your cookies, proxy IP rotations, and more
Scaling your crawlers
To infinity and beyond! ...within limits
Avoid getting blocked
How to avoid getting blocked when scraping
JSDOMCrawler
Scraping with jsdom, a DOM implementation that runs in Node.js without a browser
Impit HTTP Client
Browser impersonation for HTTP requests using the Impit library
Got Scraping
An HTTP client with browser-like headers and TLS fingerprints for modern web scraping
TypeScript Projects
Stricter, safer, and better development experience
Running in Docker
Example Docker images to run your crawlers
StagehandCrawler
AI-powered web crawling with natural language browser automation
Running in a web server
Run Crawlee in a web server using a request/response approach
Parallel Scraping
Parallelizing your scrapers with Crawlee
Using a custom HTTP client (Experimental)
Use a custom HTTP client for `sendRequest` and plain-HTTP crawling