Request loaders
The `request_loaders` sub-package extends the functionality of the `RequestQueue`, providing additional tools for managing URLs and requests. If you are new to Crawlee and unfamiliar with the `RequestQueue`, consider starting with the Storages guide first. Request loaders define how requests are fetched and stored, enabling various use cases such as reading URLs from files, external APIs, or combining multiple sources together.
Overview
The `request_loaders` sub-package introduces the following abstract classes:

- `RequestLoader`: The base interface for reading requests in a crawl.
- `RequestManager`: Extends `RequestLoader` with write capabilities.
- `RequestManagerTandem`: Combines a read-only `RequestLoader` with a writable `RequestManager`.
And specific request loader implementations:

- `RequestList`: A lightweight implementation for managing a static list of URLs.
- `SitemapRequestLoader`: A specialized loader that reads URLs from XML sitemaps with filtering capabilities.
Below is a class diagram that illustrates the relationships between these components and the `RequestQueue`:
Request loaders
The `RequestLoader` interface defines the foundation for fetching requests during a crawl. It provides abstract methods for basic operations like retrieving, marking, and checking the status of requests. Concrete implementations, such as `RequestList`, build on this interface to handle specific scenarios. You can create your own custom loader that reads from an external file, web endpoint, database, or any other specific data source. For more details, refer to the `RequestLoader` API reference.
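Regardless of the concrete implementation, every loader is consumed through the same fetch-and-mark loop. The following sketch illustrates that pattern with a generic helper; the `drain_loader` and `process_request` functions are hypothetical names introduced here only for illustration.

```python
import asyncio

from crawlee import Request
from crawlee.request_loaders import RequestList, RequestLoader


async def process_request(request: Request) -> None:
    # Hypothetical placeholder for your own processing logic.
    print(f'Processing {request.url}')


async def drain_loader(loader: RequestLoader) -> None:
    # The same loop works for RequestList, SitemapRequestLoader,
    # or a custom RequestLoader implementation.
    while request := await loader.fetch_next_request():
        await process_request(request)
        await loader.mark_request_as_handled(request)


async def main() -> None:
    await drain_loader(RequestList(requests=['https://crawlee.dev/']))


if __name__ == '__main__':
    asyncio.run(main())
```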
Request list
The `RequestList` can accept an asynchronous generator as input, allowing requests to be streamed rather than loading them all into memory at once. This can significantly reduce memory usage, especially when working with large sets of URLs (see the second example below).

Here is a basic example of working with the `RequestList`:
```python
import asyncio

from crawlee.request_loaders import RequestList


async def main() -> None:
    # Open the request list; if it does not exist, it will be created.
    # Leave name empty to use the default request list.
    request_list = RequestList(
        name='my-request-list',
        requests=[
            'https://apify.com/',
            'https://crawlee.dev/',
            'https://crawlee.dev/python/',
        ],
    )

    # Fetch and process requests from the list.
    while request := await request_list.fetch_next_request():
        # Do something with it...

        # And mark it as handled.
        await request_list.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())
```
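Because `RequestList` also accepts an asynchronous generator, large URL sets can be streamed instead of held in memory. The sketch below assumes the `requests` argument accepts such a generator, as described above; the `stream_urls` generator and its URLs are hypothetical stand-ins for a file, database cursor, or external API.

```python
import asyncio
from collections.abc import AsyncGenerator

from crawlee.request_loaders import RequestList


async def stream_urls() -> AsyncGenerator[str, None]:
    # Hypothetical source of URLs; in practice these could be read
    # lazily from a file, a database, or an external API.
    for page in range(1, 1001):
        yield f'https://crawlee.dev/page/{page}'


async def main() -> None:
    # The generator is consumed lazily, so the full URL set never
    # has to be materialized in memory at once.
    request_list = RequestList(name='streamed-request-list', requests=stream_urls())

    while request := await request_list.fetch_next_request():
        await request_list.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())
```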
Sitemap request loader
The `SitemapRequestLoader` is a specialized request loader that reads URLs from XML sitemaps. It's particularly useful when you want to crawl a website systematically by following its sitemap structure. The loader supports filtering URLs using glob patterns and regular expressions, allowing you to include or exclude specific types of URLs. The `SitemapRequestLoader` provides streaming processing of sitemaps, ensuring efficient memory usage without loading the entire sitemap into memory.
```python
import asyncio
import re

from crawlee.http_clients import ImpitHttpClient
from crawlee.request_loaders import SitemapRequestLoader


async def main() -> None:
    # Create an HTTP client for fetching sitemaps.
    async with ImpitHttpClient() as http_client:
        # Create a sitemap request loader with URL filtering.
        sitemap_loader = SitemapRequestLoader(
            sitemap_urls=['https://crawlee.dev/sitemap.xml'],
            http_client=http_client,
            # Exclude all URLs that do not contain 'blog'.
            exclude=[re.compile(r'^((?!blog).)*$')],
            max_buffer_size=500,  # Buffer up to 500 URLs in memory.
        )

        while request := await sitemap_loader.fetch_next_request():
            # Do something with it...

            # And mark it as handled.
            await sitemap_loader.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())
```
Request managers
The `RequestManager` extends `RequestLoader` with write capabilities. In addition to reading requests, a request manager can add and reclaim them. This is essential for dynamic crawling projects where new URLs may emerge during the crawl process, or when certain requests fail and need to be retried. For more details, refer to the `RequestManager` API reference.
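The `RequestQueue`, covered in the Storages guide, is the standard `RequestManager` implementation, so it can illustrate the write side of the interface. This is a minimal sketch, assuming the queue's `add_request` and `reclaim_request` methods from the storage API; the processing step is just a placeholder.

```python
import asyncio

from crawlee.storages import RequestQueue


async def main() -> None:
    # RequestQueue is a RequestManager: it supports both reads and writes.
    request_queue = await RequestQueue.open()

    # Write capability: add a request, even while the crawl is running.
    await request_queue.add_request('https://crawlee.dev/')

    while request := await request_queue.fetch_next_request():
        try:
            ...  # Placeholder for your own processing logic.
        except Exception:
            # Write capability: put a failed request back to be retried later.
            await request_queue.reclaim_request(request)
        else:
            await request_queue.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())
```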
Request manager tandem
The `RequestManagerTandem` class allows you to combine the read-only capabilities of a `RequestLoader` (like `RequestList`) with the read-write capabilities of a `RequestManager` (like `RequestQueue`). This is useful for scenarios where you need to load initial requests from a static source (such as a file or database) and dynamically add or retry requests during the crawl. Additionally, it provides deduplication capabilities, ensuring that requests are not processed multiple times.

Under the hood, `RequestManagerTandem` checks whether the read-only loader still has pending requests. If so, each new request from the loader is transferred to the manager. Any newly added or reclaimed requests go directly to the manager side.
Request list with request queue
This section describes the combination of the `RequestList` and `RequestQueue` classes. This setup is particularly useful when you have a static list of URLs that you want to crawl, but also need to handle dynamic requests discovered during the crawl process. The `RequestManagerTandem` class facilitates this combination, with the `RequestLoader.to_tandem` method available as a convenient shortcut. Requests from the `RequestList` are processed first by being enqueued into the default `RequestQueue`, which handles persistence and retries for failed requests.
Explicit usage:

```python
import asyncio

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.request_loaders import RequestList, RequestManagerTandem
from crawlee.storages import RequestQueue


async def main() -> None:
    # Create a static request list.
    request_list = RequestList(['https://crawlee.dev', 'https://apify.com'])

    # Open the default request queue.
    request_queue = await RequestQueue.open()

    # And combine them together to a single request manager.
    request_manager = RequestManagerTandem(request_list, request_queue)

    # Create a crawler and pass the request manager to it.
    crawler = ParselCrawler(
        request_manager=request_manager,
        max_requests_per_crawl=10,  # Limit the max requests per crawl.
    )

    @crawler.router.default_handler
    async def handler(context: ParselCrawlingContext) -> None:
        # New links will be enqueued directly to the queue.
        await context.enqueue_links()

    await crawler.run()


if __name__ == '__main__':
    asyncio.run(main())
```
Using the `to_tandem` helper:

```python
import asyncio

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.request_loaders import RequestList


async def main() -> None:
    # Create a static request list.
    request_list = RequestList(['https://crawlee.dev', 'https://apify.com'])

    # Convert the request list to a request manager using the to_tandem method.
    # It is a tandem with the default request queue.
    request_manager = await request_list.to_tandem()

    # Create a crawler and pass the request manager to it.
    crawler = ParselCrawler(
        request_manager=request_manager,
        max_requests_per_crawl=10,  # Limit the max requests per crawl.
    )

    @crawler.router.default_handler
    async def handler(context: ParselCrawlingContext) -> None:
        # New links will be enqueued directly to the queue.
        await context.enqueue_links()

    await crawler.run()


if __name__ == '__main__':
    asyncio.run(main())
```
Sitemap request loader with request queue
Similar to the `RequestList` example above, you can combine a `SitemapRequestLoader` with a `RequestQueue` using the `RequestManagerTandem` class. This setup is particularly useful when you want to crawl URLs from a sitemap while also handling dynamic requests discovered during the crawl process. URLs from the sitemap are processed first by being enqueued into the default `RequestQueue`, which handles persistence and retries for failed requests.
Explicit usage:

```python
import asyncio
import re

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.http_clients import HttpxHttpClient
from crawlee.request_loaders import RequestManagerTandem, SitemapRequestLoader
from crawlee.storages import RequestQueue


async def main() -> None:
    # Create an HTTP client for fetching sitemaps.
    async with HttpxHttpClient() as http_client:
        # Create a sitemap request loader with URL filtering.
        sitemap_loader = SitemapRequestLoader(
            sitemap_urls=['https://crawlee.dev/sitemap.xml'],
            http_client=http_client,
            # Include only URLs that contain 'docs'.
            include=[re.compile(r'.*docs.*')],
            max_buffer_size=500,  # Buffer up to 500 URLs in memory.
        )

        # Open the default request queue.
        request_queue = await RequestQueue.open()

        # And combine them together to a single request manager.
        request_manager = RequestManagerTandem(sitemap_loader, request_queue)

        # Create a crawler and pass the request manager to it.
        crawler = ParselCrawler(
            request_manager=request_manager,
            max_requests_per_crawl=10,  # Limit the max requests per crawl.
        )

        @crawler.router.default_handler
        async def handler(context: ParselCrawlingContext) -> None:
            # New links will be enqueued directly to the queue.
            await context.enqueue_links()

        await crawler.run()


if __name__ == '__main__':
    asyncio.run(main())
```
Using the `to_tandem` helper:

```python
import asyncio
import re

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.http_clients import HttpxHttpClient
from crawlee.request_loaders import SitemapRequestLoader


async def main() -> None:
    # Create an HTTP client for fetching sitemaps.
    async with HttpxHttpClient() as http_client:
        # Create a sitemap request loader with URL filtering.
        sitemap_loader = SitemapRequestLoader(
            sitemap_urls=['https://crawlee.dev/sitemap.xml'],
            http_client=http_client,
            # Include only URLs that contain 'docs'.
            include=[re.compile(r'.*docs.*')],
            max_buffer_size=500,  # Buffer up to 500 URLs in memory.
        )

        # Convert the sitemap loader to a request manager using the to_tandem method.
        # It is a tandem with the default request queue.
        request_manager = await sitemap_loader.to_tandem()

        # Create a crawler and pass the request manager to it.
        crawler = ParselCrawler(
            request_manager=request_manager,
            max_requests_per_crawl=10,  # Limit the max requests per crawl.
        )

        @crawler.router.default_handler
        async def handler(context: ParselCrawlingContext) -> None:
            # New links will be enqueued directly to the queue.
            await context.enqueue_links()

        await crawler.run()


if __name__ == '__main__':
    asyncio.run(main())
```
Conclusion
This guide explained the `request_loaders` sub-package, which extends the functionality of the `RequestQueue` with additional tools for managing URLs and requests. You learned about the `RequestLoader`, `RequestManager`, and `RequestManagerTandem` classes, as well as the `RequestList` and `SitemapRequestLoader` implementations. You also saw practical examples of how to work with these classes to handle various crawling scenarios.
If you have questions or need assistance, feel free to reach out on our GitHub or join our Discord community. Happy scraping!