Request loaders
The request_loaders sub-package extends the functionality of the RequestQueue, providing additional tools for managing URLs. If you are new to Crawlee and do not know the RequestQueue yet, consider starting with the Storages guide first. Request loaders define how requests are fetched and stored, enabling use cases such as reading URLs from files or external APIs, or combining multiple sources together.
Overview
The request_loaders sub-package introduces the following abstract classes:
- RequestLoader: The base interface for reading requests in a crawl.
- RequestManager: Extends RequestLoader with write capabilities.
- RequestManagerTandem: Combines a read-only RequestLoader with a writable RequestManager.
And one specific request loader:
- RequestList: A lightweight implementation of a request loader for managing a static list of URLs.
Below is a class diagram that illustrates the relationships between these components and the RequestQueue.
Request loader
The RequestLoader interface defines the foundation for fetching requests during a crawl. It provides abstract methods for basic operations such as retrieving requests, marking them as handled, and checking their status. Concrete implementations, such as RequestList, build on this interface to handle specific scenarios. You can also create your own loader that reads from an external file, a web endpoint, a database, or any other source. For more details, refer to the RequestLoader API reference.
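For illustration, here is a rough sketch of a custom loader that serves requests from a local text file with one URL per line. This is not part of Crawlee: it assumes the abstract interface consists of fetch_next_request, mark_request_as_handled, is_empty, is_finished, get_handled_count, and get_total_count, so check the RequestLoader API reference for the exact methods your version requires.

from __future__ import annotations

from pathlib import Path

from crawlee import Request
from crawlee.request_loaders import RequestLoader


class UrlFileLoader(RequestLoader):
    """Illustrative (hypothetical) loader that serves requests from a text file, one URL per line."""

    def __init__(self, path: Path) -> None:
        # Read all URLs up front; a real loader could read them lazily instead.
        urls = [line.strip() for line in path.read_text().splitlines() if line.strip()]
        self._requests = [Request.from_url(url) for url in urls]
        self._next_index = 0
        self._handled_count = 0

    async def get_total_count(self) -> int:
        return len(self._requests)

    async def is_empty(self) -> bool:
        # True when there is nothing left to fetch (some requests may still be in progress).
        return self._next_index >= len(self._requests)

    async def is_finished(self) -> bool:
        # True when every request has been fetched and marked as handled.
        return self._handled_count >= len(self._requests)

    async def fetch_next_request(self) -> Request | None:
        if self._next_index >= len(self._requests):
            return None
        request = self._requests[self._next_index]
        self._next_index += 1
        return request

    async def mark_request_as_handled(self, request: Request) -> None:
        self._handled_count += 1

    async def get_handled_count(self) -> int:
        return self._handled_count

A loader like this can then be used anywhere a RequestLoader is expected, for example combined with a request queue via to_tandem, as described later in this guide.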
The RequestList can accept an asynchronous generator as input, which allows requests to be streamed rather than loaded into memory all at once. This can significantly reduce memory usage, especially when working with large sets of URLs (see the sketch after the basic example below).
Here is a basic example of working with the RequestList:
import asyncio

from crawlee.request_loaders import RequestList


async def main() -> None:
    # Create a named request list with a few starting URLs.
    request_list = RequestList(
        name='my-request-list',
        requests=[
            'https://apify.com/',
            'https://crawlee.dev/',
            'https://crawlee.dev/python/',
        ],
    )

    # Fetch and process requests from the list.
    while request := await request_list.fetch_next_request():
        # Do something with it...

        # And mark it as handled.
        await request_list.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())
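When the URL set is large or produced on the fly, you can pass an asynchronous generator instead of a list, as mentioned above. Here is a minimal sketch of that approach; the example.com URLs are placeholders for whatever source you actually stream from.

import asyncio
from collections.abc import AsyncGenerator

from crawlee.request_loaders import RequestList


async def generate_urls() -> AsyncGenerator[str, None]:
    # The URLs could just as well be streamed from a file, a database or an API;
    # here they are simply yielded one by one as placeholders.
    for page in range(1, 1001):
        yield f'https://example.com/page/{page}'


async def main() -> None:
    # The generator is consumed lazily, so the whole URL set
    # never has to be held in memory at once.
    request_list = RequestList(generate_urls())

    while request := await request_list.fetch_next_request():
        # Process the request...
        await request_list.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())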
Request manager
The RequestManager extends RequestLoader with write capabilities. In addition to reading requests, a request manager can add or reclaim them. This is important for dynamic crawling projects, where new URLs may emerge during the crawl, or where certain requests fail and need to be retried. For more details, refer to the RequestManager API reference.
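To make the difference from a plain loader concrete, the sketch below uses the default RequestQueue (the standard RequestManager implementation) to add a request, reclaim it as if processing had failed, and finally mark it as handled. The add_request and reclaim_request calls shown here are the write-side operations; consult the RequestManager API reference for their exact signatures.

import asyncio

from crawlee.storages import RequestQueue


async def main() -> None:
    # RequestQueue is the standard RequestManager implementation.
    request_queue = await RequestQueue.open()

    # Write side: add a new request to the queue.
    await request_queue.add_request('https://crawlee.dev/')

    # Read side: fetch it back, as any RequestLoader would.
    request = await request_queue.fetch_next_request()
    if request is None:
        return

    # Pretend processing failed and return the request to the queue,
    # so a future fetch_next_request call can pick it up again.
    await request_queue.reclaim_request(request)

    # Once it is fetched again, it can be marked as handled.
    request = await request_queue.fetch_next_request()
    if request is not None:
        await request_queue.mark_request_as_handled(request)


if __name__ == '__main__':
    asyncio.run(main())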
Request manager tandem
The RequestManagerTandem class allows you to combine the read-only capabilities of a RequestLoader (such as RequestList) with the read-write capabilities of a RequestManager (such as RequestQueue). This is useful for scenarios where you need to load initial requests from a static source (like a file or database) and dynamically add or retry requests during the crawl. It also provides deduplication, ensuring that requests are not processed multiple times. Under the hood, RequestManagerTandem checks whether the read-only loader still has pending requests. If so, each new request from the loader is transferred to the manager. Any newly added or reclaimed requests go directly to the manager side.
Request list with request queue
This section describes the combination of the RequestList and RequestQueue classes. This setup is particularly useful when you have a static list of URLs to crawl but also need to handle dynamic requests discovered during the crawl. The RequestManagerTandem class facilitates this combination, with the RequestLoader.to_tandem method available as a convenient shortcut. Requests from the RequestList are processed first by enqueuing them into the default RequestQueue, which handles persistence and retries failed requests.
- Explicit usage
- Using the to_tandem helper
import asyncio

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.request_loaders import RequestList, RequestManagerTandem
from crawlee.storages import RequestQueue


async def main() -> None:
    # Create a static request list.
    request_list = RequestList(['https://crawlee.dev', 'https://apify.com'])

    # Open the default request queue.
    request_queue = await RequestQueue.open()

    # And combine them together into a single request manager.
    request_manager = RequestManagerTandem(request_list, request_queue)

    # Create a crawler and pass the request manager to it.
    crawler = ParselCrawler(request_manager=request_manager)

    @crawler.router.default_handler
    async def handler(context: ParselCrawlingContext) -> None:
        # New links will be enqueued directly to the queue.
        await context.enqueue_links()

    await crawler.run()


asyncio.run(main())
import asyncio

from crawlee.crawlers import ParselCrawler, ParselCrawlingContext
from crawlee.request_loaders import RequestList


async def main() -> None:
    # Create a static request list.
    request_list = RequestList(['https://crawlee.dev', 'https://apify.com'])

    # Convert the request list to a request manager using the to_tandem method.
    # It is a tandem with the default request queue.
    request_manager = await request_list.to_tandem()

    # Create a crawler and pass the request manager to it.
    crawler = ParselCrawler(request_manager=request_manager)

    @crawler.router.default_handler
    async def handler(context: ParselCrawlingContext) -> None:
        # New links will be enqueued directly to the queue.
        await context.enqueue_links()

    await crawler.run()


asyncio.run(main())
Conclusion
This guide explained the request_loaders sub-package, which extends the functionality of the RequestQueue with additional tools for managing URLs. You learned about the RequestLoader, RequestManager, and RequestManagerTandem classes, as well as the RequestList class. You also saw examples of how to work with these classes in practice. If you have questions or need assistance, feel free to reach out on our GitHub or join our Discord community. Happy scraping!