BeautifulSoupCrawler

A web crawler for performing HTTP requests and parsing HTML/XML content.

The BeautifulSoupCrawler builds on top of the AbstractHttpCrawler, which means it inherits all of its features. It specifies its own parser, BeautifulSoupParser, which is used to parse HttpResponse. BeautifulSoupParser uses the following library for parsing: https://pypi.org/project/beautifulsoup4/

HTTP client-based crawlers are ideal for websites that do not require JavaScript execution. However, if you need to execute client-side JavaScript, consider using a browser-based crawler such as PlaywrightCrawler.

Usage

import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    crawler = BeautifulSoupCrawler()

    # Define the default request handler, which will be called for every request.
    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url} ...')

        # Extract data from the page.
        data = {
            'url': context.request.url,
            'title': context.soup.title.string if context.soup.title else None,
        }

        # Push the extracted data to the default dataset.
        await context.push_data(data)

    await crawler.run(['https://crawlee.dev/'])


asyncio.run(main())

Methods

__init__

  • __init__(*, parser, request_handler, statistics, configuration, event_manager, storage_client, request_manager, session_pool, proxy_configuration, http_client, max_request_retries, max_requests_per_crawl, max_session_rotations, max_crawl_depth, use_session_pool, retry_on_blocked, concurrency_settings, request_handler_timeout, abort_on_error, configure_logging, statistics_log_format, keep_alive, additional_http_error_status_codes, ignore_http_error_status_codes, respect_robots_txt_file, status_message_logging_interval, status_message_callback): None
  • Initialize a new instance.


    Parameters

    • optional keyword-only parser: BeautifulSoupParserType = 'lxml'

      The type of parser that should be used by BeautifulSoup.

    • optional keyword-only request_handler: NotRequired[Callable[[TCrawlingContext], Awaitable[None]]]

      A callable responsible for handling requests.

    • optional keyword-only statistics: NotRequired[Statistics[TStatisticsState]]

      A custom Statistics instance, allowing the use of non-default configuration.

    • optional keyword-only configuration: NotRequired[Configuration]

      The Configuration instance. Some of its properties are used as defaults for the crawler.

    • optional keyword-only event_manager: NotRequired[EventManager]

      The event manager for managing events for the crawler and all its components.

    • optional keyword-only storage_client: NotRequired[StorageClient]

      The storage client for managing storages for the crawler and all its components.

    • optional keyword-only request_manager: NotRequired[RequestManager]

      Manager of requests that should be processed by the crawler.

    • optional keyword-only session_pool: NotRequired[SessionPool]

      A custom SessionPool instance, allowing the use of non-default configuration.

    • optional keyword-only proxy_configuration: NotRequired[ProxyConfiguration]

      HTTP proxy configuration used when making requests.

    • optional keyword-only http_client: NotRequired[HttpClient]

      HTTP client used by BasicCrawlingContext.send_request method.

    • optional keyword-only max_request_retries: NotRequired[int]

      Specifies the maximum number of retries allowed for a request if its processing fails. This includes retries due to navigation errors or errors thrown from user-supplied functions (request_handler, pre_navigation_hooks etc.).

      This limit does not apply to retries triggered by session rotation (see max_session_rotations).

    • optional keyword-only max_requests_per_crawl: NotRequired[int | None]

      Maximum number of pages to open during a crawl. The crawl stops upon reaching this limit. Setting this value can help avoid infinite loops in misconfigured crawlers. None means no limit. Due to concurrency settings, the actual number of pages visited may slightly exceed this value.

    • optional keyword-only max_session_rotations: NotRequired[int]

      Maximum number of session rotations per request. The crawler rotates the session if a proxy error occurs or if the website blocks the request.

      The session rotations are not counted towards the max_request_retries limit.

    • optional keyword-only max_crawl_depth: NotRequired[int | None]

      Specifies the maximum crawl depth. If set, the crawler will stop processing links beyond this depth. The crawl depth starts at 0 for initial requests and increases with each subsequent level of links. Requests at the maximum depth will still be processed, but no new links will be enqueued from those requests. If not set, crawling continues without depth restrictions.

    • optional keyword-only use_session_pool: NotRequired[bool]

      Enable the use of a session pool for managing sessions during crawling.

    • optional keyword-only retry_on_blocked: NotRequired[bool]

      If True, the crawler attempts to bypass bot protections automatically.

    • optional keyword-only concurrency_settings: NotRequired[ConcurrencySettings]

      Settings to fine-tune concurrency levels.

    • optional keyword-only request_handler_timeout: NotRequired[timedelta]

      Maximum duration allowed for a single request handler to run.

    • optional keyword-only abort_on_error: NotRequired[bool]

      If True, the crawler stops immediately when any request handler error occurs.

    • optional keyword-only configure_logging: NotRequired[bool]

      If True, the crawler will set up logging infrastructure automatically.

    • optional keyword-only statistics_log_format: NotRequired[Literal['table', 'inline']]

      If 'table', displays crawler statistics as formatted tables in logs. If 'inline', outputs statistics as plain text log messages.

    • optional keyword-only keep_alive: NotRequired[bool]

      If True, the crawler keeps running even when there are no requests in the request queue.

    • optional keyword-only additional_http_error_status_codes: NotRequired[Iterable[int]]

      Additional HTTP status codes to treat as errors, triggering automatic retries when encountered.

    • optional keyword-only ignore_http_error_status_codes: NotRequired[Iterable[int]]

      HTTP status codes that are typically considered errors but should be treated as successful responses.

    • optional keyword-only respect_robots_txt_file: NotRequired[bool]

      If set to True, the crawler will automatically try to fetch the robots.txt file for each domain and skip any URLs it disallows. This also prevents disallowed URLs from being added via EnqueueLinksFunction.

    • optional keyword-only status_message_logging_interval: NotRequired[timedelta]

      Interval for logging the crawler status messages.

    • optional keyword-only status_message_callback: NotRequired[Callable[[StatisticsState, StatisticsState | None, str], Awaitable[str | None]]]

      Allows overriding the default status message. The default status message is provided in the parameters. Returning None suppresses the status message.

    Returns None
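
    For illustration, the crawler could be constructed with a handful of these options. This is a minimal sketch; the values below are arbitrary examples rather than defaults, and it assumes 'html.parser' is among the accepted BeautifulSoupParserType values.

    from datetime import timedelta

    from crawlee.crawlers import BeautifulSoupCrawler

    crawler = BeautifulSoupCrawler(
        parser='html.parser',  # use Python's built-in parser instead of the default 'lxml'
        max_requests_per_crawl=100,
        max_request_retries=3,
        request_handler_timeout=timedelta(seconds=60),
        respect_robots_txt_file=True,
    )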

add_requests

  • async add_requests(requests, *, forefront, batch_size, wait_time_between_batches, wait_for_all_requests_to_be_added, wait_for_all_requests_to_be_added_timeout): None
  • Add requests to the underlying request manager in batches.


    Parameters

    • requests: Sequence[str | Request]

      A list of requests to add to the queue.

    • optional keyword-only forefront: bool = False

      If True, add requests to the forefront of the queue.

    • optional keyword-only batch_size: int = 1000

      The number of requests to add in one batch.

    • optional keyword-only wait_time_between_batches: timedelta = timedelta(0)

      Time to wait between adding batches.

    • optional keyword-only wait_for_all_requests_to_be_added: bool = False

      If True, wait for all requests to be added before returning.

    • optional keyword-only wait_for_all_requests_to_be_added_timeout: timedelta | None = None

      Timeout for waiting for all requests to be added.

    Returns None
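
    A sketch of adding extra URLs in code, assuming the crawler instance from the Usage example above and an enclosing async function:

    await crawler.add_requests(
        ['https://crawlee.dev/', 'https://crawlee.dev/docs'],
        forefront=True,  # put these ahead of requests already waiting in the queue
    )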

create_parsed_http_crawler_class

  • create_parsed_http_crawler_class(static_parser): type[AbstractHttpCrawler[ParsedHttpCrawlingContext[TParseResult], TParseResult, TSelectResult]]
  • Create a specific version of AbstractHttpCrawler class.

    This is a convenience factory method for creating a specific AbstractHttpCrawler subclass. While AbstractHttpCrawler allows its two generic parameters to be independent, this method simplifies cases where TParseResult is used for both generic parameters.


    Parameters

    • static_parser: AbstractHttpParser[TParseResult, TSelectResult]

    Returns type[AbstractHttpCrawler[ParsedHttpCrawlingContext[TParseResult], TParseResult, TSelectResult]]

error_handler

  • error_handler(handler): ErrorHandler[TCrawlingContext]
  • Register a function to handle errors occurring in request handlers.

    The error handler is invoked after a request handler error occurs and before a retry attempt.


    Parameters

    • handler: ErrorHandler[TCrawlingContext | BasicCrawlingContext]

    Returns ErrorHandler[TCrawlingContext]
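
    A sketch of the decorator form, assuming the handler receives the crawling context and the raised exception, and that returning None keeps the default retry behaviour:

    @crawler.error_handler
    async def retry_logger(context, error: Exception) -> None:
        # Runs after a request handler error, before the retry attempt.
        context.log.warning(f'Error while processing {context.request.url}: {error}')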

export_data

  • async export_data(path, dataset_id, dataset_name): None
  • Export all items from a Dataset to a JSON or CSV file.

    This method simplifies the process of exporting data collected during crawling. It automatically determines the export format based on the file extension (.json or .csv) and handles the conversion of Dataset items to the appropriate format.


    Parameters

    • path: str | Path

      The destination file path. Must end with '.json' or '.csv'.

    • optional dataset_id: str | None = None

      The ID of the Dataset to export from. If None, the dataset_name parameter is used instead.

    • optional dataset_name: str | None = None

      The name of the Dataset to export from. If None, the dataset_id parameter is used instead.

    Returns None
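
    For example, after a run the collected items can be written to a single file; the export format is inferred from the extension (a sketch, assuming the crawler instance from the Usage example above):

    await crawler.export_data('results.json')  # or 'results.csv'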

failed_request_handler

  • failed_request_handler(handler): FailedRequestHandler[TCrawlingContext]
  • Register a function to handle requests that exceed the maximum retry limit.

    The failed request handler is invoked when a request has failed all retry attempts.


    Parameters

    • handler: FailedRequestHandler[TCrawlingContext | BasicCrawlingContext]

    Returns FailedRequestHandler[TCrawlingContext]
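
    A sketch of the decorator form, assuming the same (context, error) calling convention as error_handler:

    @crawler.failed_request_handler
    async def log_failure(context, error: Exception) -> None:
        # Runs once a request has exhausted all of its retries.
        context.log.error(f'Giving up on {context.request.url}: {error}')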

get_data

  • async get_data(dataset_id, dataset_name, **kwargs): DatasetItemsListPage
  • Retrieve data from a Dataset.

    This helper method simplifies the process of retrieving data from a Dataset. It opens the specified Dataset and then retrieves the data based on the provided parameters.


    Parameters

    • optional dataset_id: str | None = None

      The ID of the Dataset.

    • optional dataset_name: str | None = None

      The name of the Dataset.

    • kwargs: Unpack[GetDataKwargs]

      Keyword arguments to be passed to the Dataset.get_data() method.

    Returns DatasetItemsListPage
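
    A sketch of reading the collected items back after a run, assuming DatasetItemsListPage exposes the retrieved records via an items attribute:

    page = await crawler.get_data()
    for item in page.items:
        print(item)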

get_dataset

  • async get_dataset(*, id, name): Dataset
  • Return the Dataset with the given ID or name. If none is provided, return the default one.


    Parameters

    • optional keyword-only id: str | None = None
    • optional keyword-only name: str | None = None

    Returns Dataset
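
    For example, a named Dataset could be opened and written to directly (a sketch; push_data is part of the Dataset storage API):

    dataset = await crawler.get_dataset(name='products')
    await dataset.push_data({'url': 'https://crawlee.dev/', 'title': 'Crawlee'})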

get_key_value_store

  • async get_key_value_store(*, id, name): KeyValueStore
  • Return the KeyValueStore with the given ID or name. If none is provided, return the default key-value store.


    Parameters

    • optional keyword-only id: str | None = None
    • optional keyword-only name: str | None = None

    Returns KeyValueStore
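
    A sketch of storing an arbitrary value in the default key-value store, assuming KeyValueStore.set_value as in the storage API:

    kvs = await crawler.get_key_value_store()
    await kvs.set_value('crawl-metadata', {'note': 'example value'})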

get_request_manager

on_skipped_request

pre_navigation_hook

  • pre_navigation_hook(hook): None
  • Register a hook to be called before each navigation.


    Parameters

    • hook: Callable[[BasicCrawlingContext], Awaitable[None]]

      A coroutine function to be called before each navigation.

    Returns None
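
    A sketch of registering a hook with the decorator form; the hook receives a BasicCrawlingContext:

    @crawler.pre_navigation_hook
    async def log_navigation(context) -> None:
        # Runs before each navigation, i.e. before the HTTP request is made.
        context.log.info(f'About to fetch {context.request.url}')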

run

  • async run(requests, *, purge_request_queue): FinalStatistics
  • Run the crawler until all requests are processed.


    Parameters

    • optional requests: Sequence[str | Request] | None = None

      The requests to be enqueued before the crawler starts.

    • optional keyword-only purge_request_queue: bool = True

      If this is True and the crawler is not being run for the first time, the default request queue will be purged.

    Returns FinalStatistics
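
    For example, a run that keeps any requests left over from a previous run and inspects the returned statistics (a sketch, inside an async function):

    stats = await crawler.run(
        ['https://crawlee.dev/'],
        purge_request_queue=False,  # keep requests left over from a previous run
    )
    print(stats)  # FinalStatistics summary of the finished run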

stop

  • stop(reason): None
  • Set a flag to stop the crawler.

    This stops the current crawler run regardless of whether all requests have been processed.


    Parameters

    • optional reason: str = 'Stop was called externally.'

      Reason for stopping that will be used in logs.

    Returns None
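
    A sketch of stopping the crawler from inside a request handler once some condition is met; the condition below is purely illustrative:

    @crawler.router.default_handler
    async def request_handler(context: BeautifulSoupCrawlingContext) -> None:
        # Hypothetical stop condition based on the URL being processed.
        if 'stop-page' in context.request.url:
            crawler.stop(reason='Stop condition reached in request handler.')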

Properties

log

log: logging.Logger

The logger used by the crawler.

router

router: Router[TCrawlingContext]

The Router used to handle each individual crawling request.

statistics

statistics: Statistics[TStatisticsState]

Statistics about the current (or last) crawler run.