HttpCrawlerOptions

Arguments for the AbstractHttpCrawler constructor.

It is intended for typing the forwarded __init__ arguments in subclasses.
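For illustration, the forwarding pattern looks like this. This is a minimal sketch using a simplified stand-in for HttpCrawlerOptions and hypothetical crawler classes; the real TypedDict carries all the fields listed below.

```python
from typing import TypedDict

# Simplified stand-in for HttpCrawlerOptions; every key is optional,
# mirroring the NotRequired fields of the real TypedDict.
class CrawlerOptions(TypedDict, total=False):
    abort_on_error: bool
    max_request_retries: int

class BaseCrawler:
    def __init__(self, **kwargs) -> None:
        # The base constructor receives the forwarded options unchanged.
        self.options = kwargs

class MyCrawler(BaseCrawler):
    def __init__(self, **kwargs) -> None:
        # A subclass forwards its keyword arguments to the base class;
        # the options TypedDict exists so this **kwargs can be typed.
        super().__init__(**kwargs)

crawler = MyCrawler(abort_on_error=True, max_request_retries=2)
```

In the real code base, the subclass's `**kwargs` would be annotated with the options TypedDict so that type checkers validate the forwarded arguments.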


Properties

abort_on_error

abort_on_error: NotRequired[bool]

If True, the crawler stops immediately when any request handler error occurs.

additional_http_error_status_codes

additional_http_error_status_codes: NotRequired[Iterable[int]]

Additional HTTP status codes to treat as errors, triggering automatic retries when encountered.

concurrency_settings

concurrency_settings: NotRequired[ConcurrencySettings]

Settings to fine-tune concurrency levels.

configuration

configuration: NotRequired[Configuration]

The configuration object. Some of its properties are used as defaults for the crawler.

configure_logging

configure_logging: NotRequired[bool]

If True, the crawler will set up logging infrastructure automatically.

event_manager

event_manager: NotRequired[EventManager]

The event manager that handles events for the crawler and all its components.

http_client

http_client: NotRequired[BaseHttpClient]

HTTP client used by the BasicCrawlingContext.send_request method.

ignore_http_error_status_codes

ignore_http_error_status_codes: NotRequired[Iterable[int]]

HTTP status codes that are typically considered errors but should be treated as successful responses.

max_crawl_depth

max_crawl_depth: NotRequired[int | None]

Specifies the maximum crawl depth. If set, the crawler will stop processing links beyond this depth. The crawl depth starts at 0 for initial requests and increases with each subsequent level of links. Requests at the maximum depth will still be processed, but no new links will be enqueued from those requests. If not set, crawling continues without depth restrictions.

max_request_retries

max_request_retries: NotRequired[int]

Maximum number of attempts to process a single request.

max_requests_per_crawl

max_requests_per_crawl: NotRequired[int | None]

Maximum number of pages to open during a crawl. The crawl stops upon reaching this limit. Setting this value can help avoid infinite loops in misconfigured crawlers. None means no limit. Due to concurrency settings, the actual number of pages visited may slightly exceed this value.

max_session_rotations

max_session_rotations: NotRequired[int]

Maximum number of session rotations per request. The crawler rotates the session if a proxy error occurs or if the website blocks the request.

proxy_configuration

proxy_configuration: NotRequired[ProxyConfiguration]

HTTP proxy configuration used when making requests.

request_handler

request_handler: NotRequired[Callable[[TCrawlingContext], Awaitable[None]]]

A callable responsible for handling requests.
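Concretely, the handler is an async callable that takes the crawling context and returns None. Below is a minimal sketch with a hypothetical stand-in context object; in real usage the parameter would be the crawler's TCrawlingContext.

```python
import asyncio

class FakeContext:
    """Hypothetical stand-in for the crawling context passed to the handler."""
    def __init__(self, url: str) -> None:
        self.url = url
        self.visited: list[str] = []

async def request_handler(context: FakeContext) -> None:
    # Handle one request; an exception raised here counts as a request
    # handler error (see abort_on_error and max_request_retries).
    context.visited.append(context.url)

ctx = FakeContext("https://example.com")
asyncio.run(request_handler(ctx))
```

The handler must complete within request_handler_timeout, and errors it raises are subject to the retry settings above.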

request_handler_timeout

request_handler_timeout: NotRequired[timedelta]

Maximum duration allowed for a single request handler to run.

request_manager

request_manager: NotRequired[RequestManager]

Manager of requests that should be processed by the crawler.

retry_on_blocked

retry_on_blocked: NotRequired[bool]

If True, the crawler attempts to bypass bot protections automatically.

session_pool

session_pool: NotRequired[SessionPool]

A custom SessionPool instance, allowing the use of non-default configuration.

statistics

statistics: NotRequired[Statistics[StatisticsState]]

A custom Statistics instance, allowing the use of non-default configuration.

storage_client

storage_client: NotRequired[BaseStorageClient]

The storage client for managing storages for the crawler and all its components.

use_session_pool

use_session_pool: NotRequired[bool]

Enable the use of a session pool for managing sessions during crawling.