Version: 3.2

@crawlee/browser

Provides a simple framework for parallel crawling of web pages using headless browsers with Puppeteer and Playwright. The URLs to crawl are fed either from a static list of URLs or from a dynamic queue, enabling recursive crawling of websites.

Since BrowserCrawler uses headless (or even headful) browsers to download web pages and extract data, it is useful for crawling websites that require JavaScript to be executed. If the target website doesn't need JavaScript, consider using CheerioCrawler, which downloads the pages using raw HTTP requests and is about 10x faster.

The source URLs are represented by Request objects that are fed from the RequestList or RequestQueue instances provided by the requestList or requestQueue constructor options, respectively. If neither option is provided, the crawler opens the default request queue either when the crawler.addRequests() function is called, or when the requests parameter (the initial requests) of the crawler.run() function is provided.
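
For illustration, here is a minimal sketch using PlaywrightCrawler, one of the concrete BrowserCrawler subclasses (the URL is a placeholder):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request }) {
        console.log(`Crawling ${request.url}`);
    },
});

// Neither requestList nor requestQueue was passed, so the default
// request queue is opened automatically once requests are added.
await crawler.addRequests(['https://example.com/start']);
await crawler.run();
// Equivalent shortcut: await crawler.run(['https://example.com/start']);
```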

If both the requestList and requestQueue options are used, the instance first takes URLs from the RequestList and automatically enqueues them all to the RequestQueue before it starts processing them. This ensures that a single URL is not crawled multiple times.
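
A sketch of supplying both sources, again assuming PlaywrightCrawler (the list name and URLs are placeholders):

```ts
import { PlaywrightCrawler, RequestList, RequestQueue } from 'crawlee';

const requestList = await RequestList.open('start-urls', [
    'https://example.com/a',
    'https://example.com/b',
]);
const requestQueue = await RequestQueue.open();

// URLs from the list are enqueued into the queue before processing starts,
// so a URL present in both sources is crawled only once.
const crawler = new PlaywrightCrawler({
    requestList,
    requestQueue,
    async requestHandler({ request }) {
        console.log(`Crawling ${request.url}`);
    },
});

await crawler.run();
```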

The crawler finishes when there are no more Request objects to crawl.

BrowserCrawler opens a new browser page (i.e. tab or window) for each Request object to crawl and then calls the function provided by the user as the requestHandler option.
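
A minimal requestHandler sketch, assuming PlaywrightCrawler as the concrete implementation:

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Called once per Request, with a freshly opened page in the context.
    async requestHandler({ request, page, enqueueLinks }) {
        const title = await page.title();
        console.log(`'${title}' at ${request.url}`);
        // Enqueue links found on the page for recursive crawling.
        await enqueueLinks();
    },
});

await crawler.run(['https://crawlee.dev']);
```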

New pages are only opened when there is enough free CPU and memory available, using the functionality provided by the AutoscaledPool class. All AutoscaledPool configuration options can be passed to the autoscaledPoolOptions parameter of the BrowserCrawler constructor. For user convenience, the minConcurrency and maxConcurrency options of the underlying AutoscaledPool constructor are available directly in the BrowserCrawler constructor.
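
For example (a sketch; the numbers are arbitrary, and desiredConcurrencyRatio is shown only as one of the pass-through AutoscaledPool options):

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Convenience shortcuts for the underlying AutoscaledPool.
    minConcurrency: 2,
    maxConcurrency: 20,
    // Any other AutoscaledPool option can be passed through here.
    autoscaledPoolOptions: {
        desiredConcurrencyRatio: 0.9,
    },
    async requestHandler({ page }) {
        // scraping logic...
    },
});
```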

NOTE: the pool of browser instances is internally managed by the BrowserPool class.

Index

Crawlers

Interfaces

References

Type Aliases

References

The package re-exports the following symbols:

AutoscaledPool, AutoscaledPoolOptions, BASIC_CRAWLER_TIMEOUT_BUFFER_SECS, BasicCrawler, BasicCrawlerOptions, BasicCrawlingContext, ClientInfo, Configuration, ConfigurationOptions, Cookie, CrawlerAddRequestsOptions, CrawlerAddRequestsResult, CrawlingContext, CreateContextOptions, CreateSession, CriticalError, Dataset, DatasetConsumer, DatasetContent, DatasetDataOptions, DatasetIteratorOptions, DatasetMapper, DatasetOptions, DatasetReducer, EnqueueLinksOptions, EnqueueStrategy, ErrorHandler, EventManager, EventType, EventTypeName, ExportOptions, FinalStatistics, GlobInput, GlobObject, IStorage, KeyConsumer, KeyValueStore, KeyValueStoreIteratorOptions, KeyValueStoreOptions, LocalEventManager, Log, LogLevel, Logger, LoggerJson, LoggerOptions, LoggerText, NonRetryableError, ProxyConfiguration, ProxyConfigurationFunction, ProxyConfigurationOptions, ProxyInfo, PseudoUrl, PseudoUrlInput, PseudoUrlObject, PushErrorMessageOptions, RecordOptions, RegExpInput, RegExpObject, Request, RequestHandler, RequestList, RequestListOptions, RequestListSourcesFunction, RequestListState, RequestOptions, RequestQueue, RequestQueueOperationOptions, RequestQueueOptions, RequestState, RequestTransform, RetryRequestError, Router, RouterHandler, Session, SessionOptions, SessionPool, SessionPoolOptions, SessionState, Snapshotter, SnapshotterOptions, Source, StatisticPersistedState, StatisticState, Statistics, StorageClient, StorageManagerOptions, SystemInfo, SystemStatus, SystemStatusOptions, UrlPatternObject, UseStateOptions, createBasicRouter, enqueueLinks, filterRequestsByPatterns, log, purgeDefaultStorages, useState

Type Aliases

BrowserErrorHandler

BrowserErrorHandler<Context>: ErrorHandler<Context>

Type parameters

  • Context: BrowserCrawlingContext = BrowserCrawlingContext
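
A function of this shape can be passed as the errorHandler option to run custom logic before a failed request is retried. A sketch, assuming PlaywrightCrawler:

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    // Conforms to BrowserErrorHandler: receives the crawling context
    // and the error thrown by the request handler.
    errorHandler: async ({ request, log }, error) => {
        log.warning(`${request.url} failed, will be retried: ${error.message}`);
    },
    async requestHandler({ page }) {
        // scraping logic...
    },
});
```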

BrowserHook

BrowserHook<Context, GoToOptions>: (crawlingContext: Context, gotoOptions: GoToOptions) => Awaitable<void>

Type parameters

  • Context = BrowserCrawlingContext
  • GoToOptions: Dictionary | undefined = Dictionary

Type declaration

    • (crawlingContext: Context, gotoOptions: GoToOptions): Awaitable<void>
    • Parameters

      • crawlingContext: Context
      • gotoOptions: GoToOptions

      Returns Awaitable<void>
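
Hooks of this shape are typically passed in the preNavigationHooks or postNavigationHooks options. A sketch, assuming PlaywrightCrawler, where the hook adjusts the options later handed to the page's goto() call:

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    preNavigationHooks: [
        // Conforms to BrowserHook: may mutate gotoOptions per request.
        async (crawlingContext, gotoOptions) => {
            gotoOptions.timeout = 60_000;
            gotoOptions.waitUntil = 'domcontentloaded';
        },
    ],
    async requestHandler({ page }) {
        // scraping logic...
    },
});
```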

BrowserRequestHandler

BrowserRequestHandler<Context>: RequestHandler<Context>

Type parameters

  • Context: BrowserCrawlingContext = BrowserCrawlingContext
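
A request handler can also be declared standalone and typed against a concrete crawling context. A sketch, assuming the PlaywrightCrawlingContext type from the crawlee metapackage:

```ts
import { PlaywrightCrawler, type PlaywrightCrawlingContext } from 'crawlee';

// BrowserRequestHandler is the generic shape that concrete crawlers
// specialize with their own crawling context.
const requestHandler = async ({ request, page }: PlaywrightCrawlingContext) => {
    console.log(`Visited ${request.url}, title: ${await page.title()}`);
};

const crawler = new PlaywrightCrawler({ requestHandler });
```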