Version: Next

@crawlee/basic

Provides a simple framework for parallel crawling of web pages. The URLs to crawl are fed either from a static list of URLs or from a dynamic queue of URLs enabling recursive crawling of websites.

BasicCrawler is a low-level tool that requires the user to implement the page download and data extraction functionality themselves. If you want a crawler that already provides this functionality, consider using CheerioCrawler, PuppeteerCrawler or PlaywrightCrawler.

BasicCrawler invokes the user-provided requestHandler for each Request object, which represents a single URL to crawl. The Request objects are fed from the RequestList or RequestQueue instances provided by the requestList or requestQueue constructor options, respectively. If neither the requestList nor the requestQueue option is provided, the crawler opens the default request queue either when the crawler.addRequests() function is called, or when the requests parameter (representing the initial requests) is passed to the crawler.run() function.
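A minimal sketch of the second approach (the handler body and URL are illustrative): calling crawler.addRequests() opens the default request queue implicitly, and run() without arguments then drains it.

import { BasicCrawler } from 'crawlee';

const crawler = new BasicCrawler({
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});

// Opens the default RequestQueue and enqueues the request into it
await crawler.addRequests(['http://www.example.com/page-1']);

// Processes the queue until it is empty
await crawler.run();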

If both the requestList and requestQueue options are used, the instance first processes URLs from the RequestList and automatically enqueues all of them to the RequestQueue before it starts processing them. This ensures that a single URL is not crawled multiple times.
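As a rough illustration of combining both sources (the list name and URL are made up), the crawler drains the RequestList into the RequestQueue and then works through the queue:

import { BasicCrawler, RequestList, RequestQueue } from 'crawlee';

// A static list of start URLs and a dynamic queue for requests discovered later
const requestList = await RequestList.open('start-urls', [
    'http://www.example.com/page-1',
]);
const requestQueue = await RequestQueue.open();

const crawler = new BasicCrawler({
    requestList,
    requestQueue,
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});

await crawler.run();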

The crawler finishes if there are no more Request objects to crawl.

New requests are only dispatched when there is enough free CPU and memory available, using the functionality provided by the AutoscaledPool class. All AutoscaledPool configuration options can be passed to the autoscaledPoolOptions parameter of the BasicCrawler constructor. For user convenience, the minConcurrency and maxConcurrency options of the underlying AutoscaledPool constructor are available directly in the BasicCrawler constructor.
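For instance, the concurrency limits can be set directly on the crawler, while any other AutoscaledPool option goes through autoscaledPoolOptions; the values below are arbitrary:

import { BasicCrawler } from 'crawlee';

const crawler = new BasicCrawler({
    // Shortcuts for the underlying AutoscaledPool
    minConcurrency: 5,
    maxConcurrency: 50,
    // Any other AutoscaledPool option can be passed here
    autoscaledPoolOptions: {
        desiredConcurrencyRatio: 0.9,
    },
    async requestHandler({ request, log }) {
        log.info(`Processing ${request.url}`);
    },
});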

Example usage

import { BasicCrawler, Dataset } from 'crawlee';

// Create a crawler instance
const crawler = new BasicCrawler({
    async requestHandler({ request, sendRequest }) {
        // 'request' contains an instance of the Request class
        // Here we simply fetch the HTML of the page and store it to a dataset
        const { body } = await sendRequest({
            url: request.url,
            method: request.method,
            body: request.payload,
            headers: request.headers,
        });

        await Dataset.pushData({
            url: request.url,
            html: body,
        });
    },
});

// Enqueue the initial requests and run the crawler
await crawler.run([
    'http://www.example.com/page-1',
    'http://www.example.com/page-2',
]);

Index

Crawlers

Other

AddRequestsBatchedOptions

AddRequestsBatchedResult

AutoscaledPool

Re-exports AutoscaledPool

AutoscaledPoolOptions

BaseHttpClient

Re-exports BaseHttpClient

BaseHttpResponseData

BLOCKED_STATUS_CODES

checkStorageAccess

ClientInfo

Re-exports ClientInfo

Configuration

Re-exports Configuration

ConfigurationOptions

Cookie

Re-exports Cookie

CrawlingContext

Re-exports CrawlingContext

CreateSession

Re-exports CreateSession

CriticalError

Re-exports CriticalError

Dataset

Re-exports Dataset

DatasetConsumer

Re-exports DatasetConsumer

DatasetContent

Re-exports DatasetContent

DatasetDataOptions

DatasetExportOptions

DatasetExportToOptions

DatasetIteratorOptions

DatasetMapper

Re-exports DatasetMapper

DatasetOptions

Re-exports DatasetOptions

DatasetReducer

Re-exports DatasetReducer

enqueueLinks

Re-exports enqueueLinks

EnqueueLinksOptions

EnqueueStrategy

Re-exports EnqueueStrategy

ErrnoException

Re-exports ErrnoException

ErrorSnapshotter

Re-exports ErrorSnapshotter

ErrorTracker

Re-exports ErrorTracker

ErrorTrackerOptions

EventManager

Re-exports EventManager

EventType

Re-exports EventType

EventTypeName

Re-exports EventTypeName

filterRequestsByPatterns

FinalStatistics

Re-exports FinalStatistics

GetUserDataFromRequest

GlobInput

Re-exports GlobInput

GlobObject

Re-exports GlobObject

GotScrapingHttpClient

HttpRequest

Re-exports HttpRequest

HttpRequestOptions

HttpResponse

Re-exports HttpResponse

IRequestList

Re-exports IRequestList

IStorage

Re-exports IStorage

KeyConsumer

Re-exports KeyConsumer

KeyValueStore

Re-exports KeyValueStore

KeyValueStoreIteratorOptions

KeyValueStoreOptions

LoadedRequest

Re-exports LoadedRequest

LocalEventManager

log

Re-exports log

Log

Re-exports Log

Logger

Re-exports Logger

LoggerJson

Re-exports LoggerJson

LoggerOptions

Re-exports LoggerOptions

LoggerText

Re-exports LoggerText

LogLevel

Re-exports LogLevel

MAX_POOL_SIZE

Re-exports MAX_POOL_SIZE

NonRetryableError

PERSIST_STATE_KEY

PersistenceOptions

processHttpRequestOptions

ProxyConfiguration

ProxyConfigurationFunction

ProxyConfigurationOptions

ProxyInfo

Re-exports ProxyInfo

PseudoUrl

Re-exports PseudoUrl

PseudoUrlInput

Re-exports PseudoUrlInput

PseudoUrlObject

Re-exports PseudoUrlObject

purgeDefaultStorages

PushErrorMessageOptions

QueueOperationInfo

RecordOptions

Re-exports RecordOptions

RedirectHandler

Re-exports RedirectHandler

RegExpInput

Re-exports RegExpInput

RegExpObject

Re-exports RegExpObject

Request

Re-exports Request

RequestHandlerResult

RequestList

Re-exports RequestList

RequestListOptions

RequestListSourcesFunction

RequestListState

Re-exports RequestListState

RequestOptions

Re-exports RequestOptions

RequestProvider

Re-exports RequestProvider

RequestProviderOptions

RequestQueue

Re-exports RequestQueue

RequestQueueOperationOptions

RequestQueueOptions

RequestQueueV1

Re-exports RequestQueueV1

RequestQueueV2

Re-exports RequestQueueV2

RequestState

Re-exports RequestState

RequestTransform

Re-exports RequestTransform

ResponseLike

Re-exports ResponseLike

ResponseTypes

Re-exports ResponseTypes

RestrictedCrawlingContext

RetryRequestError

Router

Re-exports Router

RouterHandler

Re-exports RouterHandler

RouterRoutes

Re-exports RouterRoutes

Session

Re-exports Session

SessionError

Re-exports SessionError

SessionOptions

Re-exports SessionOptions

SessionPool

Re-exports SessionPool

SessionPoolOptions

SessionState

Re-exports SessionState

SitemapRequestList

SitemapRequestListOptions

SnapshotResult

Re-exports SnapshotResult

Snapshotter

Re-exports Snapshotter

SnapshotterOptions

Source

Re-exports Source

StatisticPersistedState

Statistics

Re-exports Statistics

StatisticsOptions

StatisticState

Re-exports StatisticState

StorageClient

Re-exports StorageClient

StorageManagerOptions

StreamingHttpResponse

SystemInfo

Re-exports SystemInfo

SystemStatus

Re-exports SystemStatus

SystemStatusOptions

TieredProxy

Re-exports TieredProxy

tryAbsoluteURL

Re-exports tryAbsoluteURL

UrlPatternObject

Re-exports UrlPatternObject

useState

Re-exports useState

UseStateOptions

Re-exports UseStateOptions

withCheckedStorageAccess

ErrorHandler

ErrorHandler<Context>: (inputs: LoadedContext<Context>, error: Error) => Awaitable<void>

Type parameters

Type declaration

    • (inputs: LoadedContext<Context>, error: Error): Awaitable<void>
    • Parameters

      • inputs: LoadedContext<Context>
      • error: Error

      Returns Awaitable<void>
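An ErrorHandler is typically supplied via the crawler's errorHandler option, which is invoked when the requestHandler throws, before the request is retried. A minimal sketch (the log message is illustrative):

import { BasicCrawler } from 'crawlee';

const crawler = new BasicCrawler({
    async requestHandler({ request }) {
        // ... download and process request.url
    },
    // Runs after a failed attempt, before the request is retried
    errorHandler: async ({ request, log }, error) => {
        log.warning(`Retrying ${request.url}: ${error.message}`);
    },
});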

RequestHandler

RequestHandler<Context>: (inputs: LoadedContext<Context>) => Awaitable<void>

Type parameters

Type declaration

    • (inputs: LoadedContext<Context>): Awaitable<void>
    • Parameters

      • inputs: LoadedContext<Context>

      Returns Awaitable<void>
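The handler does not have to be defined inline; a standalone function matching this type can be passed as the requestHandler option. A rough sketch, assuming the context type BasicCrawlingContext re-exported by crawlee:

import { BasicCrawler, type BasicCrawlingContext } from 'crawlee';

// Standalone request handler; the crawler calls it for every Request
const requestHandler = async ({ request, log }: BasicCrawlingContext) => {
    log.info(`Crawling ${request.url}`);
};

const crawler = new BasicCrawler({ requestHandler });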

StatusMessageCallback

StatusMessageCallback<Context, Crawler>: (params: StatusMessageCallbackParams<Context, Crawler>) => Awaitable<void>

Type parameters

Type declaration

const BASIC_CRAWLER_TIMEOUT_BUFFER_SECS

BASIC_CRAWLER_TIMEOUT_BUFFER_SECS: 10 = 10

Additional number of seconds used in CheerioCrawler and BrowserCrawler to set a reasonable requestHandlerTimeoutSecs for BasicCrawler, so that it does not impair functionality (i.e. does not time out before the page-level crawlers do).
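For example, with requestHandlerTimeoutSecs set to 60 on CheerioCrawler, the underlying BasicCrawler would effectively allow roughly 60 + 10 = 70 seconds, leaving room for the page-level handler to finish or fail first.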
