Version: 3.9

BasicCrawlingContext<UserData>

Properties

addRequests

addRequests: (requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[], options?: ReadonlyObjectDeep<RequestQueueOperationOptions>) => Promise<void>

Add requests directly to the request queue.


Type declaration

    • (requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[], options?: ReadonlyObjectDeep<RequestQueueOperationOptions>): Promise<void>
    • Parameters

      • requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[]
      • optional options: ReadonlyObjectDeep<RequestQueueOperationOptions>

        Options for the request queue

      Returns Promise<void>
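
Example usage (a minimal sketch; the example.com URLs are placeholders):

    async requestHandler({ addRequests, request, log }) {
        log.info(`Processing ${request.url}`);
        // Enqueue follow-up requests directly, as plain URL strings or request options.
        await addRequests([
            'https://example.com/page/2',
            { url: 'https://example.com/page/3', userData: { page: 3 } },
        ]);
    },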

crawler

crawler: BasicCrawler<BasicCrawlingContext<Dictionary>>

getKeyValueStore

getKeyValueStore: (idOrName?: string) => Promise<KeyValueStore>

Get a key-value store with the given name or id, or the default one for the crawler.


Type declaration

    • (idOrName?: string): Promise<KeyValueStore>
    • Parameters

      • optional idOrName: string

      Returns Promise<KeyValueStore>
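
Example usage (a minimal sketch; the store name 'my-cache' and the keys are illustrative, assuming the standard KeyValueStore getValue/setValue helpers):

    async requestHandler({ getKeyValueStore, request }) {
        // Default key-value store of the crawler run.
        const store = await getKeyValueStore();
        await store.setValue('last-url', { url: request.url });

        // A named store, created on first access.
        const cache = await getKeyValueStore('my-cache');
        const settings = await cache.getValue<{ retries: number }>('settings');
    },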

id

id: string

log

log: Log

A preconfigured logger for the request handler.
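
For example (the messages and extra data are illustrative):

    async requestHandler({ request, log }) {
        log.info('Processing page', { url: request.url });
        log.debug('Request user data', { userData: request.userData });
    },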

optional proxyInfo

proxyInfo?: ProxyInfo

An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.
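
For example (a sketch, assuming the ProxyInfo.url field):

    async requestHandler({ proxyInfo, log }) {
        // proxyInfo is undefined when no ProxyConfiguration is set on the crawler.
        if (proxyInfo) {
            log.info(`Using proxy ${proxyInfo.url}`);
        }
    },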

request

request: Request<UserData>

The original Request object.

optional session

session?: Session

useState

useState: <State>(defaultValue?: State) => Promise<State>

Returns the state, a piece of mutable persistent data shared across all request handler runs.


Type declaration

    • <State>(defaultValue?: State): Promise<State>
    • Type parameters

      • State: Dictionary = Dictionary

      Parameters

      • optional defaultValue: State

      Returns Promise<State>
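
Example usage (a minimal sketch; the state shape is illustrative):

    async requestHandler({ useState, request }) {
        // The same object is returned on every call and persisted with the crawler state.
        const state = await useState({ processedUrls: [] as string[] });
        state.processedUrls.push(request.url);
    },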

Methods

enqueueLinks

  • This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.

    Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.

    Check out the Crawl a website with relative links example for more details regarding its usage.

    Example usage

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            urls: [...],
        });
    },

    Parameters

    • optional options: { baseUrl?: string; exclude?: readonly (GlobInput | RegExpInput)[]; forefront?: boolean; globs?: readonly GlobInput[]; label?: string; limit?: number; pseudoUrls?: readonly PseudoUrlInput[]; regexps?: readonly RegExpInput[]; requestQueue?: RequestProvider; selector?: string; skipNavigation?: boolean; strategy?: EnqueueStrategy | "all" | "same-domain" | "same-hostname" | "same-origin"; transformRequestFunction?: RequestTransform; urls: readonly string[]; userData?: Dictionary }

      All enqueueLinks() parameters are passed via an options object.

      • optional baseUrl: string

        A base URL that will be used to resolve relative URLs when using Cheerio. Ignored when using Puppeteer, since the relative URL resolution is done inside the browser automatically.

      • optional exclude: readonly (GlobInput | RegExpInput)[]

        An array of glob pattern strings, regexp patterns or plain objects containing patterns matching URLs that will never be enqueued.

        The plain objects must include either the glob property or the regexp property. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        Glob matching is always case-insensitive. If you need case-sensitive matching, provide a regexp.

      • optional forefront: boolean = false

        If set to true:

        • while adding the request to the queue: the request will be added to the foremost position in the queue.
        • while reclaiming the request: the request will be placed at the beginning of the queue, so that it's returned in the next call to RequestQueue.fetchNextRequest. By default, it's put at the end of the queue.
      • optional globs: readonly GlobInput[]

        An array of glob pattern strings or plain objects containing glob pattern strings matching the URLs to be enqueued.

        The plain objects must include at least the glob property, which holds the glob pattern string. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        The matching is always case-insensitive. If you need case-sensitive matching, use the regexps property instead.

        If globs is an empty array or undefined, and regexps are also not defined, then the function enqueues the links with the same subdomain.

      • optional label: string

        Sets Request.label for newly enqueued requests.

      • optional limit: number

        Limits the number of URLs that will actually be enqueued. Useful for testing across the entire crawling scope.

      • optional pseudoUrls: readonly PseudoUrlInput[]

        NOTE: In future versions of the SDK this option will be removed. Please use globs or regexps instead.

        An array of PseudoUrl strings or plain objects containing PseudoUrl strings matching the URLs to be enqueued.

        The plain objects must include at least the purl property, which holds the pseudo-URL string. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        With a pseudo-URL string, the matching is always case-insensitive. If you need case-sensitive matching, use the regexps property instead.

        If pseudoUrls is an empty array or undefined, then the function enqueues the links with the same subdomain.

        @deprecated

        prefer using globs or regexps instead

      • optional regexps: readonly RegExpInput[]

        An array of regular expressions or plain objects containing regular expressions matching the URLs to be enqueued.

        The plain objects must include at least the regexp property, which holds the regular expression. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        If regexps is an empty array or undefined, and globs are also not defined, then the function enqueues the links with the same subdomain.

      • optional requestQueue: RequestProvider

        A request queue to which the URLs will be enqueued.

      • optional selector: string

        A CSS selector matching links to be enqueued.

      • optional skipNavigation: boolean = false

        If set to true, tells the crawler to skip navigation and process the request directly.

      • optional strategy: EnqueueStrategy | "all" | "same-domain" | "same-hostname" | "same-origin" = EnqueueStrategy.SameHostname

        The strategy to use when enqueueing the URLs.

        Depending on the strategy you select, only certain parts of the found URLs are checked. Here is a diagram of each URL part and its name:

        Protocol          Domain
        ┌────┐          ┌─────────┐
        https://example.crawlee.dev/...
        │       └─────────────────┤
        │              Hostname   │
        │                         │
        └─────────────────────────┘
                   Origin
      • optional transformRequestFunction: RequestTransform

        Just before a new Request is constructed and enqueued to the RequestQueue, this function can be used to remove it or modify its contents such as userData, payload or, most importantly, uniqueKey. This is useful when you need to enqueue multiple Requests that share the same URL but differ in method or payload, or to dynamically update or create userData.

        For example: by adding keepUrlFragment: true to the request object, URL fragments will not be removed when uniqueKey is computed.

        Example:

        {
            transformRequestFunction: (request) => {
                request.userData.foo = 'bar';
                request.keepUrlFragment = true;
                return request;
            },
        }

        Note that transformRequestFunction takes priority over request options specified in globs, regexps, or pseudoUrls objects, so some of those options may be overwritten by transformRequestFunction.

      • urls: readonly string[]

        An array of URLs to enqueue.

      • optional userData: Dictionary

        Sets Request.userData for newly enqueued requests.

    Returns Promise<BatchAddRequestsResult>

    Promise that resolves to BatchAddRequestsResult object.
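
    A fuller sketch combining several of the options above (the URLs, glob and PRODUCT label are placeholders; with BasicCrawlingContext there is no page to parse, so urls must be passed explicitly):

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            urls: ['https://example.com/products/1', 'https://example.com/products/2'],
            globs: ['https://example.com/products/**'],
            label: 'PRODUCT',
            limit: 100,
            userData: { discoveredAt: Date.now() },
        });
    },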

pushData

  • pushData(data?: ReadonlyDeep<Dictionary | Dictionary[]>, datasetIdOrName?: string): Promise<void>
  • This function allows you to push data to a Dataset specified by name, or the one currently used by the crawler.

    Shortcut for crawler.pushData().


    Parameters

    • optional data: ReadonlyDeep<Dictionary | Dictionary[]>

      Data to be pushed to the default dataset.

    • optional datasetIdOrName: string

    Returns Promise<void>
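
    Example usage (a minimal sketch; the dataset name and item fields are illustrative):

    async requestHandler({ request, pushData }) {
        // Push an item to the default dataset...
        await pushData({ url: request.url, status: 'done' });
        // ...or to a named dataset.
        await pushData({ url: request.url }, 'my-named-dataset');
    },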

sendRequest

  • sendRequest<Response>(overrideOptions?: Partial<OptionsInit>): Promise<Response<Response>>
  • Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.

    This is handy when you work with a browser crawler but want to execute some requests outside it (e.g. API requests). Check the Skipping navigations for certain requests example for a more detailed explanation of how to do that.

    async requestHandler({ sendRequest }) {
        const { body } = await sendRequest({
            // override headers only
            headers: { ... },
        });
    },

    Type parameters

    • Response = string

    Parameters

    • optional overrideOptions: Partial<OptionsInit>

    Returns Promise<Response<Response>>
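
    A typed variant (a sketch; the API URL is a placeholder, and responseType: 'json' is a got-scraping option that parses the body):

    async requestHandler({ sendRequest, log }) {
        const { body } = await sendRequest<{ items: string[] }>({
            url: 'https://api.example.com/items',
            responseType: 'json',
        });
        log.info(`Fetched ${body.items.length} items`);
    },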