Version: 3.6

BasicCrawlingContext<UserData>

Properties

crawler

crawler: BasicCrawler<BasicCrawlingContext<Dictionary>>

id

id: string

log

log: Log

optional proxyInfo

proxyInfo?: ProxyInfo

An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.
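
A handler might, for instance, log which proxy served the current request. A minimal sketch, assuming the crawler was created with a ProxyConfiguration (otherwise proxyInfo stays undefined):

async requestHandler({ proxyInfo, log }) {
    // proxyInfo is only set when a ProxyConfiguration is configured on the crawler
    log.info(`Fetched via proxy: ${proxyInfo?.url ?? 'none'}`);
},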

request

request: Request<UserData>

The original Request object.

optional session

session?: Session

Methods

enqueueLinks

  • This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.

    Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.

    Check out the Crawl a website with relative links example for more details regarding its usage.

    Example usage

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            urls: [...],
        });
    },
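
    A fuller sketch combining several of the options described below; the URL, glob pattern, and label are illustrative assumptions, not values prescribed by this API:

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            // illustrative URLs; with BasicCrawler there is no page to parse, so they are passed explicitly
            urls: ['https://example.com/products'],
            // hypothetical pattern: only follow product detail pages
            globs: ['https://example.com/products/**'],
            label: 'DETAIL',
            limit: 100,
        });
    },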

    Parameters

    • optional options: { baseUrl?: string; exclude?: (GlobInput | RegExpInput)[]; forefront?: boolean; globs?: GlobInput[]; label?: string; limit?: number; pseudoUrls?: PseudoUrlInput[]; regexps?: RegExpInput[]; requestQueue?: RequestProvider; selector?: string; skipNavigation?: boolean; strategy?: EnqueueStrategy | 'all' | 'same-domain' | 'same-hostname' | 'same-origin'; transformRequestFunction?: RequestTransform; urls: string[]; userData?: Dictionary }

      All enqueueLinks() parameters are passed via an options object.

      • optional baseUrl: string

        A base URL that will be used to resolve relative URLs when using Cheerio. Ignored when using Puppeteer, since the relative URL resolution is done inside the browser automatically.

      • optional exclude: (GlobInput | RegExpInput)[]

        An array of glob pattern strings, regexp patterns or plain objects containing patterns matching URLs that will never be enqueued.

        The plain objects must include either the glob property or the regexp property. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        Glob matching is always case-insensitive. If you need case-sensitive matching, provide a regexp.
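
        A sketch of combining globs with exclude; the URLs and patterns are illustrative assumptions:

        await enqueueLinks({
            urls: ['https://example.com'],
            globs: ['https://example.com/**'],
            // never enqueue logout links or PDF files (illustrative patterns)
            exclude: ['https://example.com/**/logout*', /\.pdf$/i],
        });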

      • optional forefront: boolean = false

        If set to true:

        • while adding the request to the queue: the request will be added to the foremost position in the queue.
        • while reclaiming the request: the request will be placed at the beginning of the queue, so that it's returned in the next call to RequestQueue.fetchNextRequest. By default, it's placed at the end of the queue.
      • optional globs: GlobInput[]

        An array of glob pattern strings or plain objects containing glob pattern strings matching the URLs to be enqueued.

        The plain objects must include at least the glob property, which holds the glob pattern string. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        The matching is always case-insensitive. If you need case-sensitive matching, use the regexps property instead.

        If globs is an empty array or undefined, and regexps are also not defined, then the function enqueues the links with the same subdomain.
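
        A sketch of the plain-object form, where the extra keys become request options of the enqueued Requests (URLs, patterns, and labels are illustrative):

        await enqueueLinks({
            urls: ['https://example.com'],
            globs: [
                { glob: 'https://example.com/category/**', label: 'CATEGORY' },
                // remaining keys such as userData become options of the enqueued Request
                { glob: 'https://example.com/product/**', label: 'DETAIL', userData: { source: 'listing' } },
            ],
        });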

      • optional label: string

        Sets Request.label for newly enqueued requests.

      • optional limit: number

        Limits the number of URLs that will actually be enqueued. Useful for testing across the entire crawling scope.

      • optional pseudoUrls: PseudoUrlInput[]

        NOTE: In future versions of the SDK this option will be removed. Please use globs or regexps instead.

        An array of PseudoUrl strings or plain objects containing PseudoUrl strings matching the URLs to be enqueued.

        The plain objects must include at least the purl property, which holds the pseudo-URL string. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        With a pseudo-URL string, the matching is always case-insensitive. If you need case-sensitive matching, use the regexps property instead.

        If pseudoUrls is an empty array or undefined, then the function enqueues the links with the same subdomain.

        @deprecated

        prefer using globs or regexps instead

      • optional regexps: RegExpInput[]

        An array of regular expressions or plain objects containing regular expressions matching the URLs to be enqueued.

        The plain objects must include at least the regexp property, which holds the regular expression. All remaining keys will be used as request options for the corresponding enqueued Request objects.

        If regexps is an empty array or undefined, and globs are also not defined, then the function enqueues the links with the same subdomain.
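
        An equivalent sketch using regular expressions; the URL and patterns are illustrative:

        await enqueueLinks({
            urls: ['https://example.com'],
            regexps: [
                // unlike globs, regexps allow case-sensitive matching
                /https:\/\/example\.com\/Products\/.+/,
                { regexp: /\/docs\//i, label: 'DOCS' },
            ],
        });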

      • optional requestQueue: RequestProvider

        A request queue to which the URLs will be enqueued.

      • optional selector: string

        A CSS selector matching links to be enqueued.

      • optional skipNavigation: boolean = false

        If set to true, tells the crawler to skip navigation and process the request directly.

      • optional strategy: EnqueueStrategy | 'all' | 'same-domain' | 'same-hostname' | 'same-origin' = EnqueueStrategy.SameHostname

        The strategy to use when enqueueing the URLs.

        Depending on the strategy you select, we will only check certain parts of the URLs found. Here is a diagram of each URL part and its name:

        Protocol          Domain
        ┌────┐          ┌─────────┐
        https://example.crawlee.dev/...
        │       └─────────────────┤
        │        Hostname         │
        │                         │
        └─────────────────────────┘
                   Origin
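
        For example, a crawl restricted to the current hostname might look like this (a sketch; the string form and the EnqueueStrategy enum are interchangeable):

        await enqueueLinks({
            urls: ['https://example.crawlee.dev'],
            // enqueue links to example.crawlee.dev only, not to other subdomains of crawlee.dev
            strategy: 'same-hostname',
        });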
      • optional transformRequestFunction: RequestTransform

        Just before a new Request is constructed and enqueued to the RequestQueue, this function can be used to remove it or modify its contents such as userData, payload or, most importantly, uniqueKey. This is useful when you need to enqueue multiple Requests to the queue that share the same URL, but differ in methods or payloads, or to dynamically update or create userData.

        For example: by adding keepUrlFragment: true to the request object, URL fragments will not be removed when uniqueKey is computed.

        Example:

        {
            transformRequestFunction: (request) => {
                request.userData.foo = 'bar';
                request.keepUrlFragment = true;
                return request;
            }
        }

        Note that transformRequestFunction takes priority over the request options specified in globs, regexps, or pseudoUrls objects, so some of those options could be overwritten by transformRequestFunction.

      • urls: string[]

        An array of URLs to enqueue.

      • optional userData: Dictionary

        Sets Request.userData for newly enqueued requests.

    Returns Promise<BatchAddRequestsResult>

    Promise that resolves to a BatchAddRequestsResult object.

pushData

  • pushData(...args: [data: Dictionary | Dictionary[]]): Promise<void>
  • This function allows you to push data to the default Dataset currently used by the crawler.

    Shortcut for crawler.pushData().


    Parameters

    • rest ...args: [data: Dictionary | Dictionary[]]

    Returns Promise<void>
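
    A minimal sketch of storing one result per handled request; the stored fields are illustrative:

    async requestHandler({ request, pushData }) {
        await pushData({
            url: request.url,
            // illustrative field; real items depend on what the handler extracts
            crawledAt: new Date().toISOString(),
        });
    },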

sendRequest

  • sendRequest<Response>(overrideOptions?: Partial<OptionsInit>): Promise<Response<Response>>
  • Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.

    This is handy when you work with a browser crawler but want to execute some requests outside of it (e.g. API requests). Check the Skipping navigations for certain requests example for a more detailed explanation of how to do that.

    async requestHandler({ sendRequest }) {
        const { body } = await sendRequest({
            // override headers only
            headers: { ... },
        });
    },

    Type parameters

    • Response = string

    Parameters

    • optional overrideOptions: Partial<OptionsInit>

    Returns Promise<Response<Response>>
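
    Since Response defaults to string, a typed JSON call is a matter of supplying the type parameter together with got's responseType option. A sketch; the endpoint and body shape are illustrative assumptions:

    async requestHandler({ sendRequest }) {
        // body is parsed JSON thanks to responseType: 'json' (a got option supported by got-scraping)
        const { body } = await sendRequest<{ items: string[] }>({
            url: 'https://api.example.com/items',
            responseType: 'json',
        });
        console.log(body.items.length);
    },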