Version: 3.3

BasicCrawlingContext<UserData>

Properties

crawler

crawler: BasicCrawler<BasicCrawlingContext<Dictionary<any>>>

id

id: string

log

log: Log

optional proxyInfo

proxyInfo?: ProxyInfo

An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.

request

request: Request<UserData>

The original Request object.

optional session

session?: Session
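
A minimal sketch of reading these context properties inside a requestHandler (the target URL and log messages are illustrative, not part of this API):

    import { BasicCrawler } from 'crawlee';

    const crawler = new BasicCrawler({
        async requestHandler({ id, request, log, proxyInfo, session }) {
            // `request` is the original Request object for this invocation.
            log.info(`Handling ${request.url} (context id: ${id})`);

            // `proxyInfo` and `session` are only set when the crawler is
            // configured with a ProxyConfiguration / a session pool.
            if (proxyInfo) log.info(`Using proxy: ${proxyInfo.url}`);
            if (session) log.info(`Session id: ${session.id}`);
        },
    });

    await crawler.run(['https://example.com']);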

Methods

enqueueLinks

  • enqueueLinks(options?: { baseUrl: undefined | string; exclude: undefined | (GlobInput | RegExpInput)[]; forefront: undefined | boolean; globs: undefined | GlobInput[]; label: undefined | string; limit: undefined | number; pseudoUrls: undefined | PseudoUrlInput[]; regexps: undefined | RegExpInput[]; requestQueue: undefined | RequestQueue; selector: undefined | string; strategy: undefined | EnqueueStrategy | "all" | "same-domain" | "same-hostname" | "same-origin"; transformRequestFunction: undefined | RequestTransform; urls: string[]; userData: undefined | Dictionary<any> }): Promise<BatchAddRequestsResult>
  • This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.

    Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.

    Check out the Crawl a website with relative links example for more details regarding its usage.

    Example usage

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            urls: [...],
        });
    },

    Parameters

    • optional options: { baseUrl: undefined | string; exclude: undefined | (GlobInput | RegExpInput)[]; forefront: undefined | boolean; globs: undefined | GlobInput[]; label: undefined | string; limit: undefined | number; pseudoUrls: undefined | PseudoUrlInput[]; regexps: undefined | RegExpInput[]; requestQueue: undefined | RequestQueue; selector: undefined | string; strategy: undefined | EnqueueStrategy | "all" | "same-domain" | "same-hostname" | "same-origin"; transformRequestFunction: undefined | RequestTransform; urls: string[]; userData: undefined | Dictionary<any> }

      All enqueueLinks() parameters are passed via an options object.

      • baseUrl: undefined | string
      • exclude: undefined | (GlobInput | RegExpInput)[]
      • forefront: undefined | boolean
      • globs: undefined | GlobInput[]
      • label: undefined | string
      • limit: undefined | number
      • pseudoUrls: undefined | PseudoUrlInput[]
      • regexps: undefined | RegExpInput[]
      • requestQueue: undefined | RequestQueue
      • selector: undefined | string
      • strategy: undefined | EnqueueStrategy | "all" | "same-domain" | "same-hostname" | "same-origin"
      • transformRequestFunction: undefined | RequestTransform
      • urls: string[]

        An array of URLs to enqueue.

      • userData: undefined | Dictionary<any>

    Returns Promise<BatchAddRequestsResult>

    A promise that resolves to a BatchAddRequestsResult object.
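
    As a hedged sketch, the options above might be combined like this (the URLs and the DETAIL label are made up for illustration; since urls is the only non-optional option in this context, the links to enqueue are supplied explicitly):

    async requestHandler({ enqueueLinks }) {
        await enqueueLinks({
            urls: ['https://example.com/a', 'https://example.com/b'],
            globs: ['https://example.com/**'], // keep only URLs matching this glob
            label: 'DETAIL',                   // attached to each enqueued Request
            limit: 10,                         // enqueue at most 10 requests
        });
    },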

sendRequest

  • sendRequest<Response>(overrideOptions?: Partial<OptionsInit>): Promise<Response<Response>>
  • Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.

    This is handy when you work with a browser crawler but want to execute some requests outside it (e.g. API requests). Check the Skipping navigations for certain requests example for a more detailed explanation of how to do that.

    async requestHandler({ sendRequest }) {
        const { body } = await sendRequest({
            // override headers only
            headers: { ... },
        });
    },

    Type parameters

    • Response = string

    Parameters

    • optional overrideOptions: Partial<OptionsInit>

    Returns Promise<Response<Response>>
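
    A sketch of using the Response type parameter together with got-scraping's responseType option to fetch and parse JSON (the API URL and the shape of the response body are assumptions for illustration):

    async requestHandler({ sendRequest, log }) {
        const res = await sendRequest<{ title: string }>({
            url: 'https://api.example.com/item', // override the target URL
            responseType: 'json',                // got-scraping option: parse body as JSON
        });
        // With the type parameter above, `res.body` is typed as { title: string }.
        log.info(`Fetched title: ${res.body.title}`);
    },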