Request <UserData>
Constructors
constructor
Request parameters, including the URL, HTTP method, headers, and others.

Parameters
- options: RequestOptions<UserData>

Returns Request<UserData>
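For illustration, a minimal construction sketch; the URL, header and user data values are placeholders:

```ts
import { Request } from 'crawlee';

// `url` is the only required option; when `uniqueKey` is not given,
// it is generated automatically from the URL.
const request = new Request({
    url: 'https://example.com/page',
    method: 'GET',
    headers: { Accept: 'text/html' },
    userData: { label: 'START' },
});

console.log(request.uniqueKey); // derived from the normalized URL
```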
Properties
errorMessages
An array of error messages from request processing.
optional handledAt
ISO datetime string that indicates the time when the request was processed. It is null if the request has not been crawled yet.
optional headers
Object with HTTP headers. Keys are header names, values are the header values.
optional id
Request ID
optional loadedUrl
The actually loaded URL after redirects, if present. HTTP redirects are guaranteed to be included.

When using PuppeteerCrawler or PlaywrightCrawler, meta tag and JavaScript redirects may or may not be included, depending on their nature. This generally means that redirects which happen immediately will most likely be included, but delayed redirects will not.
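As a sketch, a request handler can compare the two URLs to detect redirects; CheerioCrawler is used here only as an example context:

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, log }) {
        // `loadedUrl` is only set once the page has been fetched, so it can
        // be compared with the original URL to detect HTTP redirects.
        if (request.loadedUrl && request.loadedUrl !== request.url) {
            log.info(`Redirected: ${request.url} -> ${request.loadedUrl}`);
        }
    },
});

await crawler.run(['http://example.com']);
```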
method
HTTP method, e.g. GET or POST.
noRetry
If true, the request will not be automatically retried on error.
optional payload
HTTP request payload, e.g. for POST requests.
retryCount
Indicates the number of times the crawling of the request has been retried on error.
uniqueKey
A unique key identifying the request.
Two requests with the same uniqueKey are considered to point to the same URL.
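A sketch of the resulting deduplication behavior; the exact normalization rules are Crawlee's own, and the URLs are placeholders:

```ts
import { Request } from 'crawlee';

// Without an explicit uniqueKey, it is computed from the normalized URL,
// so trivially different spellings of the same URL collide.
const a = new Request({ url: 'http://example.com/path' });
const b = new Request({ url: 'HTTP://EXAMPLE.COM/path' });
// a.uniqueKey === b.uniqueKey, so a RequestQueue would enqueue only one of them.

// An explicit uniqueKey makes otherwise identical URLs count as distinct.
const c = new Request({ url: 'http://example.com/path', uniqueKey: 'second-visit' });
```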
url
URL of the web page to crawl.
userData
Custom user data assigned to the request.
Accessors
crawlDepth
Depth of the request in the current crawl tree. Note that this is dependent on the crawler setup and might produce unexpected results when used with multiple crawlers.

Getter
Returns number

Setter
Parameters
- value: number
Returns void
label
Getter
Shortcut for getting request.userData.label.
Returns undefined | string

Setter
Shortcut for setting request.userData.label.
Parameters
- value: undefined | string
Returns void
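For illustration, a sketch of label-based routing inside a request handler; the selector and label name are arbitrary:

```ts
import { CheerioCrawler } from 'crawlee';

const crawler = new CheerioCrawler({
    async requestHandler({ request, enqueueLinks }) {
        if (request.label === 'DETAIL') {
            // Process a detail page...
            return;
        }
        // Passing `label` here sets `userData.label` on every enqueued request.
        await enqueueLinks({ selector: 'a', label: 'DETAIL' });
    },
});

await crawler.run(['https://example.com']);
```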
maxRetries
Maximum number of retries for this request. Allows overriding the global maxRequestRetries option of BasicCrawler.

Getter
Returns undefined | number

Setter
Parameters
- value: undefined | number
Returns void
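A sketch of a per-request override; it assumes, as Crawlee allows, that prepared Request instances can be passed to crawler.run():

```ts
import { CheerioCrawler, Request } from 'crawlee';

const crawler = new CheerioCrawler({
    maxRequestRetries: 5, // global limit for all requests
    async requestHandler({ request }) {
        // ...
    },
});

// This one request is retried at most once, overriding the global limit.
const flaky = new Request({ url: 'https://example.com/flaky' });
flaky.maxRetries = 1;

await crawler.run([flaky]);
```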
sessionRotationCount
Indicates the number of times the crawling of the request has rotated the session due to a session or a proxy error.

Getter
Returns number

Setter
Parameters
- value: number
Returns void
skipNavigation
Tells the crawler processing this request to skip the navigation and process the request directly.

Getter
Returns boolean

Setter
Parameters
- value: boolean
Returns void
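A sketch of mixing navigating and non-navigating requests in one run; the URLs are placeholders:

```ts
import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler({ request, page }) {
        if (request.skipNavigation) {
            // No navigation happened for this request; only bookkeeping here.
            return;
        }
        console.log(await page.title());
    },
});

await crawler.run([
    'https://example.com',
    { url: 'https://example.com/just-record-me', skipNavigation: true },
]);
```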
state
Describes the request's current lifecycle state.

Getter
Returns RequestState

Setter
Parameters
- value: RequestState
Returns void
Methods
pushErrorMessage
Stores information about an error that occurred during processing of this request.

You should always use Error instances when throwing errors in JavaScript. Nevertheless, to improve the debugging experience when using third-party libraries that may not always throw an Error instance, the function performs a type inspection of the passed argument and attempts to extract as much information as possible, since just throwing a bad type error makes any debugging rather difficult.

Parameters
- errorOrMessage: unknown
  Error object or error message to be stored in the request.
- optional options: PushErrorMessageOptions = {}

Returns void
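For illustration, a sketch of manual error bookkeeping; the thrown value is deliberately not an Error instance:

```ts
import { Request } from 'crawlee';

const request = new Request({ url: 'https://example.com' });

try {
    // Third-party code might throw anything, not just Error instances.
    throw { statusCode: 500, reason: 'upstream hiccup' };
} catch (err) {
    // pushErrorMessage inspects the value and extracts what it can.
    request.pushErrorMessage(err);
}

console.log(request.errorMessages); // the accumulated messages for this request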
Represents a URL to be crawled, optionally including HTTP method, headers, payload and other metadata. The Request object also stores information about errors that occurred during processing of the request.

Each Request instance has the uniqueKey property, which can be either specified manually in the constructor or generated automatically from the URL. Two requests with the same uniqueKey are considered to point to the same web resource. This behavior applies to all Crawlee classes, such as RequestList, RequestQueue, PuppeteerCrawler or PlaywrightCrawler.

Example use:
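A minimal sketch; the URLs, payload and user data are placeholders:

```ts
import { Request } from 'crawlee';

// A plain GET request; uniqueKey is generated from the URL.
const getRequest = new Request({
    url: 'http://www.example.com',
    headers: { Accept: 'application/json' },
});

// A POST request carrying a payload and custom user data.
const postRequest = new Request({
    url: 'http://www.example.com/submit',
    method: 'POST',
    payload: JSON.stringify({ foo: 'bar' }),
    headers: { 'Content-Type': 'application/json' },
    userData: { label: 'FORM' },
});
```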