BasicCrawlingContext<UserData>
Hierarchy
- CrawlingContext<BasicCrawler, UserData>
- BasicCrawlingContext
Index
Properties
addRequests
Add requests directly to the request queue.
Type declaration
Parameters
requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[]
optional options: ReadonlyObjectDeep<RequestQueueOperationOptions>
Options for the request queue
Returns Promise<void>
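For example, a minimal sketch of enqueueing extra requests from inside a request handler; the URLs and the DETAIL label are illustrative:
async requestHandler({ addRequests }) {
    await addRequests([
        'https://example.com/next-page',
        { url: 'https://example.com/detail', label: 'DETAIL' },
    ]);
},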
crawler
getKeyValueStore
Get a key-value store with the given name or ID, or the default one for the crawler.
Type declaration
Parameters
optional idOrName: string
Returns Promise<KeyValueStore>
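For example, a minimal sketch of writing a value to the crawler's default key-value store; the key name is illustrative:
async requestHandler({ getKeyValueStore, request }) {
    const store = await getKeyValueStore();
    // Persist the last processed URL under an illustrative key.
    await store.setValue('LAST_URL', request.url);
},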
id
log
A preconfigured logger for the request handler.
optional proxyInfo
An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.
request
The original Request object.
optional session
useState
Returns the state - a piece of mutable persistent data shared across all the request handler runs.
Type declaration
Type parameters
- State: Dictionary = Dictionary
Parameters
optional defaultValue: State
Returns Promise<State>
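For example, a minimal sketch of a counter shared across all request handler runs; the state shape is illustrative:
async requestHandler({ useState }) {
    // The returned object is persisted automatically; mutate it in place.
    const state = await useState({ pagesProcessed: 0 });
    state.pagesProcessed += 1;
},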
Methods
enqueueLinks
This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.
Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.
Check out the Crawl a website with relative links example for more details regarding its usage.
Example usage
async requestHandler({ enqueueLinks }) {
    await enqueueLinks({
        urls: [...],
    });
},
Parameters
optional options: { baseUrl?: string; exclude?: readonly (GlobInput | RegExpInput)[]; forefront?: boolean; globs?: readonly GlobInput[]; label?: string; limit?: number; pseudoUrls?: readonly PseudoUrlInput[]; regexps?: readonly RegExpInput[]; requestQueue?: RequestProvider; selector?: string; skipNavigation?: boolean; strategy?: EnqueueStrategy | 'all' | 'same-domain' | 'same-hostname' | 'same-origin'; transformRequestFunction?: RequestTransform; urls: readonly string[]; userData?: Dictionary }
All enqueueLinks() parameters are passed via an options object.
optional baseUrl: string
A base URL that will be used to resolve relative URLs when using Cheerio. Ignored when using Puppeteer, since the relative URL resolution is done inside the browser automatically.
optionalexclude: readonly (GlobInput | RegExpInput)[]
An array of glob pattern strings, regexp patterns or plain objects containing patterns matching URLs that will never be enqueued.
The plain objects must include either the glob property or the regexp property. All remaining keys will be used as request options for the corresponding enqueued Request objects.
Glob matching is always case-insensitive. If you need case-sensitive matching, provide a regexp.
optional forefront: boolean = false
If set to true:
- while adding the request to the queue: the request will be added to the foremost position in the queue.
- while reclaiming the request: the request will be placed at the beginning of the queue, so that it's returned in the next call to RequestQueue.fetchNextRequest. By default, it's put at the end of the queue.
optionalglobs: readonly GlobInput[]
An array of glob pattern strings or plain objects containing glob pattern strings matching the URLs to be enqueued.
The plain objects must include at least the glob property, which holds the glob pattern string. All remaining keys will be used as request options for the corresponding enqueued Request objects.
The matching is always case-insensitive. If you need case-sensitive matching, use the regexps property directly.
If globs is an empty array or undefined, and regexps are also not defined, then the function enqueues the links with the same subdomain.
optional label: string
Sets Request.label for newly enqueued requests.
optional limit: number
Limits the number of URLs that are actually enqueued. Useful for testing across the entire crawling scope.
optional pseudoUrls: readonly PseudoUrlInput[]
NOTE: In future versions of the SDK this option will be removed. Please use globs or regexps instead.
An array of PseudoUrl strings or plain objects containing PseudoUrl strings matching the URLs to be enqueued.
The plain objects must include at least the purl property, which holds the pseudo-URL string. All remaining keys will be used as request options for the corresponding enqueued Request objects.
With a pseudo-URL string, the matching is always case-insensitive. If you need case-sensitive matching, use the regexps property directly.
If pseudoUrls is an empty array or undefined, then the function enqueues the links with the same subdomain.
optional regexps: readonly RegExpInput[]
An array of regular expressions or plain objects containing regular expressions matching the URLs to be enqueued.
The plain objects must include at least the regexp property, which holds the regular expression. All remaining keys will be used as request options for the corresponding enqueued Request objects.
If regexps is an empty array or undefined, and globs are also not defined, then the function enqueues the links with the same subdomain.
optional requestQueue: RequestProvider
A request queue to which the URLs will be enqueued.
optional selector: string
A CSS selector matching links to be enqueued.
optional skipNavigation: boolean = false
If set to true, tells the crawler to skip navigation and process the request directly.
optional strategy: EnqueueStrategy | 'all' | 'same-domain' | 'same-hostname' | 'same-origin' = EnqueueStrategy.SameHostname
The strategy to use when enqueueing the URLs.
Depending on the strategy you select, we will only check certain parts of the URLs found. Here is a diagram of each URL part and their name:
Protocol          Domain
┌────┐          ┌─────────┐
https://example.crawlee.dev/...
│       └─────────────────┤
│           Hostname      │
│                         │
└─────────────────────────┘
          Origin
optional transformRequestFunction: RequestTransform
Just before a new Request is constructed and enqueued to the RequestQueue, this function can be used to remove it or modify its contents such as userData, payload or, most importantly, uniqueKey. This is useful when you need to enqueue multiple Requests to the queue that share the same URL, but differ in methods or payloads, or to dynamically update or create userData.
For example: by adding keepUrlFragment: true to the request object, URL fragments will not be removed when uniqueKey is computed.
Example:
{
    transformRequestFunction: (request) => {
        request.userData.foo = 'bar';
        request.keepUrlFragment = true;
        return request;
    }
}
Note that transformRequestFunction takes priority over request options specified in globs, regexps, or pseudoUrls objects, and thus some options could be overwritten by transformRequestFunction.
urls: readonly string[]
An array of URLs to enqueue.
optional userData: Dictionary
Sets Request.userData for newly enqueued requests.
Returns Promise<BatchAddRequestsResult>
Promise that resolves to BatchAddRequestsResult object.
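For example, a sketch combining several of the options above; the glob patterns, label and userData field are illustrative:
async requestHandler({ enqueueLinks }) {
    await enqueueLinks({
        globs: ['https://example.com/products/**'],
        exclude: ['https://example.com/products/*/reviews/**'],
        label: 'PRODUCT',
        strategy: 'same-hostname',
        transformRequestFunction: (request) => {
            // Illustrative: stamp each enqueued request with its discovery time.
            request.userData.discoveredAt = Date.now();
            return request;
        },
    });
},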
pushData
This function allows you to push data to a Dataset specified by name, or the one currently used by the crawler.
Shortcut for crawler.pushData().
Parameters
optional data: ReadonlyDeep<Dictionary | Dictionary[]>
Data to be pushed to the default dataset.
optional datasetIdOrName: string
Returns Promise<void>
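For example, a minimal sketch of pushing one record per handled request to the default dataset; the field names are illustrative:
async requestHandler({ request, pushData }) {
    await pushData({
        url: request.url,
        crawledAt: new Date().toISOString(),
    });
},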
sendRequest
Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.
This is handy when you work with a browser crawler but want to execute some requests outside it (e.g. API requests). Check out the Skipping navigations for certain requests example for a more detailed explanation of how to do that.
async requestHandler({ sendRequest }) {
    const { body } = await sendRequest({
        // override headers only
        headers: { ... },
    });
},
Type parameters
- Response = string
Parameters
optional overrideOptions: Partial<OptionsInit>
Returns Promise<Response<Response>>
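For example, a sketch of fetching a JSON API outside the browser; the endpoint is illustrative, and url and responseType are assumed here to be standard got-scraping request options:
async requestHandler({ sendRequest }) {
    // responseType: 'json' asks got-scraping to parse the body for us.
    const { body } = await sendRequest<{ items: string[] }>({
        url: 'https://api.example.com/items',
        responseType: 'json',
    });
},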