CheerioCrawlingContext<UserData, JSONData>
Hierarchy
- InternalHttpCrawlingContext<UserData, JSONData, CheerioCrawler>
- CheerioCrawlingContext
Index
Properties
$
The Cheerio object with parsed HTML. Cheerio is available only for HTML and XML content types.
inherited addRequests
Add requests directly to the request queue.
Type declaration
Parameters
requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[]
optional options: ReadonlyObjectDeep<RequestQueueOperationOptions>
Options for the request queue
Returns Promise<void>
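A minimal sketch of a request handler calling addRequests; the URLs and the 'DETAIL' label are hypothetical, chosen only for illustration:

```javascript
// Sketch of a request handler that enqueues follow-up requests.
// The URLs and the 'DETAIL' label below are hypothetical.
async function requestHandler({ addRequests }) {
    await addRequests([
        // Plain string URLs are accepted...
        'https://www.example.com/products/1',
        // ...as are partial RequestOptions objects.
        { url: 'https://www.example.com/products/2', label: 'DETAIL' },
    ]);
}
```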
inherited body
The request body of the web page.
The type depends on the Content-Type header of the web page:
- String for text/html, application/xhtml+xml, application/xml MIME content types
- Buffer for other MIME content types
inherited contentType
Parsed Content-Type header: { type, encoding }.
Type declaration
encoding: BufferEncoding
type: string
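The typing rules above can be sketched with an illustrative helper (not part of the crawler API) that normalizes body to a string:

```javascript
// Illustrative helper (not part of the crawler API) that turns `body`
// into a string, following the typing rules described above.
const STRING_BODY_TYPES = ['text/html', 'application/xhtml+xml', 'application/xml'];

function decodeBody(body, contentType) {
    // For the HTML/XML MIME types above, `body` is already a string.
    if (STRING_BODY_TYPES.includes(contentType.type)) return body;
    // For any other MIME type it is a Buffer, decoded with the parsed encoding.
    return body.toString(contentType.encoding);
}
```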
inherited crawler
inherited getKeyValueStore
Get a key-value store with the given name or ID, or the default one for the crawler.
Type declaration
Parameters
optional idOrName: string
Returns Promise<KeyValueStore>
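A sketch of persisting a value into a named key-value store; the store name and key are made up for this example:

```javascript
// Sketch: persisting a value into a named key-value store.
// The store name and key are hypothetical.
async function requestHandler({ getKeyValueStore, request }) {
    const store = await getKeyValueStore('crawl-metadata');
    await store.setValue('last-visited-url', request.url);
}
```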
inherited id
inherited json
The object parsed from the JSON string, if the response contains the content type application/json.
inherited log
A preconfigured logger for the request handler.
optional inherited proxyInfo
An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.
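A sketch combining proxyInfo with the context logger; note that proxyInfo is undefined when the crawler runs without a ProxyConfiguration:

```javascript
// Sketch: logging the proxy in use. `proxyInfo` is undefined
// when the crawler runs without a ProxyConfiguration.
async function requestHandler({ proxyInfo, log }) {
    if (proxyInfo) {
        log.info(`Fetched via proxy: ${proxyInfo.url}`);
    }
}
```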
inherited request
The original Request object.
inherited response
optional inherited session
inherited useState
Returns the state - a piece of mutable, persistent data shared across all request handler runs.
Type declaration
Parameters
optional defaultValue: State
Returns Promise<State>
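A sketch of a counter shared by all request handler runs; the `{ pagesHandled }` state shape is chosen just for this example:

```javascript
// Sketch: a counter shared by all request handler runs.
// The `{ pagesHandled }` state shape is hypothetical.
async function requestHandler({ useState }) {
    const state = await useState({ pagesHandled: 0 });
    // Mutations of the returned object are persisted automatically.
    state.pagesHandled += 1;
}
```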
Methods
inherited enqueueLinks
This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.
Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.
Check out the Crawl a website with relative links example for more details regarding its usage.
Example usage
```javascript
async requestHandler({ enqueueLinks }) {
    await enqueueLinks({
        globs: [
            'https://www.example.com/handbags/*',
        ],
    });
},
```
Parameters
optional options: ReadonlyObjectDeep<Omit<EnqueueLinksOptions, "requestQueue">> & Pick<EnqueueLinksOptions, "requestQueue">
All
enqueueLinks()
parameters are passed via an options object.
Returns Promise<BatchAddRequestsResult>
Promise that resolves to BatchAddRequestsResult object.
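The description above also mentions filtering by regular expressions; a sketch using the regexps option, with a hypothetical pattern:

```javascript
// Sketch: filtering enqueued links with a regular expression
// instead of a glob. The pattern is hypothetical.
async function requestHandler({ enqueueLinks }) {
    await enqueueLinks({
        // Regular expressions can be used instead of (or alongside) globs.
        regexps: [/https:\/\/www\.example\.com\/handbags\/.+/],
    });
}
```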
parseWithCheerio
Returns the Cheerio handle. This method exists to unify the crawler API, so all crawlers have this handy method. It has the same return type as the $ context property; use it only if you are abstracting your workflow to support different context types in one handler. When provided with the selector argument, it throws if the selector is not available.
Example usage:
```javascript
async requestHandler({ parseWithCheerio }) {
    const $ = await parseWithCheerio();
    const title = $('title').text();
},
```
Parameters
optional selector: string
optional timeoutMs: number
Returns Promise<CheerioAPI>
inherited pushData
This function allows you to push data to a Dataset specified by name, or the one currently used by the crawler.
Shortcut for crawler.pushData().
Parameters
optional data: ReadonlyDeep<Dictionary | Dictionary[]>
Data to be pushed to the default dataset.
optional datasetIdOrName: string
Returns Promise<void>
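A sketch of storing one record per page in the default dataset; the record shape is chosen for this example:

```javascript
// Sketch: storing one record per page in the default dataset.
// The record shape here is hypothetical.
async function requestHandler({ pushData, request, $ }) {
    await pushData({
        url: request.url,
        title: $('title').text(),
    });
}
```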
inherited sendRequest
Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.
This is handy when you work with a browser crawler but want to execute some requests outside it (e.g. API requests). Check the Skipping navigations for certain requests example for a more detailed explanation of how to do that.
```javascript
async requestHandler({ sendRequest }) {
    const { body } = await sendRequest({
        // override headers only
        headers: { ... },
    });
},
```
Parameters
optional overrideOptions: Partial<OptionsInit>
Returns Promise<Response<Response>>
waitForSelector
Wait for an element matching the selector to appear. Timeout is ignored.
Example usage:
```javascript
async requestHandler({ waitForSelector, parseWithCheerio }) {
    await waitForSelector('article h1');
    const $ = await parseWithCheerio();
    const title = $('title').text();
},
```
Parameters
selector: string
optional timeoutMs: number
Returns Promise<void>