JSDOMCrawlingContext<UserData, JSONData>
Hierarchy
- InternalHttpCrawlingContext<UserData, JSONData, JSDOMCrawler>
- JSDOMCrawlingContext
Index
Properties
addRequests
Add requests directly to the request queue.
Type declaration
Parameters
requestsLike: readonly (string | ReadonlyObjectDeep<Partial<RequestOptions<Dictionary>> & { regex?: RegExp; requestsFromUrl?: string }> | ReadonlyObjectDeep<Request<Dictionary>>)[]
optional options: ReadonlyObjectDeep<RequestQueueOperationOptions>
Options for the request queue
Returns Promise<void>
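Example usage (a minimal sketch; the URLs and label are placeholders):
async requestHandler({ addRequests }) {
    await addRequests([
        // Plain URL strings and RequestOptions objects can be mixed
        'https://www.example.com/sale',
        { url: 'https://www.example.com/clearance', label: 'CLEARANCE' },
    ]);
},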
body
The request body of the web page.
The type depends on the Content-Type header of the web page:
- String for text/html, application/xhtml+xml, and application/xml MIME content types
- Buffer for other MIME content types
contentType
Parsed Content-Type header: { type, encoding }.
Type declaration
encoding: BufferEncoding
type: string
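Example usage (a sketch showing how body and contentType combine; the logging is illustrative):
async requestHandler({ body, contentType }) {
    if (typeof body === 'string') {
        // text/html, application/xhtml+xml, or application/xml response
        console.log(`Got ${contentType.type} text of length ${body.length}`);
    } else {
        // any other content type arrives as a Buffer
        console.log(`Got ${body.byteLength} bytes of ${contentType.type}`);
    }
},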
crawler
The JSDOMCrawler instance handling the current request.
document
The JSDOM Document object of the loaded page.
getKeyValueStore
Type declaration
Get a key-value store with the given name or ID, or the default one for the crawler.
Parameters
optional idOrName: string
Returns Promise<KeyValueStore>
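Example usage (a sketch; the store name and key are placeholders):
async requestHandler({ request, getKeyValueStore }) {
    // Omit the argument to get the crawler's default store
    const store = await getKeyValueStore('my-store');
    await store.setValue('last-visited-url', request.url);
},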
id
json
The parsed object from the JSON string if the response contains the content type application/json.
log
A preconfigured logger for the request handler.
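Example usage (a minimal sketch):
async requestHandler({ request, log }) {
    log.info(`Processing ${request.url}...`);
},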
optional proxyInfo
An object with information about the proxy currently used by the crawler, as configured by the ProxyConfiguration class.
request
The original Request object.
response
optional session
The Session object bound to the current request, if the session pool is enabled.
useState
Type declaration
Returns the state - a piece of mutable persistent data shared across all the request handler runs.
Type parameters
- State: Dictionary = Dictionary
Parameters
optional defaultValue: State
Returns Promise<State>
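Example usage (a sketch; the state shape is illustrative):
async requestHandler({ useState }) {
    // Mutations are persisted and visible to all other handler runs
    const state = await useState({ pagesVisited: 0 });
    state.pagesVisited += 1;
},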
window
The JSDOM Window object of the loaded page.
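Example usage (a sketch querying the JSDOM document; the selector is a placeholder):
async requestHandler({ document }) {
    // Query the DOM the same way as in a browser
    const heading = document.querySelector('h1')?.textContent;
},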
Methods
enqueueLinks
This function automatically finds and enqueues links from the current page, adding them to the RequestQueue currently used by the crawler.
Optionally, the function allows you to filter the target links' URLs using an array of globs or regular expressions and override settings of the enqueued Request objects.
Check out the Crawl a website with relative links example for more details regarding its usage.
Example usage
async requestHandler({ enqueueLinks }) {
await enqueueLinks({
globs: [
'https://www.example.com/handbags/*',
],
});
},
Parameters
optional options: ReadonlyObjectDeep<Omit<EnqueueLinksOptions, "requestQueue">> & Pick<EnqueueLinksOptions, "requestQueue">
All enqueueLinks() parameters are passed via an options object.
Returns Promise<BatchAddRequestsResult>
Promise that resolves to BatchAddRequestsResult object.
parseWithCheerio
Returns a Cheerio handle, allowing you to work with the data the same way as with CheerioCrawler.
Example usage:
async requestHandler({ parseWithCheerio }) {
const $ = await parseWithCheerio();
const title = $('title').text();
},
Returns Promise<CheerioAPI>
pushData
This function allows you to push data to a Dataset specified by name, or the one currently used by the crawler.
Shortcut for crawler.pushData().
Parameters
optional data: ReadonlyDeep<Dictionary | Dictionary[]>
Data to be pushed to the default dataset.
optional datasetIdOrName: string
Returns Promise<void>
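Example usage (a minimal sketch; the field names are illustrative):
async requestHandler({ request, document, pushData }) {
    // Push one record to the default dataset
    await pushData({
        url: request.url,
        title: document.title,
    });
},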
sendRequest
Fires an HTTP request via got-scraping, allowing you to override the request options on the fly.
This is handy when you work with a browser crawler but want to execute some requests outside it (e.g. API requests). Check out the Skipping navigations for certain requests example for a more detailed explanation of how to do that.
async requestHandler({ sendRequest }) {
const { body } = await sendRequest({
// override headers only
headers: { ... },
});
},
Type parameters
- Response = string
Parameters
optional overrideOptions: Partial<OptionsInit>
Returns Promise<Response<Response>>