Dataset
crawlee.storages._dataset.Dataset
Index
Constructors
__init__
Parameters
id: str
name: str | None
configuration: Configuration
client: BaseStorageClient
Returns None
Methods
drop
Returns None
export_to
Exports the entire dataset into a specified file stored under a key in a key-value store.
This method consolidates all entries from a specified dataset into one file, which is then saved under a given key in a key-value store. The format of the exported file is determined by the content_type parameter. Either the dataset's ID or name should be specified, and similarly, either the target key-value store's ID or name should be used.
Parameters
kwargs: Unpack[ExportToKwargs]
Returns None
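A minimal sketch of an export, assuming ExportToKwargs accepts key and content_type (and optionally the target key-value store's ID or name):

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')
        await dataset.push_data({'url': 'https://example.com', 'status': 200})

        # Consolidate all items into one CSV file saved under the given key
        # in the default key-value store (key and content_type are assumed
        # fields of ExportToKwargs).
        await dataset.export_to(key='results.csv', content_type='csv')

    asyncio.run(main())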
get_data
Retrieves dataset items based on filtering, sorting, and pagination parameters.
This method allows customization of the data retrieval process from a dataset, supporting operations such as field selection, ordering, and skipping specific records based on provided parameters.
Parameters
kwargs: Unpack[GetDataKwargs]
Returns DatasetItemsListPage
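A sketch of a filtered, paginated read, assuming GetDataKwargs mirrors the iterate_items parameters and that DatasetItemsListPage exposes items and total attributes:

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')

        # Fetch the 20 newest items, skipping empty records.
        page = await dataset.get_data(limit=20, desc=True, skip_empty=True)
        print(f'Got {len(page.items)} of {page.total} items')

    asyncio.run(main())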
get_info
Get an object containing general information about the dataset.
Returns DatasetMetadata | None
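For example (assuming DatasetMetadata carries id, name, and item_count fields):

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')
        info = await dataset.get_info()
        if info is not None:
            # id, name, and item_count are assumed metadata fields.
            print(info.id, info.name, info.item_count)

    asyncio.run(main())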
iterate_items
Iterates over dataset items, applying filtering, sorting, and pagination.
Retrieves dataset items incrementally, allowing fine-grained control over the data fetched. The function supports various parameters to filter, sort, and limit the data returned, facilitating tailored dataset queries.
Parameters
offset: int = 0 (keyword-only)
limit: int | None = None (keyword-only)
clean: bool = False (keyword-only)
desc: bool = False (keyword-only)
fields: list[str] | None = None (keyword-only)
omit: list[str] | None = None (keyword-only)
unwind: str | None = None (keyword-only)
skip_empty: bool = False (keyword-only)
skip_hidden: bool = False (keyword-only)
Returns AsyncIterator[dict]
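A sketch of incremental iteration that restricts each item to selected fields:

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')

        # Stream items lazily instead of loading the whole dataset at once;
        # `fields` keeps only the listed columns in each returned dict.
        async for item in dataset.iterate_items(limit=100, fields=['url', 'title']):
            print(item)

    asyncio.run(main())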
open
Parameters
id: str | None = None (keyword-only)
name: str | None = None (keyword-only)
configuration: Configuration | None = None (keyword-only)
storage_client: BaseStorageClient | None = None (keyword-only)
Returns Dataset
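For example, opening a dataset by name (created on first access) versus the default dataset:

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        # Opened by name, so the dataset is created if it does not exist yet.
        dataset = await Dataset.open(name='my-dataset')

        # With no arguments, the default dataset of the current run is used.
        default_dataset = await Dataset.open()

    asyncio.run(main())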
push_data
Store an object or an array of objects to the dataset.
The size of the data is limited by the receiving API, and therefore push_data() will only allow objects whose JSON representation is smaller than 9MB. When an array is passed, none of the included objects may be larger than 9MB, but the array itself may be of any size.
Parameters
data: JsonSerializable
kwargs: Unpack[PushDataKwargs]
Returns None
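A sketch of both accepted forms, a single object and an array of objects:

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')

        # A single row; its JSON representation must stay under 9MB.
        await dataset.push_data({'url': 'https://example.com', 'title': 'Example'})

        # An array of rows; the 9MB limit applies per object, not to the array.
        await dataset.push_data([
            {'url': 'https://example.com/a', 'title': 'A'},
            {'url': 'https://example.com/b', 'title': 'B'},
        ])

    asyncio.run(main())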
write_to
Exports the entire dataset into an arbitrary stream.
Parameters
content_type: Literal['json', 'csv']
destination: TextIO
Returns None
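A sketch that writes the dataset as CSV into an in-memory text stream, assuming write_to is awaited like the other methods:

    import asyncio
    import io

    from crawlee.storages import Dataset

    async def main() -> None:
        dataset = await Dataset.open(name='results')
        await dataset.push_data({'url': 'https://example.com', 'status': 200})

        # Any TextIO works as the destination, e.g. an open file or StringIO.
        buffer = io.StringIO()
        await dataset.write_to('csv', buffer)
        print(buffer.getvalue())

    asyncio.run(main())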
Represents an append-only structured storage, ideal for tabular data similar to database tables.
The Dataset class is designed to store structured data, where each entry (row) maintains consistent attributes (columns) across the dataset. It operates in an append-only mode, allowing new records to be added but not modified or deleted. This makes it particularly useful for storing results from web crawling operations.
Data can be stored either locally or in the cloud, depending on the setup of the underlying storage client. By default, a MemoryStorageClient is used, but it can be changed to a different one.
By default, data is stored using the following path structure:
{CRAWLEE_STORAGE_DIR}: The root directory for all storage data, specified by the environment variable.
{DATASET_ID}: Specifies the dataset, either "default" or a custom dataset ID.
{INDEX}: Represents the zero-based index of the record within the dataset.
To open a dataset, use the open class method by specifying an id, name, or configuration. If none are provided, the default dataset for the current crawler run is used. Attempting to open a dataset by id that does not exist will raise an error; however, if accessed by name, the dataset will be created if it doesn't already exist.
Usage:
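A minimal sketch covering the typical lifecycle: open, append, read back, and drop:

    import asyncio

    from crawlee.storages import Dataset

    async def main() -> None:
        # Open (or create) a named dataset.
        dataset = await Dataset.open(name='my_dataset')

        # Append a single row; the dataset is append-only.
        await dataset.push_data({'foo': 'bar'})

        # Read everything back.
        data = await dataset.get_data()
        print(data.items)

        # Remove the dataset when it is no longer needed.
        await dataset.drop()

    asyncio.run(main())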