EarthscaleClient
Client for the Earthscale API.
Can be used to add, list, and retrieve datasets.
There are two ways to authenticate with Earthscale:
- Email and password environment variables: set the EARTHSCALE_EMAIL and EARTHSCALE_PASSWORD environment variables.
- OAuth: if the environment variables are not set, a browser window is opened to authenticate using OAuth. This only works when running in a graphical environment where a browser window can be opened.
Examples
Usage with environment variables:
import os
from earthscale import EarthscaleClient
os.environ["EARTHSCALE_EMAIL"] = "[email protected]"
os.environ["EARTHSCALE_PASSWORD"] = "service_password"
with EarthscaleClient() as client:
    client.list_datasets()

__init__
__init__(
    api_url: str | None = None,
    auth_url: str | None = None,
    anon_key: str | None = None,
    skip_version_check: bool = False,
    session: Session | None = None,
    use_proxy: bool = False
)

Initialize the Earthscale client.
Parameters
api_url
str | None
Custom URL of the Earthscale API. Defaults to https://api.earthscale.ai.
None
auth_url
str | None
Custom URL for the authentication service. Defaults to https://supabase.earthscale.ai.
None
anon_key
str | None
Custom anon key for the authentication service. If not set, the client will use the default anon key for https://supabase.earthscale.ai.
None
skip_version_check
bool
Whether to skip version compatibility check.
False
session
Session | None
Optional custom requests session to use.
None
use_proxy
bool
Whether to use the proxy server for authentication.
False
login
Login using service account credentials from environment variables, or OAuth authentication if no environment variables are set.
Raises
AuthenticationError – If authentication fails.
Return type
None
add_image_dataset
Add an image dataset. Images must be in a format that can be read by rasterio, e.g. GeoTIFFs.
This function supports creating a time dimension through the filename_date_pattern argument. This pattern uses strftime-style format codes to extract date information from the filenames.
Examples
For filenames like brasil_coverage_2011.tif:
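A plausible pattern for such filenames is sketched below (the exact pattern string is an assumption, not taken from the source); the same strftime codes can be checked locally with datetime.strptime:

```python
from datetime import datetime

# Hypothetical pattern for filenames like brasil_coverage_2011.tif;
# %Y captures the four-digit year for the time dimension.
filename_date_pattern = "brasil_coverage_%Y.tif"

# strptime applies the same strftime-style codes, so it can be used
# to verify the pattern against a sample filename:
parsed = datetime.strptime("brasil_coverage_2011.tif", filename_date_pattern)
print(parsed.year)  # 2011
```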
Parameters
name
str
The name of the dataset. Creates a new version of an existing dataset if the latest version of a dataset has the same name.
url
str | list[str]
The URL or list of URLs of the dataset.
labels
list[DatasetLabel] | None
Optional. User-defined labels as key-value pairs. Deprecated, use tags instead. If only labels are provided, a warning will be logged and the labels will be converted to tags. If both labels and tags are provided, tags will take precedence.
None
tags
dict[str, str] | None
Optional. User-defined tags as key-value pairs.
None
bands
list[str] | None
Optional list of bands to include.
None
groupby
str | None
[DEPRECATED] Will be ignored. If filename_date_pattern or filename_band_pattern are provided, those are used as group keys for a time dimension and for variables, respectively. Otherwise, all images are placed onto the same plane.
None
filename_date_pattern
str | None
Optional date pattern for filenames.
None
filename_band_pattern
list[dict[str, str]] | None
Optional band patterns for filenames. E.g. [{"pattern": "*_B[0-9]", "band": "band_1"}] would map all files matching the pattern *_B[0-9] to the band name band_1. Uses Unix filename pattern rules (fnmatch). If specified, every file must match some pattern, otherwise an error is raised.
None
visualization_optimization
Union[bool, Literal['auto']]
Whether to optimize for visualization. Use auto for automatic optimization based on size, True to force, or False to disable. Defaults to auto.
'auto'
pixel_info_optimizations
list[str] | None
List of dimensions to optimize for pixel info API. Defaults to None.
None
Return type
Returns
The dataset response.
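The fnmatch matching used by filename_band_pattern can be sketched locally. The filenames and patterns below are hypothetical; only the shape of the pattern list comes from the parameter description above:

```python
from fnmatch import fnmatch

# Hypothetical band patterns in the documented shape.
band_patterns = [
    {"pattern": "*_B0[234]*", "band": "rgb"},
    {"pattern": "*_B08*", "band": "nir"},
]

def resolve_band(filename: str) -> str:
    """Return the band name for the first matching pattern, mirroring
    the documented rule that every file must match some pattern."""
    for entry in band_patterns:
        if fnmatch(filename, entry["pattern"]):
            return entry["band"]
    raise ValueError(f"{filename} matches no band pattern")

print(resolve_band("scene_B04.tif"))  # rgb
print(resolve_band("scene_B08.tif"))  # nir
```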
add_zarr_dataset
Add a Zarr dataset.
When loading into xarray, this dataset type will automatically standardize the dimensions of the dataset to y, x and time if present. It will infer spatial dimensions, so if lon or longitude is present, it will be renamed to x.
This supports arbitrary multi-dimensional datasets, for example a dataset with time or level dimensions in addition to y, x.
Parameters
name
str
The name of the dataset. Creates a new version of an existing dataset if the latest version of a dataset has the same name.
url
str
The URL of the dataset. Can optionally contain a placeholder for the dimension name. If specified, this concatenates multiple Zarrs along either an existing or new dimension as named in the pattern. Example: gs://mybucket/my_dataset/{time}.zarr
labels
list[DatasetLabel] | None
Optional. User-defined labels as key-value pairs. Deprecated, use tags instead. If only labels are provided, a warning will be logged and the labels will be converted to tags. If both labels and tags are provided, tags will take precedence.
None
tags
dict[str, str] | None
Optional. User-defined tags as key-value pairs.
None
rename
dict[str, str] | None
Optional. Dictionary to rename dimensions.
None
visualization_optimization
Union[bool, Literal['auto']]
Whether to optimize for visualization. Use auto for automatic optimization based on size, True to force, or False to disable. Defaults to auto.
'auto'
pixel_info_optimizations
list[str] | None
List of dimensions to optimize for pixel info API. Defaults to None.
None
Return type
Returns
The dataset response.
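A minimal usage sketch based on the documented parameters; the bucket path and dimension names are made up, and the call is guarded so the snippet runs even without the earthscale package installed:

```python
# Hypothetical Zarr URL using the {time} placeholder described above:
# multiple Zarrs are concatenated along the named dimension.
url = "gs://mybucket/my_dataset/{time}.zarr"
rename = {"lat": "y", "lon": "x"}  # standardize spatial dimension names

try:
    from earthscale import EarthscaleClient

    with EarthscaleClient() as client:
        client.add_zarr_dataset(name="my-zarr-dataset", url=url, rename=rename)
except ImportError:
    pass  # requires the earthscale package and credentials
```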
add_vector_dataset
Add a vector dataset.
This function supports adding vector datasets from a variety of sources, including GeoJSON, GeoParquet, FlatGeobuf, and more.
Parameters
name
str
The name of the dataset. Creates a new version of an existing dataset if the latest version of a dataset has the same name.
url
str
The URL of the dataset.
labels
list[DatasetLabel] | None
Optional. User-defined labels as key-value pairs. Deprecated, use tags instead. If only labels are provided, a warning will be logged and the labels will be converted to tags. If both labels and tags are provided, tags will take precedence.
None
tags
dict[str, str] | None
Optional. User-defined tags as key-value pairs.
None
coordinate_precision
Literal['auto', '1250m', '600m', '300m', '150m', '80m', '40m', '20m', '10m', '5m', '2m', '1m', '50cm', '25cm', '15cm', '8cm']
Controls the approximate precision of coordinates in the optimized dataset. 'auto' tries to detect an appropriate precision; other values specify the desired coordinate precision explicitly. Defaults to 'auto'.
'auto'
Return type
Returns
The dataset response.
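A usage sketch for a GeoParquet source; the URL and tag values are hypothetical, the coordinate_precision value comes from the documented options, and the call is guarded so the snippet runs even without the earthscale package installed:

```python
# Hypothetical GeoParquet source.
url = "gs://mybucket/parcels.parquet"

try:
    from earthscale import EarthscaleClient

    with EarthscaleClient() as client:
        client.add_vector_dataset(
            name="parcels",
            url=url,
            tags={"source": "cadastre"},
            coordinate_precision="1m",  # one of the documented Literal values
        )
except ImportError:
    pass  # requires the earthscale package and credentials
```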
add_tile_server_dataset
Add a tile server dataset.
The URL must be a template string with placeholders for the x, y, and z coordinates, e.g. https://server.com/tiles/{z}/{x}/{y}.png.
Parameters
name
str
The name of the dataset.
url
str
The URL of the dataset.
labels
list[DatasetLabel] | None
Optional. User-defined labels as key-value pairs. Deprecated, use tags instead. If only labels are provided, a warning will be logged and the labels will be converted to tags. If both labels and tags are provided, tags will take precedence.
None
tags
dict[str, str] | None
Optional. User-defined tags as key-value pairs.
None
Return type
Returns
The dataset response.
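The placeholder substitution follows the standard XYZ tile scheme; a quick local check of how such a template expands (the server URL is the example from the description above, and the tile coordinates are arbitrary):

```python
# Template in the documented shape, with {z}/{x}/{y} placeholders.
template = "https://server.com/tiles/{z}/{x}/{y}.png"

# A tile server substitutes zoom/column/row for {z}/{x}/{y}:
tile_url = template.format(z=3, x=2, y=1)
print(tile_url)  # https://server.com/tiles/3/2/1.png
```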
list_datasets
List datasets with optional filtering.
Returns up to limit datasets in a single request. To iterate over all matching datasets, use iter_datasets() instead.
All filters are combined with AND logic. When multiple tags are given, only datasets matching all of them are returned.
Parameters
name
str | None
Case-insensitive substring search on dataset name. For example, name="Zarr" matches “Zarr 1 Band” and “HLS Zarr”.
None
tags
dict[str, str] | None
Filter by tags (AND logic). Only datasets that have all specified key-value pairs are returned. Example: {"source": "satellite", "resolution": "10m"}.
None
bbox
tuple[float, float, float, float] | None
Bounding box filter as (min_lon, min_lat, max_lon, max_lat) in EPSG:4326. Only datasets whose extent intersects the box are returned. For a point query, use the same value for min and max.
None
created_after
datetime | None
Only return datasets created after this timestamp.
None
created_before
datetime | None
Only return datasets created before this timestamp.
None
limit
int | None
Maximum number of results (default 100, max 1000).
None
Return type
list[ListDatasetResponse]
Returns
A list of datasets matching the filters.
Raises
ValueError – If limit is not between 1 and 1000.
Examples
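A sketch combining the documented filters (the tag values and bounding box are made up; the call is guarded so the snippet runs even without the earthscale package installed):

```python
bbox = (-10.0, 35.0, 5.0, 45.0)  # (min_lon, min_lat, max_lon, max_lat), EPSG:4326
tags = {"source": "satellite", "resolution": "10m"}

try:
    from earthscale import EarthscaleClient

    with EarthscaleClient() as client:
        # Filters combine with AND logic; at most 50 results are returned.
        datasets = client.list_datasets(name="Zarr", tags=tags, bbox=bbox, limit=50)
        for dataset in datasets:
            print(dataset.name)
except ImportError:
    pass  # requires the earthscale package and credentials
```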
iter_datasets
Iterate over all matching datasets with automatic pagination.
Yields datasets one at a time, transparently fetching successive pages using cursor-based pagination. This is the recommended way to retrieve large or unbounded result sets.
All filters are combined with AND logic. When multiple tags are given, only datasets matching all of them are returned.
Parameters
name
str | None
Case-insensitive substring search on dataset name. For example, name="Zarr" matches “Zarr 1 Band” and “HLS Zarr”.
None
tags
dict[str, str] | None
Filter by tags (AND logic). Only datasets that have all specified key-value pairs are returned. Example: {"source": "satellite", "resolution": "10m"}.
None
bbox
tuple[float, float, float, float] | None
Bounding box filter as (min_lon, min_lat, max_lon, max_lat) in EPSG:4326. Only datasets whose extent intersects the box are returned. For a point query, use the same value for min and max.
None
created_after
datetime | None
Only return datasets created after this timestamp.
None
created_before
datetime | None
Only return datasets created before this timestamp.
None
page_size
int
Number of datasets to fetch per request (max 1000).
100
Yields
ListDatasetResponse for each matching dataset.
Raises
ValueError – If page_size is not between 1 and 1000.
Examples
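A sketch of iterating over all datasets created after a cutoff date (the date and page size are arbitrary; the call is guarded so the snippet runs even without the earthscale package installed):

```python
from datetime import datetime, timezone

created_after = datetime(2024, 1, 1, tzinfo=timezone.utc)

try:
    from earthscale import EarthscaleClient

    with EarthscaleClient() as client:
        # Pagination is handled transparently; pages of 200 are
        # fetched as needed until the results are exhausted.
        for dataset in client.iter_datasets(
            created_after=created_after, page_size=200
        ):
            print(dataset.name)
except ImportError:
    pass  # requires the earthscale package and credentials
```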
get_dataset
Get the latest version of a dataset by dataset ID.
Parameters
Return type
Returns
The dataset response.
get_dataset_version_by_id
Get a dataset version by version ID.
Parameters
Return type
Returns
The dataset response.
delete_dataset
Soft-delete a dataset.
Deletion works by creating a new dataset version that marks the dataset as deleted. This hides the dataset from the catalog and listing APIs, but tiling requests for existing versions are still accepted. Optionally removes the dataset from all maps.
Both dataset IDs and dataset version IDs are accepted; either way, the entire dataset is deleted.
Parameters
dataset_id
str | UUID
The dataset ID or dataset version ID. Either can be used to identify the dataset — the entire dataset is always deleted, not just a single version.
remove_from_maps
bool
Whether to remove the dataset from all maps. Defaults to True.
True
Return type
Returns
Response containing the dataset ID and the ID of the newly created deletion marker version.
check_api_support
Check if the client’s API version is compatible with the server.
This method contacts the server to verify that the API version used by the client is supported by the server. It also provides information about the supported API versions and whether the current version is deprecated.
Return type
Returns
The version check response from the server.
Raises
VersionIncompatibleError – If the API version is not supported by the server.