|
--- |
|
license: cc-by-4.0 |
|
pretty_name: Space-based (JWST) 3D data cubes
|
tags: |
|
- astronomy |
|
- compression |
|
- images |
|
dataset_info: |
|
config_name: tiny |
|
features: |
|
- name: image |
|
dtype: |
|
array3_d: |
|
shape: |
|
- 2048 |
|
- 2048 |
|
dtype: uint8 |
|
- name: ra |
|
dtype: float64 |
|
- name: dec |
|
dtype: float64 |
|
- name: pixscale |
|
dtype: float64 |
|
- name: ntimes |
|
dtype: int64 |
|
- name: image_id |
|
dtype: string |
|
splits: |
|
- name: train |
|
num_bytes: 100761802 |
|
num_examples: 2 |
|
- name: test |
|
num_bytes: 75571313 |
|
num_examples: 1 |
|
download_size: 201496920 |
|
dataset_size: 176333115 |
|
--- |
|
|
|
# SBI-16-3D Dataset |
|
|
|
SBI-16-3D is a dataset that is part of the AstroCompress project. It contains data assembled from the James Webb Space Telescope (JWST). Each example is a 3D data cube, a time series of 2048 x 2048 frames stored as 16-bit unsigned integers, together with the scalar metadata fields `ra`, `dec`, `pixscale`, `ntimes`, and `image_id`.
|
|
|
# Usage |
|
|
|
You first need to install the `datasets` and `astropy` packages: |
|
|
|
```bash |
|
pip install datasets astropy |
|
``` |
|
|
|
There are two configurations: `tiny` and `full`, each with `train` and `test` splits. The `tiny` configuration has 2 3D images in the `train` split and 1 in the `test` split. The `full` configuration contains all the images in the `data/` directory.
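As a quick orientation, the configuration name is passed as the second argument to `load_dataset`; a minimal sketch that just prints the split sizes (it uses the remote repo id as in the sections below; substitute the local loading-script path from the next section if you prefer local use):

```python
from datasets import load_dataset

# choose "tiny" or "full" as the configuration name
dataset = load_dataset("AstroCompress/SBI-16-3D", "tiny", writer_batch_size=1, trust_remote_code=True)

# {'train': 2, 'test': 1} for the tiny configuration
print({split: dataset[split].num_rows for split in dataset})
```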
|
|
|
|
|
## Local Use (RECOMMENDED) |
|
|
|
You can clone this repo and use it directly without connecting to Hugging Face:
|
|
|
```bash |
|
git clone https://huggingface.co/datasets/AstroCompress/SBI-16-3D |
|
``` |
|
|
|
Then `cd SBI-16-3D` and pull the large files tracked by Git LFS:

```bash

git lfs pull

```

Now start Python and load the dataset locally:
|
|
|
```python |
|
from datasets import load_dataset |
|
import numpy |
|
# load the "tiny" configuration via the local loading script, reading files from ./data/
dataset = load_dataset("./SBI-16-3D.py", "tiny", data_dir="./data/", writer_batch_size=1, trust_remote_code=True)

# return the image arrays as NumPy uint16
ds = dataset.with_format("np", dtype=numpy.uint16)
|
``` |
|
|
|
Now you should be able to use the `ds` variable like: |
|
|
|
```python |
|
ds["test"][0]["image"].shape # -> (5, 2048, 2048) |
|
``` |
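Each example also carries the scalar metadata columns from the dataset card (`ra`, `dec`, `pixscale`, `ntimes`, `image_id`). A small sketch of reading them alongside the image, assuming the `dataset` and `ds` variables created above:

```python
# unformatted access keeps the original metadata dtypes
meta = dataset["test"][0]
print(meta["image_id"], meta["ra"], meta["dec"], meta["pixscale"], meta["ntimes"])

# the image itself as a uint16 NumPy cube of shape (ntimes, 2048, 2048)
cube = ds["test"][0]["image"]
print(cube.dtype, cube.shape)
```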
|
|
|
Note that for the `full` dataset it will take a long time to download and convert the images into the local cache. Afterward, usage should be quick, as the files are memory-mapped from disk.
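Assuming the raw files under `data/` are FITS cubes (which is why `astropy` is installed above), you can also inspect one directly with `astropy.io.fits`; a sketch with a hypothetical file name:

```python
from astropy.io import fits

# hypothetical path: substitute any file fetched into data/ by git lfs pull
with fits.open("data/<some_cube>.fits") as hdul:
    hdul.info()  # list the HDUs and their dimensions
```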
|
|
|
|
|
|
|
## Use from Hugging Face Directly
|
|
|
This method is likely only practical for the `tiny` configuration of the dataset.
|
|
|
To use this data directly from Hugging Face, you'll want to log in on the command line before starting Python:
|
|
|
```bash |
|
huggingface-cli login |
|
``` |
|
|
|
or |
|
|
|
```python

import huggingface_hub

huggingface_hub.login(token=token)  # token: your Hugging Face access token

```
|
|
|
Then, in your Python script:
|
|
|
```python |
|
from datasets import load_dataset |
|
import numpy |
|
dataset = load_dataset("AstroCompress/SBI-16-3D", "tiny", writer_batch_size=1, trust_remote_code=True) |
|
ds = dataset.with_format("np", columns=["image"], dtype=numpy.uint16) |
|
|
|
# or torch |
|
import torch |
|
dst = dataset.with_format("torch", columns=["image"], dtype=torch.uint16) |
|
|
|
# or pandas |
|
dsp = dataset.with_format("pandas", columns=["image"], dtype=numpy.uint16) |
|
``` |
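Note that `torch.uint16` tensors support only a limited set of operations, so for model input you will typically cast to a float dtype first. A minimal sketch, assuming the `dst` variable created above:

```python
import torch

# cast one training cube to float32 before feeding it to a model
cube = dst["train"][0]["image"].to(torch.float32)
print(cube.shape, cube.dtype)  # e.g. torch.Size([ntimes, 2048, 2048]) torch.float32
```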
|
|
|
## Utils scripts |
|
Note that the scripts in `utils`, such as `eval_baselines.py`, must be run from the parent directory of `utils`, i.e. `python utils/eval_baselines.py`.
|
|