---
license: cc0-1.0
tags:
- chess
- stockfish
pretty_name: Lichess Games With Stockfish Analysis
---
# Condensed Lichess Database
This dataset is a condensed version of the Lichess database.
It only includes games for which Stockfish evaluations were available.
Currently, the dataset contains the entire year 2023, which consists of >100M games and >2B positions.
Games are stored in a format that is much faster to process than the original PGN data.
Requirements:
```
pip install zstandard python-chess datasets
```
# Quick Guide
In the following, I explain the data format and how to use the dataset. At the end, you'll find a complete example script.
### 1. Loading The Dataset
You can stream the data without storing it locally (~100 GB currently). The dataset requires `trust_remote_code=True` to execute the [custom data loading script](https://huggingface.co/datasets/mauricett/lichess_sf/blob/main/lichess_sf.py), which is necessary to decompress the files.
See [HuggingFace's documentation](https://huggingface.co/docs/datasets/main/en/load_hub#remote-code) if you're unsure.
```py
from datasets import load_dataset

# Load dataset.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)
```
### 2. Data Format
After loading the dataset, we can take a first peek at a sample. But it's not very pretty yet! We will try again at the very end.
```py
example = next(iter(dataset))
print(example)
```
A single sample from the dataset contains one complete chess game as a dictionary. The dictionary keys are as follows:
1. `example['fens']` --- A list of FENs in a slightly stripped format, missing the halfmove clock and fullmove number (see [definitions on wiki](https://en.wikipedia.org/wiki/Forsyth%E2%80%93Edwards_Notation#Definition)). The starting position is excluded (no move has been made yet).
2. `example['moves']` --- A list of moves in [UCI format](https://en.wikipedia.org/wiki/Universal_Chess_Interface). `example['moves'][42]` is the move that led to position `example['fens'][42]`, etc.
3. `example['scores']` --- A list of Stockfish evaluations (in centipawns) from the perspective of the player who is next to move. If `example['fens'][42]` is black's turn, `example['scores'][42]` will be from black's perspective. If the game ended with a terminal condition, the last element of the list is a string 'C' (checkmate), 'S' (stalemate) or 'I' (insufficient material). Games with other outcome conditions have been excluded.
4. `example['WhiteElo']`, `example['BlackElo']` --- The players' Elo ratings.
Everything but Elos is stored as strings.
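To make the format concrete, here is a minimal sketch that replays a raw sample with `python-chess` and prints the stored fields next to the reconstructed positions. It assumes the stripped FENs correspond to the first four space-separated fields of a full FEN; verify this on a few samples before relying on it.
```py
import chess
from datasets import load_dataset

dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)
example = next(iter(dataset))

board = chess.Board()
for ply, (uci_move, stripped_fen) in enumerate(zip(example['moves'], example['fens'])):
    board.push_uci(uci_move)   # moves[i] is the move that led to fens[i]
    # Drop the halfmove clock and fullmove number to mirror the stripped format.
    # (Assumption: the remaining fields follow the same conventions as python-chess.)
    reconstructed = " ".join(board.fen().split()[:4])
    if ply < 3:
        print(stripped_fen)
        print(reconstructed)
        print(example['scores'][ply])   # centipawns from the side to move, or 'C'/'S'/'I' at the end

print(example['WhiteElo'], example['BlackElo'])
```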
### 3. Define Functions for Preprocessing
To use the data, you will need to define your own functions to transform it into your desired format.
For this guide, let's define a few mock functions so I can show you how to use them.
```py
import random

# A mock tokenizer and functions for demonstration.
class Tokenizer:
    def __init__(self):
        pass
    def __call__(self, example):
        return example

# Transform Stockfish score and terminal outcomes.
def score_fn(score):
    return score

def preprocess(example, tokenizer, score_fn):
    # Get number of moves made in the game.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)
    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]
    # Transform data into the format of your choice.
    example['fens'] = tokenizer(fen)
    example['moves'] = tokenizer(move)
    example['scores'] = score_fn(score)
    return example

tokenizer = Tokenizer()
```
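To give a rough idea of what a real `score_fn` could look like, here is a hedged sketch that maps the terminal markers to fixed values and converts centipawn strings to integers. The constants are arbitrary choices, and the exact string format of non-terminal scores (e.g. how mate-in-N evaluations are encoded) is something you should confirm on a few samples first.
```py
# Hypothetical score transform -- not part of the dataset, just an illustration.
# Scores are given from the perspective of the side to move, so in a final
# checkmate position ('C') that side is the mated player.
TERMINAL_VALUES = {'C': -10_000,   # arbitrary "mated" value
                   'S': 0,         # stalemate
                   'I': 0}         # insufficient material

def score_fn(score):
    if score in TERMINAL_VALUES:
        return TERMINAL_VALUES[score]
    return int(score)   # assumes plain integer centipawn strings; verify on real samples
```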
### 4. Shuffle And Preprocess
Use `dataset.shuffle()` to properly shuffle the dataset and `dataset.map()` to apply our preprocessing. Since the dataset is streamed, both are applied lazily, and individual samples are processed in parallel if you use multiprocessing (e.g. a PyTorch DataLoader with multiple workers).
```py
# Shuffle and apply your own preprocessing.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'tokenizer': tokenizer,
                                             'score_fn': score_fn})
```
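Because the dataset is streamed, `shuffle()` works with a shuffle buffer rather than a global permutation. If you want more thorough mixing, you can pass a larger `buffer_size` at the cost of memory:
```py
# Larger shuffle buffer for the streamed dataset; 10_000 is an arbitrary choice,
# trade it off against the memory you have available.
dataset = dataset.shuffle(seed=42, buffer_size=10_000)
```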
# COMPLETE EXAMPLE
You can try pasting this into Colab and it should work fine. Have fun!
```py
import random
from datasets import load_dataset
from torch.utils.data import DataLoader
# A mock tokenizer and functions for demonstration.
class Tokenizer:
    def __init__(self):
        pass
    def __call__(self, example):
        return example

def score_fn(score):
    # Transform Stockfish score and terminal outcomes.
    return score

def preprocess(example, tokenizer, score_fn):
    # Get number of moves made in the game.
    max_ply = len(example['moves'])
    pick_random_move = random.randint(0, max_ply - 1)
    # Get the FEN, move and score for our random choice.
    fen = example['fens'][pick_random_move]
    move = example['moves'][pick_random_move]
    score = example['scores'][pick_random_move]
    # Transform data into the format of your choice.
    example['fens'] = tokenizer(fen)
    example['moves'] = tokenizer(move)
    example['scores'] = score_fn(score)
    return example

tokenizer = Tokenizer()

# Load dataset.
dataset = load_dataset(path="mauricett/lichess_sf",
                       split="train",
                       streaming=True,
                       trust_remote_code=True)

# Shuffle and apply your own preprocessing.
dataset = dataset.shuffle(seed=42)
dataset = dataset.map(preprocess, fn_kwargs={'tokenizer': tokenizer,
                                             'score_fn': score_fn})

# PyTorch dataloader
dataloader = DataLoader(dataset, batch_size=1, num_workers=1)

for batch in dataloader:
    # do stuff
    print(batch)
    break
```
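A final practical note: with a batch size larger than 1, PyTorch's default collation should return the string fields as Python lists and the integer Elos as tensors. This is an assumption worth checking against your own run, e.g.:
```py
# Peek at the collated batch structure (assumes PyTorch's default collate behaviour).
dataloader = DataLoader(dataset, batch_size=4, num_workers=1)
batch = next(iter(dataloader))
print(type(batch['fens']), type(batch['WhiteElo']))
```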