---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- chemistry
size_categories:
- 1K<n<10K
pretty_name: Blood-Brain Barrier Database
dataset_summary: >-
  A curation of 50 published resources reporting categorical and numeric
  measurements of blood-brain barrier (BBB) penetration.
citation: >-
  Meng, F., Xi, Y., Huang, J., & Ayers, P. W. (2021). A curated diverse molecular
  database of blood-brain barrier permeability with chemical descriptors.
  Scientific Data, 8, 289. https://doi.org/10.1038/s41597-021-01069-5

config_names:
- B3DB_classification
- B3DB_classification_extended
- B3DB_regression
- B3DB_regression_extended
configs:
- config_name: B3DB_classification
  data_files:
  - split: test
    path: B3DB_classification/test.csv
  - split: train
    path: B3DB_classification/train.csv
- config_name: B3DB_classification_extended
  data_files:
  - split: test
    path: B3DB_classification_extended/test.csv
  - split: train
    path: B3DB_classification_extended/train.csv
- config_name: B3DB_regression
  data_files:
  - split: test
    path: B3DB_regression/test.csv
  - split: train
    path: B3DB_regression/train.csv
- config_name: B3DB_regression_extended
  data_files:
  - split: test
    path: B3DB_regression_extended/test.csv
  - split: train
    path: B3DB_regression_extended/train.csv
dataset_info:
- config_name: B3DB_regression_extended
  features:
    - name: "NO."
      dtype: int64
    - name: "compound_name"
      dtype: string
    - name: "IUPAC_name"
      dtype: string
    - name: "SMILES"
      dtype: string
    - name: "CID"
      dtype: string
    - name: "logBB"
      dtype: float64
    - name: "Inchi"
      dtype: string
    - name: "reference"
      dtype: string
    - name: "smiles_result"
      dtype: string
    - name: "group"
      dtype: string
    - name: "comments"
      dtype: float64
    - name: "ClusterNo"
      dtype: int64
    - name: "MolCount"
      dtype: int64
  splits:
    - name: train
      num_bytes: 82808
      num_examples: 795
    - name: test
      num_bytes: 27480
      num_examples: 263
---

# Blood-Brain Barrier Database

A curated collection of categorical and numeric measurements of blood-brain barrier (BBB) penetration, compiled from 50 published resources.

## Quickstart Usage

### Load a dataset in Python
Each subset can be loaded into Python using the Hugging Face [datasets](https://huggingface.co/docs/datasets/index) library.
First, from the command line, install the `datasets` library

     $ pip install datasets

then, from within Python, load the `datasets` library

    >>> import datasets

and load one of the `B3DB` datasets, e.g.,

    >>> B3DB_classification = datasets.load_dataset("maomlab/B3DB", name = "B3DB_classification")

and inspect the loaded dataset

    >>> B3DB_classification
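
To work with a split directly, it can also be converted to a pandas DataFrame via the `datasets` library's `Dataset.to_pandas()` method. The snippet below is a small illustrative sketch; it assumes the classification subset, whose `BBB+/BBB-` label column is also used in the training example further down.

    >>> train_df = B3DB_classification["train"].to_pandas()
    >>> train_df["BBB+/BBB-"].value_counts()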

### Use a dataset to train a model
One way to use the dataset is through the [MolFlux](https://exscientia.github.io/molflux/) package developed by Exscientia.
First, from the command line, install the `molflux` package with `catboost` and `rdkit` support

    pip install 'molflux[catboost,rdkit]'

then load, featurise, fit, and evaluate a CatBoost model on the predefined train/test splits

    from datasets import load_dataset
    from molflux.datasets import featurise_dataset
    from molflux.features import load_from_dicts as load_representations_from_dicts
    from molflux.modelzoo import load_from_dict as load_model_from_dict
    from molflux.metrics import load_suite

    # Load the classification subset with its predefined train/test splits
    split_dataset = load_dataset('maomlab/B3DB', name = 'B3DB_classification')

    # Featurise the SMILES column with Morgan and MACCS fingerprints
    split_featurised_dataset = featurise_dataset(
        split_dataset,
        column = "SMILES",
        representations = load_representations_from_dicts([{"name": "morgan"}, {"name": "maccs_rdkit"}]))

    # Define a CatBoost classifier that predicts the BBB+/BBB- label from the fingerprints
    model = load_model_from_dict({
        "name": "cat_boost_classifier",
        "config": {
            "x_features": ['SMILES::morgan', 'SMILES::maccs_rdkit'],
            "y_features": ['BBB+/BBB-']}})

    # Train on the train split and predict on the held-out test split
    model.train(split_featurised_dataset["train"])
    preds = model.predict(split_featurised_dataset["test"])

    # Evaluate the predictions with the standard classification metrics suite
    classification_suite = load_suite("classification")

    scores = classification_suite.compute(
        references=split_featurised_dataset["test"]['BBB+/BBB-'],
        predictions=preds["cat_boost_classifier::BBB+/BBB-"])
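
The `scores` object returned by `compute` should be a plain dictionary mapping metric names to values, so a quick way to review the results is to print each entry. A minimal sketch (the exact metric names depend on what the classification suite computes in your MolFlux version):

    # Print every metric computed by the classification suite
    for metric_name, metric_value in scores.items():
        print(f"{metric_name}: {metric_value}")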

## About the B3DB

### Features of B3DB
Each subset provides the compound `SMILES` plus the measurement: the classification subsets carry a categorical `BBB+/BBB-` label, while the regression subsets carry a numeric `logBB` value. The extended subsets additionally include identifier and provenance columns such as `compound_name`, `IUPAC_name`, `CID`, `Inchi`, `reference`, `group`, `ClusterNo`, and `MolCount` (see the `dataset_info` metadata above for the full schema of the extended regression subset).

### Data splits
The original B3DB dataset does not define splits, so here we have used the `Realistic Split` method 
described in [(Martin et al., 2018)](https://doi.org/10.1021/acs.jcim.7b00166).
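
For illustration only, the sketch below shows one way to build a cluster-aware hold-out split in the spirit of the Realistic Split, assuming cluster assignments are already available (for example, the `ClusterNo` column in the extended subsets). It is a simplified stand-in, not the exact procedure of Martin et al. nor the one used to produce the splits shipped here.

    import numpy as np
    import pandas as pd

    def cluster_holdout_split(df: pd.DataFrame, cluster_col: str = "ClusterNo",
                              test_fraction: float = 0.25, seed: int = 0):
        """Hold out whole clusters until roughly `test_fraction` of rows land in the test set."""
        rng = np.random.default_rng(seed)
        clusters = df[cluster_col].unique()
        rng.shuffle(clusters)
        target = test_fraction * len(df)
        test_clusters, n_test = [], 0
        for cluster in clusters:
            if n_test >= target:
                break
            test_clusters.append(cluster)
            n_test += int((df[cluster_col] == cluster).sum())
        test_mask = df[cluster_col].isin(test_clusters)
        return df[~test_mask], df[test_mask]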

### Citation
Meng, F., Xi, Y., Huang, J., & Ayers, P. W. (2021). A curated diverse molecular database of blood-brain barrier permeability with chemical descriptors. *Scientific Data*, 8, 289. https://doi.org/10.1038/s41597-021-01069-5