License discussion
Congrats on the release, but releasing datasets under non-commercial terms in 2025 is not great. The big AI companies will find a way to gobble this up while everyone else will not touch it, so I don't think this is an effective way to protect the data.
Given this is a pre-release, things are bound to change, which is why I opened this discussion to say this: I would highly recommend putting the least restrictive license on this release. Perhaps an MIT or Apache 2.0 license.
An added voice: if I understand correctly, the source material is public data that you have put a lot of work into OCR'ing and correcting, making it a highly valuable resource! But if the data were initially public domain, what are the reasons for restricting its OCR'd output? Is it to protect the dataset from being immediately used by companies in commercial LLMs? They don't care and will eat this data up, just like any other data they can find. In practice I fear this restriction will only hurt researchers who wish to open-source/open-science their work and will be unable to share their end results under permissive licenses.
Again, to emphasize, this is Herculean work and it is impressive that you are making it available!
Hi all, thanks for raising your voices here and doing so in such kind ways. Rest assured that, in its final form, this dataset will be released under a permissive license. While I understand some degree of skepticism about whether actors in the ecosystem will engage respectfully, I don’t see it as a foregone conclusion.
Our thesis is that, by releasing data (and specifically impactful data), we have a chance to open up conversations across the ecosystem (including this one), begin setting norms that fundamentally change the dynamics within it, and ultimately help ensure the public interest has a strong foothold in AI. How can we iterate toward terms and norms that advantage public-interest efforts, and those who support them, while making as much knowledge as accessible as possible? This dataset and its Early-Access terms are a first step in answering that question by exploring whether opening up the data sooner, under an initial set of terms, can build new types of community engagement on a path to transitioning the data to more traditional terms and licenses.
And if we can work with the community to establish clarity and trust around those terms, including where they’ll land in our final releases, it’s my hope that everyone will be able to engage, with confidence, with every dataset we release. I’ll be writing more in a forthcoming blog post, but wanted to share my immediate thoughts here.
It’s my hope that everyone will be able to engage, with confidence, with every dataset we release.
Love to hear it. Such a good release‼️
Seconding @BramVanroy, it will be interesting for the ecosystem to get feedback on how commercial players interact with the release and whether the initial license choice primarily affects them or smaller open-source reusers.
But if the data were initially public domain, what are the reasons for restricting its OCR'd output?
While the books themselves are in the public domain, the digital copies that libraries participating in Google Books' digitization programme receive from Google are typically placed under commercial usage restrictions.
Google digitizes these libraries' collections "free of charge". The free lunch comes at a cost, however: the library needs to agree to certain restrictions on the use of its digital copy. Harvard most likely signed one of these agreements, as did most other organizations participating in the Google Books project. The agreements tend to be confidential and the negotiations are conducted under strict NDAs, which is likely why @leppert is unable to offer much in the way of an explanation in this discussion.
An actual agreement between Google and the British Library, published after a freedom of information request, brings some clarity to the possible reasons for the non-commercial license. I suspect most agreements between libraries and Google follow this template, which unfortunately restricts the libraries from releasing their digital copies as datasets. Take a look in particular at sections 4.7 and 4.8 of the contract. An excerpt:
[...] Library shall also use best endeavours to prevent third parties from (a) downloading or otherwise obtaining any portion of the Library Digital Copy for commercial purposes, (b) redistributing any portions of the Library Digital Copy, or (c) automated and systematic downloading from its website image files from the Library Digital Copy. Library shall develop methods and systems for ensuring that substantial portions of the Library Digital Copy are not downloaded from the services offered on the Library's website or otherwise disseminated to the public at large. [...]
The non-commercial clause has been consistently applied. Here's a recent 2025 press release from Bamberg State Library where public domain works digitized "free of charge" were released under a non-commercial license. Libraries around the world are agreeing to these terms because national governments have shown virtually no interest in digitizing archives. The choice is essentially between Google Books and remaining analog.
I hope the Institutional Data Initiative ultimately manages to release this dataset under a permissive license. It's a great initiative, regardless of what one might think of this entire backstory. I just wanted to provide some context as someone working in a library that has not had its collections digitized by Google Books.
I would not assume that Harvard has any external restrictions (from Google or from OCR providers) on its licensing of the scans or their OCR.
See above:
Rest assured that, in its final form, this dataset will be released under a permissive license.
@metasj Essentially every library that was part of Google Books has non-commercial usage restrictions on the resulting digital copies. This is not a coincidence.
See:
Austrian National Library: https://viewer.onb.ac.at/106E34D9/
Bavarian State Library: https://www.europeana.eu/en/item/368/item_RAQIGR4ZP3LDIY6T6VYJIZKMLYGCOHBV
Bodleian Libraries: https://www.europeana.eu/en/item/9200143/BibliographicResource_2000069335413
Complutense University of Madrid: https://www.europeana.eu/en/item/9200110/BibliographicResource_1000126612477
Since Harvard is confident this dataset will eventually be released under a permissive license, and seems to hint at expanding this initiative to other libraries' collections, it's likely they are renegotiating the terms of these agreements with Google.
That would be interesting if true; however my understanding is rather different:
- Google has no say in how people access this collection. Harvard also said no to the more restrictive agreements Google offered later in the Google Books project. https://www.wired.com/2008/11/harvard-says-no/
- These scans are CC0, as are their metadata. https://library.harvard.edu/services-tools/harvard-library-public-domain-corpus
- Bulk access to the corpus (getting access to a link that lets you download it, say) is gated by a sanity check (they manually approve each request) and a Terms of Use that restricts reuse to 'non-profit, educational, and research uses' [such as training LLMs]. These limitations were set by Harvard Library, who can unilaterally relax them in the future. https://library.harvard.edu/services-tools/harvard-library-public-domain-corpus