🌁#86: Four Freedoms of truly open AI

Community Article Published February 3, 2025

What are they? Defining the future.


This Week in Turing Post:

  • Wednesday, AI 101, Method: we will talk about Test-Time Compute
  • Friday, Unicorn series: we are profiling ElevenLabs

🔳 Turing Post is on 🤗 Hugging Face as a resident -> click to follow!


The main topic – AI is not software anymore. What does open-source mean for AI, and what are open AI’s Four Freedoms?

By open AI here, I mean really open AI – not the company. But it's actually Sam Altman who got me thinking about freedoms of open AI.

Look at this picture. Starting from the right: Paul Graham, then Sam Altman, and next to him, in a grey t-shirt, smiling – Aaron Swartz.


Image Credit: Flaming Hydra

Aaron Swartz was famous for fighting for freedom of information. He helped develop the RSS standard and worked with Creative Commons, all fueled by the belief that information – especially taxpayer-funded research – should belong to everyone. In 2010, convinced that academic paywalls stifled progress, Swartz downloaded millions of scholarly papers from JSTOR, accessing them through MIT’s network. Though his motivation was rooted in the principle of free access, federal authorities hit him with charges that threatened decades in prison.

He couldn't bear it – and in 2013, he took his own life.

In 2015, two years later, Sam Altman co-founded the company with the beautiful name OpenAI. At first, OpenAI looked like a natural heir to Aaron’s crusade for free information. We all know what happened later: proprietary code, restricted access, paywalls, a for-profit structure, and so on. Recently, Sam Altman admitted he might be on the wrong side of history:


Image Credit: AMA at Reddit (which btw Aaron Swartz co-founded)

But what we see now is still the same: him (or OpenAI in general) accusing rivals of doing exactly what his company has done so many times before.

In January 2025, DeepSeek blew everyone’s mind by open-sourcing (“open-weight,” to be exact) their best model, R1, which achieves performance comparable to OpenAI’s o1 across math, code, and reasoning tasks.

And what did OpenAI do? It alleged that the Chinese AI startup DeepSeek used a technique called "distillation" to replicate its proprietary models without authorization. This crackdown raises questions about double standards in AI training. Critics argue that OpenAI itself trained on vast amounts of web data without permission – yet now seeks to block competitors from using its outputs.
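For readers unfamiliar with the technique at the center of the accusation: distillation trains a smaller “student” model to imitate a larger “teacher” by matching the teacher’s full output distribution rather than just its final answers. The sketch below is a minimal, dependency-free illustration of the classic distillation loss (softened softmax plus KL divergence); it is not a claim about how DeepSeek or OpenAI actually train their models.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to probabilities, optionally softened by a temperature."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the softened teacher and student distributions.

    Minimizing this trains the student to mimic the teacher's whole
    probability distribution ("dark knowledge"), not just its top answer.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# A student that matches the teacher exactly incurs zero loss.
teacher = [3.0, 1.0, 0.2]
print(distillation_loss(teacher, teacher))           # 0.0
print(distillation_loss(teacher, [0.1, 2.5, 0.3]))   # positive: distributions differ
```

In practice this loss is computed over a model’s vocabulary and combined with a standard cross-entropy term, but the principle is the same: the student learns from the teacher’s outputs, which is exactly why training on another lab’s API responses blurs into distillation.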

And then we have a manifesto from Dario Amodei at Anthropic, basically arguing that China must be locked out of advanced AI by tightening U.S. chip export controls.

"It is mine, I tell you. My own. My precious. Yes, my precious." Bilbo Baggins, under the Ring's growing influence, in "The Fellowship of the Ring"

So, it got me thinking: we talk a lot about open-source in AI, but let’s be honest – it’s not the same as for software. In software, open-source wasn’t just about sharing code; it was a philosophy, anchored in the Four Freedoms that redefined how technology is built, owned, and controlled. Real freedom – the kind that shapes entire eras – has never been accidental. It needs a foundation, a structure that makes it real.

AI is at that inflection point now. It doesn’t just need more open models or better licensing – it needs its own philosophy, a set of core freedoms that define how it’s created, shared, and governed. We’re standing at the moment where AI’s version of the Four Freedoms has to be written. The questions are:

What are the Freedoms of Open AI? And how do we establish them before power is consolidated beyond reach, before the “growing influence of the Ring” has ruined its bearers?

I’m here to start the conversation, so I suggest six freedoms. Choose the four you think are most important and leave your picks as a comment below:

  • Freedom to Access – Open AI should be available to all, ensuring that research, models, and datasets remain accessible to foster innovation and prevent monopolization.
  • Freedom to Understand – AI systems should be transparent and interpretable, allowing users to comprehend how decisions are made and avoid black-box dependency.
  • Freedom to Forget – AI should not be a permanent recorder of human actions. You should have the ability to erase, unlearn, or discard information when necessary – whether for privacy, ethical concerns, or simply to prevent stagnation in learning.
  • Freedom to Dissolve – AI should integrate seamlessly into human life without dominating or replacing it.
  • Freedom from Overfitting – AI should not be trapped in static world models that attempt to preempt all possible inputs. Instead, it should maintain adaptability, learning from interactions rather than relying on exhaustive pretraining that inevitably loses relevance.
  • Freedom from Excess – AI should not be over-trained, over-aligned, or over-regulated to the point that it loses its effectiveness.

Historical background: Four Freedoms

In 1941, when the world was basically on fire, Franklin D. Roosevelt came up with his Four Freedoms:


  • Freedom of Speech – The right to express opinions without government restraint.
  • Freedom of Worship – The right to practice any religion (or none) without persecution.
  • Freedom from Want – Economic security and a decent standard of living for all.
  • Freedom from Fear – A world free from war and oppression.

Sure, they were American values, but FDR made it clear these rights belonged to everyone, everywhere. For him, it was about protecting the core of human dignity.

His Four Freedoms were baked into his New Deal mindset, which was all about giving citizens both economic security and personal freedoms. He was also staring down Nazi Germany and Imperial Japan – who were all about killing free thought and speech. Roosevelt’s rallying cry shaped the future founding of the United Nations and the Universal Declaration of Human Rights, turning his Four Freedoms into a worldwide mission statement.

Fast-forward 45 years, and the fight had moved to the world of software. By the mid-80s, technology was being locked up tighter than a billionaire’s wallet. Corporations controlled how programs were used, shared, or even peeked at. Richard Stallman, a programmer from MIT, didn’t like the new trend of “pay up or shut up,” so in 1985, he launched the Free Software Foundation. As an homage to Roosevelt, Stallman laid out four freedoms of his own:

  • Freedom 0 – The freedom to run the program for any purpose.
  • Freedom 1 – The freedom to study how the program works and modify it.
  • Freedom 2 – The freedom to distribute copies of the program.
  • Freedom 3 – The freedom to distribute modified versions of the program.

Stallman’s playbook lit the fuse for the open-source movement, birthing Linux and powering everything from servers to smartphones – all while kicking off endless debates about digital rights and online autonomy.

Now, here we are in the age of AI. Big players want to hold the keys to the new digital kingdom. This setup is rife with the potential for fresh flavors of oppression – algorithms we can’t question, never-ending surveillance, AI bias reinforcing old injustices.

If Roosevelt articulated freedoms to guide a post-war world, and Stallman did the same for the digital age, I suggest it’s time to define the Freedoms of AI.


This text is very important to me. Please vote, comment, forward it to your colleagues, and share it on social networks.


We thank AI practitioner Will Schenk for the inspiring conversation about the four freedoms. It was his recommendation when I started talking about open source in AI.


Curated Collections



Do you like Turing Post? –> Click 'Follow'! And subscribe to receive it straight into your inbox -> https://www.turingpost.com/subscribe


News from The Usual Suspects ©

  • AI Safety Report: Progress, Peril, and the Race to Keep Up. The first International AI Safety Report – led by Yoshua Bengio and 96 experts – warns that AI is evolving faster than our ability to control it. Cyber threats, bias, and labor disruption loom large, leaving policymakers with a tough call: regulate now or risk chaos later. The only hope? AI is still in human hands.

  • ElevenLabs Finds Its Voice – and $180M to Amplify It. ElevenLabs just secured a $180M Series C, with a16z and ICONIQ leading the charge. Their AI voice tech is making waves, and with new backers like NEA, WiL, and Deutsche Telekom, they’re gearing up for global expansion. In an AI-powered world, staying silent isn’t an option.

  • OpenAI’s Deep Research and Partnership with US National Labs

    1. They are rolling out Deep Research, an AI agent for in-depth research – the same name as a tool Google already offers.
    2. Partnering with U.S. National Labs, OpenAI is pushing AI into clean energy, cybersecurity, and nuclear security – with serious computing power behind it.
  • Madrona Marks 30 Years With $770M Raise. Seattle-based Madrona celebrates three decades with a fresh $770M to back visionary founders and applied AI. Not a bad birthday gift.

We are reading/listening

  • Interview with Kevin Xu by China Talk – great insights on China’s open-source AI. DeepSeek embodies rapid, academic-style innovation, driven by “kai yuan qinghuai” (open-source zeal). Engineers aim to match Western tech, but tensions between transparency and national interests could shape policy.
  • Coming in April 2025, but already available online – an outstanding book on ML in production by CMU. Check it out here: ML in Production.
  • AI is so hot that Lex Fridman just interviewed our favorite Nathan Lambert from Interconnects and Dylan Patel from SemiAnalysis for five hours straight.

Top models to pay attention to

  • Qwen2.5-Max from Alibaba trains a Mixture-of-Experts model on 20T tokens, excelling in reasoning and competitive benchmarks. Future improvements target reinforcement learning and intelligence expansion.
  • The Baichuan-Omni-1.5 technical report details their omni-modal model, which integrates text, audio, and vision, outperforming major multimodal benchmarks with a real-time bilingual speech system.
  • OpenAI o3-mini is a cost-efficient reasoning model optimized for STEM tasks. It is very impressive and – for my personal use – outperforms Gemini Deep Research.
  • Mistral Small 3 delivers a 24B-parameter model focused on low-latency inference, rivaling larger models while running 3× faster and supporting local deployment.
  • Ai2’s Tülu 3 405B utilizes Reinforcement Learning from Verifiable Rewards (RLVR) to outperform DeepSeek V3 and GPT-4o, pushing scaling potential despite compute limitations.
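The core idea behind RLVR is that the reward signal comes from a deterministic, programmatic check (does the math answer match? does the code pass tests?) rather than a learned reward model. The sketch below illustrates that idea only; Ai2’s actual verifiers are more elaborate, and the `Answer:` convention and function name here are illustrative assumptions, not the paper’s API.

```python
def verifiable_reward(completion: str, ground_truth: str) -> float:
    """Binary reward: 1.0 only if the model's final answer passes a deterministic check.

    Unlike a learned reward model, this signal can't be gamed by stylistic
    tricks - the answer is either verifiably correct or it isn't.
    """
    # Hypothetical convention: the model ends its reasoning with "Answer: <x>".
    marker = "Answer:"
    if marker not in completion:
        return 0.0  # no parseable final answer, no reward
    answer = completion.rsplit(marker, 1)[1].strip()
    return 1.0 if answer == ground_truth.strip() else 0.0

print(verifiable_reward("Let me think... Answer: 42", "42"))  # 1.0
print(verifiable_reward("Probably 42?", "42"))                # 0.0
```

During RL training, such rewards replace (or complement) a preference-based reward model on tasks where correctness can be checked automatically.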

The freshest research papers, categorized for your convenience

There were quite a few super interesting research papers this week; we mark the ones we recommend most with 🌟 in each section.

Reinforcement Learning and Generalization

Fine-Tuning vs Reinforcement Learning

OpenAI’s O-Series Models: Capabilities and Safety

Guardrails and Their Breakers

Novel Architectures & Training Paradigms

Efficient Model Scaling & Optimization

That’s all for today. Thank you for reading!


Please share this article with your colleagues if it can help them enhance their understanding of AI and stay ahead of the curve.

