philosophy · local-first · privacy

On Decentralized Acceleration

Applying Vitalik's d/acc framework to AI — why every cloud prompt is a quiet transfer of power, and what the alternative looks like.

Vitalik's d/acc framework asks a simple question about every technology: does it concentrate power, or distribute it? Accelerate progress, but tilt the balance toward defense over offense. Toward sovereignty over dependency. Bio defense, cyber defense, info defense — each pillar targets a domain where decentralization builds resilience.

I want to apply that question to AI. Not to alignment or superintelligence — to something closer to the ground. Every time you use a cloud-hosted AI, you are quietly transferring power from yourself to a platform. And almost nobody is talking about it.

You Are Subsidizing Your Competition

When you use a cloud-hosted AI, you don't just send a query. You send context. Your prompts encode what you know, what you're working on, what makes your work valuable. Over time, this adds up to something that looks like institutional knowledge — your decision patterns, your domain expertise, your strategic thinking — all sitting on someone else's servers.

A law firm spent fifteen years building a playbook for cross-border M&A deals. That playbook is the edge that wins them clients over firms ten times their size. An associate pastes it into a cloud AI to draft a memo. That knowledge now lives on someone else's infrastructure.

A healthcare startup has six months of clinical trial data no competitor has seen. A researcher uploads it for pattern analysis. The one thing separating them from every other company chasing the same indication is now training data for a foundation model.

A hedge fund spent two years refining a quantitative strategy that consistently outperforms. An analyst feeds the parameters into an API. Their alpha — the thing LPs are paying 2-and-20 for — just left the building.

A SaaS company has a decade of support tickets that encode exactly how their product breaks and how to fix it. An engineer dumps them into a chatbot to build a knowledge base. That corpus is the closest thing they have to a moat, and they just gave it away.

These aren't hypotheticals. This is what normal use looks like. In 2023, Samsung engineers leaked semiconductor trade secrets through ChatGPT — source code, internal meeting notes, hardware data — not through negligence, but by using the product exactly as designed. Three separate incidents in twenty days. In West Technology Group v. Sundstrom, a court found that an employee used Otter.ai to transcribe and extract confidential data — customer records, pricing, proprietary manufacturing processes — before walking out the door.

It's not slowing down. The World Economic Forum's 2026 cybersecurity report found that data leaks from generative AI are now the top cyber concern for organizations — outranking adversarial AI attacks for the first time. According to LayerX Security's 2025 report, 18% of enterprise employees paste data into AI tools, and more than half of those paste events include corporate information.

Anything that can be leaked, will be leaked. And the quiet leak is worse than the dramatic one. Your data becomes training signal for models that get sold back to you and your competitors. Your competitive edge gets averaged into a foundation model that everyone can access. You're paying for the privilege of giving away the thing that makes you valuable.

Inference Is a Commodity. Your Data Is Not.

Models are converging. Providers are multiplying. The cost per token drops every quarter. OpenAI, Anthropic, Google, xAI, open-source models running on your own hardware — the intelligence layer is approaching feature parity.

Your data isn't converging. Your fifteen-year playbook, your proprietary dataset, your hard-won customer knowledge, your institutional memory. That's the moat. That's the thing a competitor can't replicate by subscribing to the same API.

The current AI paradigm asks you to give up the only non-commodity asset you have in exchange for the one thing that's rapidly becoming a commodity. That's not a trade. That's a subsidy.

The Law Isn't On Your Side

The legal framework makes this worse.

In Smith v. Maryland (1979), the Supreme Court ruled that information you voluntarily share with a third party carries no reasonable expectation of privacy. The government can access it without a warrant. This became known as the third-party doctrine. The original ruling was about phone numbers dialed through a telephone company, but the precedent has been stretched to cover nearly every form of digital communication. Justice Sotomayor called it "ill-suited to the digital age" in her Jones concurrence. The Court narrowed it slightly in Carpenter v. United States (2018) for cell-site location data. But the core doctrine still stands — and every prompt you send to a cloud AI is information "voluntarily shared" with a third party.

We don't have to guess what this means in practice. In New York Times v. OpenAI, a federal court ordered OpenAI to preserve all ChatGPT output log data — including conversations users had explicitly deleted. The NYT initially demanded access to 1.4 billion private conversations and eventually negotiated that down to 20 million. The court found that users "voluntarily submitted their communications" to OpenAI, and that privacy interests didn't override discovery needs.

Deleted conversations. Preserved indefinitely. Reviewable by opposing counsel in a lawsuit the users had nothing to do with.

This is Smith v. Maryland applied to AI in everything but name. You used the service voluntarily. You assumed the risk.

And the regulatory picture keeps getting harder. The EU AI Act puts strict rules on high-risk AI applications in healthcare, law, and finance — the exact fields where professionals most need AI and most need to protect their data. GDPR enforcement has crossed €5 billion in fines. In the U.S., there's no federal AI privacy law. States are passing conflicting regulations. It's only getting more complex.

The simplest way to be compliant by default is to never send the data in the first place.

Keep Your Data. Only Pay for Inference.

If inference is the commodity and data is the moat, the architecture should reflect that. Local-first. Provider-agnostic. Open source.

This is what I've been building with Osaurus — a native macOS AI runtime, MIT-licensed. Your data never leaves your machine. Your context — memory, conversations, workflows — stays local, owned by you, portable across providers. The model is interchangeable: OpenAI, Anthropic, Gemini, Grok, a local model on your own hardware. Switch whenever you want. Lose nothing. No tracking, no telemetry, no data collection. Not "anonymized." Not "used only to improve the service." Zero.
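Provider-agnostic mostly means keeping the client thin: your context is assembled locally, and only the inference endpoint changes. As a sketch — the endpoint URLs, provider names, and env-var names below are illustrative assumptions, not Osaurus's actual API — here is what "switch whenever you want, lose nothing" looks like when every backend speaks an OpenAI-compatible wire format:

```python
# Minimal provider-agnostic request builder. The conversation history and
# memory live with you; switching providers changes only the URL and
# credentials, never the shape or location of your data.
# All endpoint URLs and env-var names here are illustrative assumptions.

PROVIDERS = {
    "openai":    {"base_url": "https://api.openai.com/v1",    "key_env": "OPENAI_API_KEY"},
    "anthropic": {"base_url": "https://api.anthropic.com/v1", "key_env": "ANTHROPIC_API_KEY"},
    "local":     {"base_url": "http://127.0.0.1:8080/v1",     "key_env": None},  # local runtime
}

def make_request(provider: str, model: str, messages: list[dict]) -> dict:
    """Build a chat-completions request for any backend that speaks the
    OpenAI-compatible format. `messages` -- your context -- is built
    locally and is identical no matter which backend you pick."""
    cfg = PROVIDERS[provider]
    return {
        "url": f"{cfg['base_url']}/chat/completions",
        "needs_key": cfg["key_env"] is not None,  # local inference needs no credential
        "body": {"model": model, "messages": messages},
    }
```

The design point is that the request body — the part carrying your knowledge — is constructed on your machine and looks the same everywhere; the provider is reduced to a URL you can swap at will.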

And here's what most people miss: keeping your data local isn't just safer — it's better. As context windows grow and on-device vector databases get fast and cheap, the richest AI experience comes from deep, persistent, local context. Your memory compounds over time. Your agents learn your workflows. Your knowledge base gets sharper with every interaction. None of that compounding happens when your context is scattered across cloud providers who delete your history or silo it behind their own walls. Local-first isn't a tradeoff. It's an advantage.
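The compounding-memory loop is less exotic than it sounds: an append-only local store plus similarity search, with no cloud round-trip. Here's a toy sketch — a trivial bag-of-words "embedding" stands in for a real local embedding model, and the class names are my own invention, not any actual library:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' standing in for a real local model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing words
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LocalMemory:
    """Append-only on-device store: every interaction makes recall
    sharper, and nothing ever leaves the machine."""
    def __init__(self) -> None:
        self.entries: list[tuple[str, Counter]] = []

    def remember(self, text: str) -> None:
        self.entries.append((text, embed(text)))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[1]), reverse=True)
        return [text for text, _ in ranked[:k]]
```

A real system would swap in an actual embedding model and an on-device vector index, but the architecture is the same: writes accumulate locally forever, and retrieval quality grows with the store — the compounding the cloud can't give you.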

This isn't the only way to build it. But it proves the architecture works. You don't have to trade sovereignty for capability.

Build Accordingly

Vitalik draws a line between defense "like democratic Switzerland" and "the lords and castles of medieval feudalism." The current AI industry is building feudalism. The model providers are the lords. The API is the drawbridge. Your data is the tribute you pay for access to intelligence.

But picture the alternative. Thousands of local agents running on your hardware, compounding your knowledge over years. A doctor's AI that deeply understands their patients' histories — not because a cloud provider stores it, but because the doctor owns it. A law firm's AI that gets sharper on every deal because its institutional memory never leaves the building. A founder's AI that knows their codebase, their customers, their strategy — and works for them, not for a platform.

That's what decentralized acceleration looks like applied to AI. Not slower. Not weaker. Faster, because your context compounds. Stronger, because your moat deepens with every interaction instead of eroding.

The models are commoditizing. The providers are multiplying. The only scarce resource left is the knowledge you've spent years building. Protect it.

Inference is all you need. Everything else should be owned by you.