News
The latest article by our CEO Marius Khan in Tagesspiegel Background Digitalisierung & KI addresses a topic that is of intense concern to us: the conflict surrounding AI, government use, and geopolitical dynamics—specifically the dispute between Anthropic and the Pentagon—is more than an isolated case.
It shows how closely technological development, regulation, and questions of sovereignty are intertwined. This has key implications, especially for European healthcare systems: Who will control critical infrastructure in the future? And on what technological foundations?
Read the full article at Tagesspiegel here:
https://background.tagesspiegel.de/digitalisierung-und-ki/briefing/was-der-anthropic-pentagon-streit-ueber-das-europaeische-gesundheitssystem-verraet?utm_source=linkedin&utm_medium=organic&utm_campaign=2026-03-05_linkedin_organic_fokusseite-digitalisierung–ki
The article was translated into German from the original English version, which you can read below.
——————
Europe Is Outsourcing the Rules of Medicine — and Washington Just Showed Why That Matters
The Pentagon’s standoff with Anthropic should alarm European health policymakers — but probably won’t.
By Marius Khan
The Pentagon’s dispute with Anthropic has escalated to the courts, with a hearing scheduled for March 24 and OpenAI standing ready to fill the gap in classified military networks. It has been reported as a story about Silicon Valley ethics and the Trump administration’s taste for confrontation. It is both of those things. But for those of us working at the intersection of AI and European healthcare, it is something else: a demonstration of where the rules governing medical AI are actually being written, and how far that is from any democratic deliberation on this continent.
What the episode reveals is the underlying logic of how AI infrastructure governance actually works: not through legislation or public deliberation, but through bilateral negotiations between technology companies and their most powerful clients. The terms on which AI operates (what it can do, whose data it uses, what safeguards it carries) are set in procurement rooms, then radiate outward into the broader commercial ecosystem. A contract dispute in Washington is already reshaping the operational choices of organisations that have nothing to do with defence.
Europe is the clearest case study for what this shift looks like in practice. Pharmaceuticals are regulated rigorously. Reimbursement policy is debated openly, at length, sometimes endlessly. The AI Act exists. Data protection frameworks are, at least in theory, the envy of the world. Yet even here, there has been no serious reckoning with the fact that the digital infrastructure underpinning modern medicine — the data pipelines, model architectures and clinical AI platforms on which health systems increasingly depend — is being built, financed and governed outside institutional frameworks, by companies subject to exactly the kind of pressure Anthropic just experienced.
I lead a European software company focused on medical and life sciences applications. We work with clinicians, pharmaceutical companies and research institutions across the globe. The transformation we observe is not primarily about AI outputs, such as chatbots drafting discharge summaries or algorithms flagging findings on radiology scans, though those matter. The deeper shift is upstream, in the systems that structure drug development, clinical trial design and regulatory submissions. Long before a doctor prescribes a treatment, algorithms now help determine which molecules are worth pursuing, which patient populations are statistically attractive for research, and which endpoints are measurable. These decisions were previously embedded in professional norms and regulatory negotiation. They are increasingly embedded in software.
When an algorithm defines which trial designs are efficient, it reshapes what research gets funded. When a data model systematically excludes patients with incomplete records, it quietly narrows representation. When predictive systems influence risk assessments, they begin to recalibrate standards. None of this happens through public debate or new law. It is built into the software, through the features companies choose to develop and the systems hospitals decide to buy.
Clayton Christensen’s theory of disruptive innovation is instructive here. Incumbents fail not from incompetence, but because their incentives prevent timely adaptation. Disruption begins at the margins; it is dismissed as secondary; by the time the architecture has shifted, it is too late to change it without enormous cost. Europe’s healthcare incumbents — hospitals, regulators, academic research centres, public health agencies — are built for caution, peer review and incremental validation. Those qualities protect patients. They also mean that by the time European institutions formally engage with an infrastructure question, the infrastructure is already embedded.
The Anthropic episode illustrates this with unusual clarity. Before any public policy debate, a single company’s architecture — its values, its technical choices — had become load-bearing infrastructure for national security. The same process is underway in healthcare, less visibly and with less drama. Medical AI companies become embedded in workflows long before anyone asks who set the parameters and on whose behalf.
The United States is at least having the argument out loud. Europe risks something more insidious: assuming that existing regulatory frameworks are sufficient to oversee technologies that evolve far faster than their approval cycles. The AI Act addresses certain categories of high-risk systems. It does not address the structural question of who controls the data standards, modelling assumptions and interoperability rules that govern how medical AI operates at scale. Once those are widely adopted, they are difficult to unwind. They shape clinical and research incentives for years, sometimes decades.
There is a version of this that ends well: accelerating drug discovery, improving diagnostics for rare conditions, extending precision medicine to populations currently excluded. AI can genuinely do all of this, and Europe needs that ambition if it wants to remain relevant in life sciences. But ambition without clarity about governance is not a neutral position. It is a decision to let others govern on your behalf.
Ensuring that training data reflects European patient populations. Guaranteeing that algorithmic efficiency does not quietly exclude complex or rare conditions from research pipelines. Deciding which safeguards must be non-negotiable regardless of what a procurement contract in another jurisdiction demands. These are not abstract ethical concerns. They are questions of sovereignty, about who retains strategic control over the digital backbone of European medicine.
Healthcare is one of the few domains where European societies have deliberately chosen to temper economic logic with moral commitments: to equity, access and care for the vulnerable. The Anthropic-Pentagon dispute showed, in compressed and dramatic form, what it looks like when those commitments collide with the operational logic of AI infrastructure governance. In that case, the collision was public, contentious and unresolved. In European healthcare, the same collision is happening quietly, incrementally, and without headlines.