In 2019 I wrote about “preventing digital feudalism”—a phenomenon even more relevant today than it was then:
“Today’s tech companies originally used their broad networks to bring in diverse suppliers, much to the benefit of consumers. Amazon allowed small publishers to sell titles (including my first book) that otherwise would not have made it to the display shelf at your local bookstore. Google’s search engine used to return a diverse array of providers, goods, and services.
But now, both companies use their dominant positions to stifle competition, by controlling which products users see and favoring their own brands (many of which have seemingly independent names). Meanwhile, companies that do not advertise on these platforms find themselves at a severe disadvantage. As Tim O’Reilly has argued, over time, such rent seeking weakens the ecosystem of suppliers that the platforms were originally created to serve.
Rather than simply assuming that economic rents are all the same, economic policymakers should be trying to understand how platform algorithms allocate value among consumers, suppliers, and the platform itself. While some allocations may reflect real competition, others are being driven by value extraction rather than value creation.
Thus, we need to develop a new governance structure, which starts with creating a new vocabulary. For example, calling platform companies “tech giants” implies they have invested in the technologies from which they are profiting, when it was really taxpayers who funded the key underlying technologies – from the Internet to GPS.”
Next week I will be speaking at the AI Action Summit in Paris, which will bring together world leaders, tech companies, civil society, and academics, including the Nobel Prize-winning economist Daron Acemoglu, to galvanize momentum behind Europe's AI ambitions as it struggles to compete with China and the US. Last month, the UK announced its own AI action plan, promising to increase computing power 20-fold and "mainline AI into the veins" of the nation; last week, Rachel Reeves, the UK’s Chancellor of the Exchequer, set out in her growth plan the Government’s ambition to create ‘Europe’s Silicon Valley’. However, both initiatives reveal blind spots in how we think about AI governance, innovation, and the creation of public value.
AI is not a sector - it's a general-purpose technology that is shaping, and will continue to shape, every sector of our economy. Like many transformative technologies, from the hammer to nuclear power, AI can be used to create tremendous value or to cause serious harm. This makes steering its development toward the common good more urgent than ever. The real question isn't whether to regulate AI, but how to actively steer its development toward public value creation rather than value extraction.
Steering is not just about regulating; it also means paying attention to value creation itself. As I argue in my 2013 book, The Entrepreneurial State: Debunking Public vs. Private Sector Myths, much of modern technology came from collective investment, with public institutions like the US Defense Advanced Research Projects Agency (DARPA) and the European Organization for Nuclear Research (CERN) leading the way in the most high-risk, capital-intensive phase. What would Google be without the DARPA-funded internet? What would Uber be without the US Navy-funded GPS? What would Apple be without the CIA-funded touch-screen technology and the DARPA-funded voice assistant, Siri?
One of the dangers in the modern world is that the excessive rents earned by companies that benefited from these public investments - companies that often then dodge their tax contributions - are now being used to lure top talent away from universities and public labs. This brain drain exacerbates the unequal distribution of knowledge between the public and private sectors. It is impossible to regulate a system you don’t understand: what happens when all the knowledge is concentrated in five private companies?
Furthermore, as I outline in The Big Con: How the Consulting Industry Weakens Our Businesses, Infantilizes Our Governments and Warps Our Economies, the decimation of government capacity over the last 40 years through the outsourcing of knowledge to private-sector consultants has created a dangerous dependency. As NASA's head of procurement warned back in the 1960s, the danger of what he called 'brochuremanship' becomes particularly acute when government relies on 'Big Tech' companies for advice on technologies that the public sector no longer has the expertise to implement or regulate itself. This vicious cycle of capacity erosion further weakens the state's ability to shape and steer technological development in the public interest.
This challenge is particularly evident in the UK's approach to AI: the government's sweeping action plan reveals both the ambition and the blind spots in how we think about technological change.
While the focus on AI adoption and computing power is important, there's a false divide between innovation policy and regulation that needs addressing. Through a research project I directed with the Omidyar Network, we found that today's algorithmic systems are increasingly being used to extract what we call "algorithmic rents" - deploying AI and machine learning not to create genuine value, but to concentrate market power and extract wealth from users and smaller players in the digital economy.
The recent emergence of DeepSeek, a Chinese AI company, is challenging our assumptions about the inherent barriers to entry in AI development. By delivering performance comparable to leading AI models while requiring significantly less computing power and energy consumption, DeepSeek raises an intriguing possibility: could more efficient approaches to AI development help break the stranglehold that major cloud computing companies - Google, Amazon, Microsoft - have established through their control of vast computing resources? While it's too early to tell whether this technical breakthrough will translate into genuine market restructuring, it highlights the difference between cloud computing as infrastructure and AI services as applications. The fundamental question remains: will reducing the computational barriers to AI development be enough to ensure these technologies serve the public good, or will other forms of market concentration emerge?
Whatever the outcome, we already see familiar patterns of value extraction emerging in AI development. Just as platforms like Facebook and Google underwent what Cory Doctorow calls ‘enshittification’ - the process of degrading the user experience to extract more value - today's AI systems risk following the same extractive path. Companies developing generative AI are already showing the classic signs: using copyrighted content without fair compensation, centralizing value within their own services, and reducing value flows to the creators and developers they depend on.
As I argued in "Governments Must Shape AI's Future," innovation is not just serendipitous - it has a direction that depends on the conditions in which it emerges. The current AI infrastructure serves insiders' interests and risks exacerbating economic inequality. Without proper governance, AI risks becoming another engine of rent extraction rather than value creation. We need an 'entrepreneurial state' capable of establishing pre-distributive structures that share risks and rewards of AI innovation fairly from the start.
This connects directly to my earlier work with Gabriela Ramos, Assistant Director-General for Social and Human Sciences at UNESCO, where she leads the Management of Social Transformations Programme (MOST), on whose high-level advisory group I serve. In 2022 we wrote an article called "AI in the Common Interest," in which we outlined how AI can enhance our lives in many ways - from improving food production to bolstering resilience against natural disasters - but, without effective governance, risks creating new inequalities and amplifying existing ones. Between 2013 and 2021, China and the US accounted for 80% of private AI investment globally. As Ian Hogarth, Associate Professor at the UCL Institute for Innovation and Public Purpose, notes in his Financial Times piece on how Europe can build its first trillion-dollar company, this dominance wasn't inevitable: Europe pioneered much early AI development through DeepMind, but lacked the audacious capital and long-term investment needed to maintain its leadership. The same pattern risks repeating unless we fundamentally change how we invest in and govern AI development.
This history informs potential solutions. As Francesca Bria, former Chief Digital Technology and Innovation Officer for Barcelona and Associate Professor at the UCL Institute for Innovation and Public Purpose, argues, Europe's path toward digital sovereignty requires building what she calls the 'EuroStack': independent digital infrastructure that includes cloud computing, advanced chips, AI, digital IDs, and data spaces, all governed as public goods rather than as monopolistic enterprises.
The question isn't whether Europe or the UK can become an "AI superpower," but whether they can help build an AI ecosystem that serves the common good. This isn't about choosing between innovation and regulation, nor about top-down management of technological development. Rather, it's about creating the right incentives and conditions to steer markets toward the outcomes we want as a society. By establishing clear conditions for public investment and support, we can shape an AI future that creates value for all, rather than extracting it for the few.
Further reading:
AI Now Institute. (2024). Redirecting Europe's AI Industrial Policy: From Competitiveness to Public Interest. AI Now Institute Report.
Hogarth, I. (2024). How Can Europe Build Its First Trillion-Dollar Start-Up?, Financial Times, 18 December.
Mazzucato, M. (2019). Preventing Digital Feudalism, Project Syndicate, 2 October.
Mazzucato, M. (2024). The Ugly Truth Behind ChatGPT: AI is Guzzling Resources at Planet-Eating Rates, The Guardian, 30 May.
Mazzucato, M. and Gernone, F. (2024). Governments Must Shape AI's Future, Project Syndicate, 11 March.
Mazzucato, M. and Ramos, G. (2022). AI in the Common Interest, Project Syndicate, 26 December.
Mazzucato, M. and Strauss, I. (2024). The Algorithm and Its Discontents, Project Syndicate, 28 February.
Mazzucato, M., Schaake, M., Krier, S. and Entsminger, J. (2022). Governing Artificial Intelligence in the Public Interest, UCL Institute for Innovation and Public Purpose, Working Paper Series (IIPP WP 2022-12).