Nvidia's market capitalization crossed $3 trillion in 2024. That figure, larger than the GDP of France, represents the financial value attached to one company's central role in the AI infrastructure build-out. Nvidia's revenue grew by 122% in a single year; its workforce grew by roughly 10%. The productivity gains embedded in that asymmetry did not distribute themselves broadly. They compounded inside a valuation whose benefits flow primarily to the small number of people who own the shares.
This is not an anomaly. It is the pattern. AI is generating value at a scale and pace that has few historical precedents — and the ownership of that value is concentrated in ways that should be part of the central public conversation about the AI transition, but largely isn't.
How Ownership Works in the AI Economy
To understand the wealth concentration problem, it helps to trace how value flows in the AI economy.
Large language models and other foundation AI systems are trained on data — enormous amounts of it. That data was generated by billions of people: writing, searching, creating, communicating, working. The data is collected by platforms, processed by AI labs, and converted into systems that generate value for the companies and investors who own those systems. The people who generated the training data receive nothing.
This is not a metaphor for exploitation. It is a description of the actual economic structure. A writer whose twenty years of published work was included in a training corpus receives no compensation when that work contributes to a system that replaces writing as a professional activity. A programmer whose open-source contributions trained code generation systems receives nothing when those systems reduce demand for programmers. A radiologist whose annotated imaging work helped train diagnostic AI receives no share of the value that system creates.
The legal framework that permits this is complex and contested — ongoing litigation around copyright in training data may alter some of it — but the economic reality is clear: the value created by AI systems flows to their owners, not to the people whose labor and creativity made those systems possible.
The Scale of Concentration
The magnitude of the concentration is historically unusual. McKinsey has estimated that generative AI could add between $2.6 trillion and $4.4 trillion annually to the global economy. That estimate may prove conservative given the pace of development.
The question of who receives that value is not an afterthought to these projections. It is the central question. And the current structure of AI ownership — concentrated in a small number of large companies, whose shares are owned disproportionately by the wealthiest households — suggests that the distribution will be extraordinarily narrow.
The top 1% of US households own approximately 54% of publicly traded stocks. This means that when AI productivity gains flow through corporate earnings into stock prices, more than half of those gains flow to a group that represents 1% of the population. The bottom 50% of households own approximately 1% of the stock market. For them, AI's productivity gains are effectively invisible in terms of wealth.
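The arithmetic behind that claim is simple enough to make explicit. The sketch below is an illustration, not an empirical model: it takes the approximate ownership shares cited above, assumes the remainder belongs to the middle 49% of households, and shows how a hypothetical market-wide equity gain would split across the three groups.

```python
# Toy illustration: how a market-wide equity gain maps onto household
# groups in proportion to their share of stock ownership.
# The 54% and 1% figures are the approximate shares cited in the text;
# the middle group's share is an assumption made for illustration.

ownership = {
    "top 1% of households": 0.54,      # ~54% of publicly traded stocks
    "next 49% of households": 0.45,    # assumed remainder
    "bottom 50% of households": 0.01,  # ~1% of the stock market
}

total_gain_usd = 1_000_000_000_000  # a hypothetical $1 trillion equity gain

for group, share in ownership.items():
    gain = total_gain_usd * share
    print(f"{group}: ${gain:,.0f}")
```

Under these assumptions, a trillion-dollar gain in aggregate stock value delivers $540 billion to the top 1% and $10 billion to the entire bottom half of households, which is what "effectively invisible" means in practice.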
Why This Doesn't Get Named
The wealth concentration problem in AI doesn't get named as directly as it should for several reasons.
The technology industry has cultivated a narrative of democratization: AI tools are available to everyone, AI will increase productivity for all workers, AI will lower the cost of goods and services. These claims are not false. They describe real effects. But they obscure the distinction between access to AI tools and ownership of AI systems — the same distinction that exists between using a road and owning the company that charges tolls on it.
There is also an ideological frame in which the owners of AI systems are celebrated as innovators whose rewards are earned by their contribution to human progress. This frame is not entirely wrong — building AI systems does require significant investment, talent, and risk. But it sidesteps the extent to which the value of those systems depends on inputs — human data, creative work, intellectual production — that were not compensated and were not freely given.
Finally, the concentration problem is politically uncomfortable because addressing it requires confronting interests that have significant political power. The technology companies and their investors have lobbying resources, political relationships, and the halo of cultural prestige that makes challenging their ownership of AI's returns more difficult than challenging, say, the returns of pharmaceutical companies or oil producers.
What Broader Ownership Could Look Like
Thinking about alternative ownership structures is still at an early stage, but several concrete proposals have emerged.
Data dividends: a mechanism by which people who generate data receive compensation when that data is used to train systems that generate commercial value. Jaron Lanier has advocated versions of this for years. The computational challenge of tracking data provenance through training processes is real but not insurmountable.
Sovereign AI wealth funds: public entities that hold stakes in AI infrastructure and distribute returns broadly — either as public services or as direct payments to citizens. Norway's oil fund provides a model, though the analogy is imperfect.
Expanded public ownership of AI research and infrastructure: reversing the current pattern in which publicly funded research produces the foundational breakthroughs that private companies then commercialize and privatize.
None of these proposals is without problems. All of them represent more serious engagement with the distribution question than the current political conversation offers.
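Of the three, the data-dividend mechanism is the easiest to make concrete. The sketch below is a deliberately simplified toy, not a proposed implementation: it assumes a fixed fraction of a system's commercial revenue is pooled and split pro rata by some attributed contribution weight. Every name, rate, and weight here is hypothetical, and the hard part the text names, attributing contribution through the training process, is assumed away.

```python
# Toy sketch of a data-dividend payout (all figures hypothetical):
# a fixed fraction of a model's commercial revenue is pooled and split
# among data contributors in proportion to an attributed weight.

def data_dividend(revenue: float, dividend_rate: float,
                  contributions: dict[str, float]) -> dict[str, float]:
    """Split (revenue * dividend_rate) pro rata by contribution weight."""
    pool = revenue * dividend_rate
    total_weight = sum(contributions.values())
    return {who: pool * w / total_weight for who, w in contributions.items()}

# Hypothetical example: $100M in revenue, 5% set aside as a data dividend,
# three contributors with assumed attribution weights.
payouts = data_dividend(
    revenue=100_000_000,
    dividend_rate=0.05,
    contributions={"writer": 2.0, "programmer": 3.0, "radiologist": 5.0},
)
for who, amount in payouts.items():
    print(f"{who}: ${amount:,.2f}")
```

The mechanism itself is trivial; the open questions are entirely in the inputs, namely what the dividend rate should be and how contribution weights could be measured at the scale of billions of data generators.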
The Democratic Stakes
The wealth concentration problem in AI is not just an economic problem. It is a democratic one. Extreme concentrations of wealth translate, through well-documented mechanisms, into extreme concentrations of political influence. If AI accelerates wealth concentration to the degree that current trends suggest, the political capacity to make the kinds of institutional changes that the AI transition requires will itself be diminished.
This is the recursive problem at the core of the AI transition: the wealth AI generates, if concentrated as current arrangements suggest, will be deployed partly to protect the arrangements that concentrate it. The window for addressing the distribution problem is also the window for preserving the political conditions in which the problem can be addressed.
After Work names this problem because naming it is the prerequisite for addressing it. The conversation about AI's benefits cannot proceed honestly while avoiding the conversation about who owns those benefits.