In 2017, Bill Gates suggested that robots should be taxed. The proposal attracted enormous attention — a famous technologist endorsing a tax on the machines his industry was building — and generated a policy conversation that has persisted, in various forms, for the better part of a decade. The robot tax is now a fixture of AI policy discussions. It is also, as a complete solution to the AI transition, inadequate. But so is everything else that's been proposed.

This is not a counsel of despair. It's an argument for complexity — for understanding why no single policy instrument can address a disruption this large, and what a serious multi-part response might look like.

The Robot Tax: What It Gets Right and Wrong

The robot tax — variously described as a levy on automated systems that replace human workers, a tax on AI-generated revenues, or a fee on automation above certain thresholds — has genuine appeal. It addresses the distribution problem directly: if automation generates returns that don't flow to displaced workers, a tax creates a mechanism to redirect some portion of those returns.

The conceptual problem is definitional. What counts as a robot? An AI-assisted spreadsheet that increases one analyst's productivity by 30% — is that a robot? A legal research tool that allows one associate to do the work of three — taxed or not? The automation that the robot tax proposes to capture is not discrete equipment that can be counted. It is embedded in software, in processes, in the intelligence layered into systems that are not obviously distinguishable from productivity tools that have existed for decades.

South Korea implemented what is sometimes called a robot tax in 2017 — actually a reduction in tax deductions for automation investment, rather than a direct levy. The empirical evidence on its effects is mixed: it may have modestly slowed automation adoption in some sectors, but its revenue generation was minimal and its effect on displaced workers essentially zero.

A robot tax could be part of a serious response. It is not, by itself, a solution.

Universal Basic Income: The Promise and the Constraint

UBI has become the policy proposal most closely associated with the AI transition in popular discourse. The logic is simple: if AI displaces enough workers, provide everyone with enough income to live without work. Several high-profile experiments — in Stockton, California; in Finland; in Kenya through the GiveDirectly program — have produced encouraging results for the effects of unconditional income on wellbeing and economic mobility.

The scale problem is significant. A UBI of $1,000 per month for all US adults would cost approximately $3 trillion annually — larger than the entire current federal discretionary budget. This is not an impossible number in an economy that generates $25 trillion in GDP, but it requires either a massive tax increase, a massive reallocation of existing spending, or both. None of these options is politically tractable under current conditions.
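The arithmetic behind that $3 trillion figure can be checked in a few lines. This is a rough sketch: the adult-population and GDP figures below are round assumptions for illustration, not official statistics.

```python
# Back-of-the-envelope estimate of the gross annual cost of a $1,000/month
# UBI for all US adults. Population and GDP figures are rounded assumptions.

US_ADULTS = 258_000_000        # assumed US adult population (rounded)
MONTHLY_BENEFIT = 1_000        # dollars per adult per month
US_GDP = 25_000_000_000_000    # assumed nominal US GDP, in dollars

annual_cost = US_ADULTS * MONTHLY_BENEFIT * 12
share_of_gdp = annual_cost / US_GDP

print(f"Gross annual cost: ${annual_cost / 1e12:.1f} trillion")
print(f"Share of GDP: {share_of_gdp:.1%}")
```

Note this is the gross cost; proposals that phase out the benefit with income, or net it against existing transfers, arrive at lower net figures — which is why cost estimates for UBI vary so widely.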

The more fundamental concern is whether income is sufficient. UBI addresses the economic dimension of AI displacement. It does not address the identity dimension, the social dimension, the question of meaning and contribution. A society in which large numbers of people have adequate income but nothing socially valued to do is not obviously better than a society working through the disruption more slowly.

UBI deserves serious consideration as a component of a response. The claim that it is the solution deserves skepticism.

AI Regulation: Necessary but Insufficient

The proposition that better regulation of AI development would prevent displacement is, in its strong form, not credible. The technology is being developed across jurisdictions, by actors with enormous resources and strong competitive incentives, in a geopolitical environment where unilateral restraint by any country would likely shift development elsewhere rather than slow it globally.

The more defensible version of the regulatory argument is not about slowing AI but about shaping its deployment: requiring impact assessments before AI systems are deployed in high-stakes occupational settings, mandating worker notice periods before significant automation, requiring that AI systems in certain domains meet higher bars for safety and transparency. These are meaningful interventions. They are not, collectively, a solution to the structural transformation that AI is producing.

Reskilling: Already Addressed

The case against reskilling as sufficient response has been made at length elsewhere. The short version: retraining works for some workers in some circumstances and cannot absorb displacement at the scale that AI is likely to produce.

What a Serious Response Actually Looks Like

A serious response to the AI transition looks less like a silver bullet and more like the policy architecture built around the New Deal — a set of interlocking institutional changes, each addressing a different dimension of the problem, that together create a floor below which the disruption cannot push people.

That architecture would include: a redesigned social insurance system that is not dependent on employment status; mechanisms for distributing AI productivity gains more broadly; support systems for workers in extended transition periods measured in years; investment in the care economy where human labor remains irreplaceable; and governance frameworks for AI deployment that protect workers and communities from the most abrupt forms of displacement.

Building this architecture requires accepting that there is no shortcut — no single instrument that resolves the complexity without the political work of building consensus for multiple interventions simultaneously.

The robot tax is not the answer. Neither is UBI, or reskilling, or regulation. The answer is all of them, and more, and it requires political will that does not yet exist at the scale the problem requires. After Work is an argument for developing that will before the window closes — before the institutional path dependencies of the AI transition lock in arrangements that are very expensive to change.

The silver bullet is seductive precisely because the real work is hard. But the real work is what's required.
