In 1935, Congress passed the Social Security Act. It was imperfect, exclusionary — domestic workers and agricultural laborers, disproportionately Black, were explicitly left out. But the architecture it created proved durable enough to be expanded over the following decades, and today it provides income security to 70 million Americans. What made it consequential was not just what it did, but that it created an institutional structure — an administrative apparatus, a funding mechanism, a political constituency — that proved very difficult to dismantle or redirect.

Institutional economists call this path dependence. Once a system exists, it shapes what is possible next. The choices that create systems have disproportionate influence over the long arc of what follows. We are in such a moment now.

Why This Window Is Different

Transformative technologies don't produce transformative institutional responses automatically. They produce them during a specific window — a period when the disruption is visible enough to motivate action but the new arrangements haven't yet crystallized. Before the window, the disruption isn't real enough to generate political will. After the window, the path dependencies have set.

The electrification of industry in the early twentieth century had such a window. So did the post-World War II economic expansion. The institutional responses to those transitions — labor law, the GI Bill, the expansion of social insurance — happened because political conditions aligned with the scale of the disruption at a specific moment.

The AI transition is producing a window now. The disruption is becoming visible. Labor markets are starting to show the effects. The political conversation, while still dominated by reskilling narratives and techno-optimism, is beginning to engage with structural questions. This is the moment when the responses that will shape the next fifty years can be designed.

What Gets Locked In

Path dependence is not about destiny. It's about friction. The arrangements that get built now will be very expensive to change later — not impossible, but politically and institutionally costly in ways that make change rare.

Consider healthcare. The US employer-based health insurance system was an accident of World War II wage controls — employers, unable to compete on wages, began offering health benefits. That accident created an administrative infrastructure, an insurance industry, an employer constituency, and a political lobby that made the system nearly impossible to fundamentally restructure for the next eighty years. The Affordable Care Act, the largest health policy reform in a generation, worked entirely within the existing architecture rather than replacing it.

The analogous question for the AI transition: what institutional arrangements are being built right now, by default or by deliberate choice, that will be very expensive to change later?

The answer is disturbing. The arrangements being built by default include: concentration of AI capability and its economic returns in a small number of companies; labor market structures that treat work as the primary mechanism for distributing income and healthcare; regulatory frameworks that lag the technology by years; and educational systems designed for a knowledge economy that the technology is disrupting.

None of these are being designed. They're crystallizing through the accumulation of individual decisions that each seem reasonable in isolation.

The Alternatives That Are Still Available

What would it mean to make deliberate choices in this window?

On ownership: AI systems are built on data generated by billions of people, trained on creative and intellectual work produced across human civilization, and deployed in ways that generate enormous economic returns for a small number of shareholders. The question of whether some portion of that return should flow differently — through data dividends, AI commons, sovereign wealth funds — is still open. In twenty years, it will not be open. The ownership structures will have been litigated, legislated, and locked in.

On labor and security: The link between employment and social safety — healthcare, retirement, income support — made sense when full employment was achievable and stable. AI is making full employment structurally less likely. The question of whether economic security should remain dependent on employment status is answerable in this window. The administrative infrastructure to answer it differently still doesn't exist — which means it can be built. In twenty years, the existing infrastructure will have accumulated too many dependents to be restructured.

On education: The credential system was designed for an economy that valued stable, long-term skill profiles. The AI economy will value adaptability, continuous learning, and fundamentally different cognitive capabilities. The educational institutions being built now — the online platforms, the credential frameworks, the corporate learning infrastructure — will define what "education" means for the next generation of workers. Most of what's being built is trying to do the old thing faster.

The Asymmetry of Urgency

The most dangerous feature of this window is that the people with the most power to shape its outcomes have the least urgency to do so.

Technology companies are capturing value from the existing arrangement. Investors are benefiting from the status quo. Governments are managing multiple competing crises with limited bandwidth. The workers who will bear the costs of inadequate institutional response are not yet sufficiently organized, or sufficiently aware of the scale of what's coming, to generate the political pressure that this moment requires.

This is the classic structure of a policy failure in slow motion: the disruption is real, the need is urgent, the solutions are available — but the political timing doesn't align with the structural timing.

The Argument for Speed

The twenty-year window is not actually twenty years of comfortable deliberation. The path dependencies begin forming immediately. Every year of delay narrows the range of what's achievable. Every default arrangement that crystallizes makes the alternative harder to reach.

The choices available in 2026 are not the choices that will be available in 2030. And the choices available in 2030 are not the choices that will be available in 2040. The window is open now. The argument for moving quickly is not impatience. It's arithmetic.

After Work is a book about what those choices look like, why they matter, and what kind of society becomes possible if we make them deliberately rather than by default. The window is not a metaphor. It is the actual structure of how transformative change happens — and how, through inaction, it fails to happen.
