Notes from the people building your future

The companies disrupting your career are now volunteering to manage the fallout. OpenAI has published its vision for the post-AI economy. Not everyone is invited.

Once the machines have taken the jobs, we are told, people will find meaning in other things. Community. Creativity. The pursuits that were always more important than work, if only we’d had the time. It’s a compelling vision. It also happens to be very convenient for the people selling the machines.

This is the philosophical sleight of hand at the centre of a new document published by OpenAI this week. Industrial Policy for the Intelligence Age: Ideas to Keep People First runs to 13 pages and covers a lot of ground: public wealth funds, shifting the tax burden from labour to capital, expanded social safety nets, even a four-day working week. It is, by the standards of corporate policy documents, unusually candid about the risks. It acknowledges that jobs will disappear, that economic gains could concentrate in the hands of a few, and that existing governance frameworks are not equipped to handle the disruption ahead.

Less candid, however, is who is doing the talking, and why now.

[Illustration: a large hand in a circuit-board-patterned sleeve holds puppet strings attached to a policy document; the workers standing in its shadow below cannot see the strings.]
Writing the rules of your own game.

Taking the vision at face value

It is worth taking the effort seriously before pulling it apart, and credit where it’s due: the writing is more self-aware than the average corporate white paper. It does not pretend that AI-driven upheaval will be painless or evenly distributed. The authors explicitly warn that without thoughtful policy, the technology could widen inequality, compounding advantages for those already positioned to benefit while communities with fewer resources fall further behind.

The proposals are organised around three goals:

  • distributing technology-fuelled prosperity more broadly,
  • building safeguards to reduce systemic risk,
  • and ensuring that economic power doesn’t become too concentrated.

To achieve this, OpenAI suggests governments consider taxing capital rather than labour, establishing public wealth funds, expanding social safety nets, and giving workers a formal voice in how AI is deployed in their workplaces.

“Workers using AI might well agree that it’s increasing their productivity without believing they’re seeing the benefits.”

— OpenAI, Industrial Policy for the Intelligence Age (2026)

The document also concedes, notably, that it doesn’t have most of the answers. Its ideas are framed as “intentionally early and exploratory,” a starting point for democratic debate rather than a finished blueprint. For a company with an $852 billion valuation and direct access to the policymakers it is supposedly advising, that is either admirably humble or conveniently non-committal, depending on your appetite for the benefit of the doubt.

Ambition without accountability

The further you read, the more the vagueness accumulates. Public wealth funds are suggested without any mechanism for funding them. Shifting the tax burden from labour to capital is floated without a suggested rate. Worker representation in AI deployment is encouraged without any indication of what that would look like in practice, or who would enforce it.

This is the structural problem with the blueprint. The language is ambitious; the substance is not. There is a significant gap between calling for a “new social contract” and specifying what one might actually contain, and the document never quite closes that gap. What remains is a series of directions without destinations, a perfectly reasonable output for an academic working paper, but a curious one for the company that is, by its own account, directly building the technology that makes all of this urgent.

The timing sharpens this. The report arrived this week as the Trump administration finalises a national AI framework, and with US midterms on the horizon. OpenAI president Greg Brockman has donated millions to Donald Trump, and the wider tech industry has poured hundreds of millions into pro-AI political action committees. A policy paper calling for measured, democratic governance of the technology, published at this particular moment, is not an entirely neutral act.

[Illustration: a bridge that begins as solid brick arches and dissolves into skeletal scaffolding and fog; the workers out in front hesitate as the structure gives way beneath them.]
The bridge is still being built. Destination: unclear.

The conflict of interest hiding in plain sight

Here is the central tension the report never quite addresses: OpenAI is proposing the governance framework for a disruption it is itself directly accelerating. The jobs being lost are lost partly because of decisions OpenAI and its peers have already made, and are continuing to make. Positioning itself as a thoughtful participant in the policy debate rather than a primary cause of the dislocation requires a certain amount of rhetorical work, and the report puts that work in.

A 2025 paper published in Frontiers in Artificial Intelligence examined this dynamic directly, arguing that by presenting themselves as champions of solutions like universal basic income, AI companies mask their own role in creating the conditions that make those solutions necessary. The study raised a further point that cuts to the heart of it: what is being offered is a basic economic safety net. What is not being offered is basic and affordable access to AI itself. The companies driving the change retain control of the tools; governments are invited to manage the fallout.

This is not unique to OpenAI. It is the defining move of an entire industry at a particular moment in its development: get ahead of regulation by writing the terms of the debate yourself. Whether that constitutes good-faith engagement with a genuine problem or sophisticated reputation management ahead of a regulatory crackdown is a question the report cannot answer, because it is also the question the report was designed to forestall.

The evidence base is thinner than the rhetoric

Advocates for universal basic income as a response to AI-driven job displacement tend to point to a body of pilot programmes as proof of concept. The reality is more complicated. The most prominent experiment in this space was funded by Sam Altman himself through his OpenResearch project, providing $1,000 per month to 1,000 low-income participants in Texas and Illinois over three years.

The results were, at best, mixed. Recipients worked slightly fewer hours and were marginally less likely to be employed than the control group. Proponents argued this reflected people making better choices about the quality of work they accepted. Critics argued it demonstrated exactly what opponents of UBI have always claimed: that unconditional income reduces the incentive to work. Both interpretations are defensible. Neither is conclusive. And the fact that the most significant UBI experiment in recent US history was bankrolled by the CEO of the company now advocating for UBI as a policy response to AI disruption is a detail that has received far less scrutiny than it warrants.

Meanwhile, the scale of what is actually happening in labour markets is becoming harder to wave away. Research from McKinsey suggests that generative AI could automate activities accounting for up to 70% of employees’ working time, affecting high-wage knowledge workers as severely as lower-wage administrative staff. This is not the familiar story of automation displacing routine manual labour. It is something with a broader demographic reach and, consequently, a broader political constituency.

The transition nobody is talking about

The “meaning” framing that runs through OpenAI’s document, and through much of the wider discourse around AI and work, rarely gets the examination it deserves. The argument, stated plainly, is that once these systems handle the economically productive parts of life, people will be freed to focus on what genuinely matters: relationships, creativity, community. Elon Musk has made a version of this case too, suggesting that in a world where AI and robots can do everything better than humans, the central question becomes one of meaning rather than survival.

It is a seductive idea. It is also a description of a destination without any serious engagement with the journey. The period ahead, which is already underway and likely to span decades, will be neither smooth nor evenly distributed. A Morgan Stanley study reported by Bloomberg found that AI-related job cuts have produced 8% net job losses in the UK over the last 12 months, the highest of any country surveyed and twice the international average. The UK government’s own Minister for Investment has acknowledged the need for some form of income support and lifelong learning provision to cushion the blow in industries that are disappearing. That is not the language of an orderly handover.

For the communities bearing the immediate cost, the promise of future meaning is cold comfort. The question of what happens between now and the post-scarcity future OpenAI is gesturing towards is precisely what a serious industrial policy would need to answer. To the company’s credit, the question is at least acknowledged. The answer is another matter.

[Illustration: executives meet inside a glass-walled boardroom while a diverse crowd of workers looks in from outside, waving for attention; they can see the meeting but have no seat at the table.]
The seat that wasn’t offered.

Who’s missing from this conversation

Perhaps the most revealing thing about Industrial Policy for the Intelligence Age is not what it says, but who wrote it. The workers whose jobs are most immediately at risk did not contribute to it. The communities already absorbing the first wave of AI-driven displacement were not consulted. The smaller economies without the lobbying infrastructure to participate in shaping US AI policy do not feature, despite its passing acknowledgement that “the conversation and the solutions must ultimately be global.”

There is a more immediate version of this absence, too. While OpenAI frames AI access as a foundational right, as essential as electricity or literacy, the market it has helped create is actively pricing people out. The freelancer rationing prompts to stay within a monthly allocation, or the small agency that quietly built workarounds rather than absorb another licensing cost, are not part of this policy conversation either. They are, however, very much part of the workforce it claims to be protecting.

What we have instead is a framework designed by one of the most powerful companies in the world, addressed primarily to the US government, timed for maximum political relevance, and presented as the beginning of a democratic process. That last part may even be genuine. But a debate that starts here, on these terms, with this as the reference point, has already been shaped before most of the participants even get the chance to pull up a chair.

That is not a reason to ignore it. The problems it identifies are real, the disruption is already measurable, and some version of the policy discussion it is calling for does need to happen. But there is a difference between a company contributing to the discussion and a company attempting to own it. OpenAI, characteristically, has opted for the latter.

What matters is not whether its suggestions are sensible. Several of them are. The real issue is why we are treating a corporate blueprint as the natural starting point for one of the most consequential policy debates of the coming decade, and what that says about who we have already decided gets to shape what comes next.

Thanks for reading! 📖

If you enjoyed this, follow me on Medium for more on design, psychology and technology.

References & Credits

  1. OpenAI. (2026, April). Industrial Policy for the Intelligence Age: Ideas to Keep People First. https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440601/Industrial Policy for the Intelligence Age.pdf
  2. Bellan, R. (2026, April 6). OpenAI’s vision for the AI economy: public wealth funds, robot taxes, and a four-day workweek. TechCrunch. https://techcrunch.com/2026/04/06/openais-vision-for-the-ai-economy-public-wealth-funds-robot-taxes-and-a-four-day-work-week/
  3. Bélisle-Pipon, J-C. (2025, February). AI, universal basic income, and power: symbolic violence in the tech elite’s narrative. Frontiers in Artificial Intelligence. https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1488457/full
  4. Moon, L. (2026, January 26). AI job cuts are landing hardest in Britain, Morgan Stanley says. Bloomberg. https://www.bloomberg.com/news/articles/2026-01-26/ai-job-cuts-are-landing-hardest-in-britain-morgan-stanley-says
  5. Chui, M. et al. (2023). The economic potential of generative AI: The next productivity frontier. McKinsey & Company. https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/the-economic-potential-of-generative-ai-the-next-productivity-frontier
  6. OpenResearch. (2024). Unconditional Cash Study: Initial Findings. https://openresearch.com/unconditional-cash
  7. Zeff, M. (2026). Greg Brockman’s political donations. Wired. https://www.wired.com/story/openai-president-greg-brockman-political-donations-trump-humanity/


Notes from the people building your future was originally published in UX Collective on Medium, where people are continuing the conversation by highlighting and responding to this story.
