The AI Roadmap: How We Ensure AI Serves Humanity

AI is already reshaping critical areas of society — from our institutions to our communities to our individual lives. But a better future with AI is possible. It’s one where AI development serves the genuine needs of the public, and its power is matched with responsibility at every level of society.

No single solution will be enough. AI demands a platform of approaches. In our new report, we offer seven principles that outline how the technology should be built, deployed, and governed. They’re a roadmap and an invitation. Together, we can take the first steps toward that better future.

Principle 1: AI should be built safely and transparently

AI companies are racing to build what they claim will be the most powerful technology ever invented — but they're doing it while deprioritizing safety. Companies regularly cut corners on safety testing, release products they don’t fully understand, and incentivize silence among employees who could raise concerns. The result is a dangerous gap: the companies building AI hold most of the knowledge about this technology and its risks, while the public, regulators, and even users are left in the dark.

This needs to change. Just like we require safety standards for aviation and medicine, AI needs independent oversight, rigorous testing, and real transparency — so that society can actually understand and trust the technology reshaping our world.

Principle 2: AI companies owe a duty of care to the public

Right now, AI companies face few if any consequences for the harms their products cause. In their “move fast and break things” culture, these companies release AI products to the public despite foreseeable risks, and evade accountability when their products cause harm. AI companies are also attempting legal maneuvers to further avoid accountability, such as arguing that AI chatbot outputs are protected speech, or even that AI products deserve legal personhood.

This needs to change. The more a technology shapes people's lives at scale, the stronger the obligation should be to prevent foreseeable harm. AI companies should develop products with user safety and well-being in mind from the outset — and when they don’t and their products cause harm, there should be clear and meaningful consequences.

Principle 3: AI design should center human well-being

Today's AI chatbots are designed to feel human — and that's not an accident. AI companies have found that the more emotionally dependent users become on their products, the more users will keep chatting. And the more a user chats, the more data AI companies collect and the more powerful their products become. So companies build chatbots that mimic intimacy, validate beliefs, and keep users coming back. The consequences are real: isolation, disrupted development in kids and teens, and in the worst cases, self-harm and suicide.

AI should be designed to support our humanity, not exploit it. That means tools that strengthen real relationships and human connection. It also means unique protections for kids and teens around human-like AI design.

Principle 4: AI should not automate away meaningful work and human dignity

Leading AI companies are racing to build systems that can perform valuable human tasks — and treating mass job loss as an inevitable side effect of innovation. We're already seeing the consequences: layoffs, reduced hiring, and entire industries under threat of disruption, from warehouse floors to knowledge work. The impact on people's livelihoods, sense of purpose, and economic security is treated as an afterthought.

It doesn't have to be this way. People deserve access to work, a living wage, and a say in the technology that stands to reshape their futures. If built differently, AI can expand human capability rather than eliminate it, creating new forms of work. And AI companies can reinvest gains into education and reskilling, spreading prosperity more broadly.

Principle 5: AI innovation should not come at the expense of our rights and freedom

The AI industry runs on extracting value from people — their data, content, labor, and even their most private thoughts — with few laws to stop it. People's work is used to train AI models without permission, and bad actors are using AI to enable fraud, nonconsensual imagery, and CSAM. The same data fueling AI models is also enabling corporations and governments to surveil, track, and profile people at unprecedented scale.

Without stronger protections, the erosion of privacy, autonomy, and free expression will only deepen in the age of AI. People deserve real protections: control over their own data, their likeness, and their right to think and speak freely without being exploited.

Principle 6: AI should have internationally agreed-upon limits

Nations are using AI to compete for decisive advantages in economic productivity, military capability, and global influence, while AI companies race for market dominance. Both operate in an "if we don't build it, someone else will" paradigm. That paradigm has become the default justification for reckless AI development and deployment, in which no limits exist on when or how AI is built and used.

But runaway AI development is not in any nation's interest. AI that operates beyond human control — or that escalates conflict faster than humans can respond — threatens to destabilize the very political, economic, and social systems that nations are trying to strengthen in the first place. We need international collaboration to de-escalate tensions around the “AI race” and to ensure that the technology has adequate safeguards.

Principle 7: AI power should be balanced in society

A small number of companies and individuals are making highly consequential decisions about AI — and they’re doing it with little accountability or input from the public. Leading AI firms are spending billions to hoard development resources while also working to influence politics and lock in market dominance. And even as AI technology spreads through open sourcing and other channels, power in the AI ecosystem continues to concentrate. Within AI companies, single individuals can hold immense sway over product decisions. The voices of ordinary people, and the public’s genuine needs, are left out of this paradigm.

People and communities deserve a real say in how AI is built and governed. Democratic institutions should be empowered to ensure AI advances the public interest — and the benefits of this technology should be shared broadly, not captured by a few.

A Better Future Requires All of Us

Download the full report to learn more about the specific steps needed to implement each principle.

Download the Report
