Areas of Work

Artificial intelligence should enhance human well-being and serve the public good. But today’s AI landscape is shaped by incentives that reward speed, scale, and engagement. To ensure that AI upholds public values, the Center for Humane Technology supports targeted interventions in policy and tech design that promote human-centered products and genuine innovation. Our areas of work address key themes in AI design, correct dangerous incentive structures, and drive better outcomes for society.

Liability and Duties of Care: Establishing legal liability frameworks while fostering a duty of care rooted in social and moral responsibility.

Don’t Humanize AI: Reinforcing the essential boundary between humans and machines in order to uphold societal norms and legal frameworks that protect human well-being.

Personal Protections: Upgrading our social and legal frameworks to preserve human agency and dignity in the age of AI.

Whistleblower Protections: Balancing information asymmetries in the AI space to enhance democratic oversight and public safety.

Liability and Duties of Care

All major players in the AI ecosystem – AI developers, policymakers, and civil society actors – have a duty of care to ensure that AI technologies serve the public good and enhance our lives. This responsibility includes anticipating harms, protecting vulnerable populations, and promoting long-term societal well-being. One way to institutionalize this is by creating legal duties of care and establishing clear liability. These mechanisms establish enforceable obligations for AI developers to build their products safely from the outset. At the same time, they promote a cultural shift that redefines the role of AI companies in society — they’re not just innovators, but stewards with a structural and ethical duty to safeguard the public interest.

Why it matters

The speed of today’s AI development is astonishing. Companies are racing to build increasingly powerful AI models, often at the expense of safety, informed oversight, and public health. While many in the AI industry intend to build products that benefit society, the intensity of this competition inevitably erodes caution, leaving companies locked into a “move fast or be left behind” ethos. In such an ecosystem, the absence of liability standards and duties of care means that AI companies face little deterrence when rushing risky or poorly designed products to market.

We’re already seeing the consequences – from sycophantic and manipulative AI chatbots to jailbreakable AI systems that bypass critical safeguards. Legal frameworks like liability and duties of care do more than hold companies accountable after harm occurs – they establish a forward-looking obligation for developers to anticipate risks and protect the public.

These legal mechanisms help recenter AI development around a core principle: those who build and deploy powerful technologies have a meaningful role to play not just in innovation, but in anticipating risks and safeguarding the public. By embedding this duty into law, we can shift incentive structures and cultural norms to reward thoughtful, safety-conscious development over speed at any cost, ultimately prioritizing people and public values.

Policy in Action
  • In 2024, CHT released its Framework for Incentivizing Responsible AI as a light-touch policy approach that leverages existing product liability concepts to meet the age of AI. Learn more about our product liability approach to AI here.
