Areas of Work

The CHT policy team drives interventions that produce meaningful change in the tech ecosystem and in broader culture — without disrupting innovation.

These interventions reflect deep policy research, extensive dialogue with policymakers and industry insiders, and a precise understanding of the mechanisms behind dangerous tech design. CHT’s core interventions include:


Liability and Duties of Care


Don't Humanize AI


Personal Protections


Whistleblower Protections

Liability and Duties of Care

All major players in the AI ecosystem – AI developers, policymakers, and civil society actors – have a duty of care to ensure that AI technologies serve the public good and enhance our lives. This responsibility includes anticipating harms, protecting vulnerable populations, and promoting long-term societal well-being. One way to institutionalize this responsibility is to create legal duties of care and clear liability rules. These mechanisms impose enforceable obligations on AI developers to build their products safely from the outset. At the same time, they promote a cultural shift that redefines the role of AI companies in society: not just innovators, but stewards with a structural and ethical duty to safeguard the public interest.

Why it matters

The speed of today’s AI development is astonishing. Companies are racing to build increasingly powerful AI models, often at the expense of safety, informed oversight, and public health. While many in the AI industry intend to build products that benefit society, the intensity of this competition inevitably erodes caution, locking companies into a “move fast or be left behind” ethos. In such an ecosystem, the absence of liability standards and duties of care means that AI companies face little deterrence when rushing risky or poorly designed products to market.

We’re already seeing the consequences – from sycophantic and manipulative AI chatbots to jailbreakable AI systems whose critical safeguards can be bypassed. Legal frameworks like liability and duties of care do more than hold companies accountable after harm occurs – they establish a forward-looking obligation for developers to anticipate risks and protect the public.

These legal mechanisms recenter AI development around a core principle: those who build and deploy powerful technologies are responsible not just for innovating, but for anticipating risks and safeguarding the public. By embedding this duty in law, we can shift incentive structures and cultural norms to reward thoughtful, safety-conscious development over speed at any cost, ultimately prioritizing people and public values.

Policy in Action
  • In 2024, CHT released its Framework for Incentivizing Responsible AI, a light-touch policy approach that adapts existing product liability concepts to the age of AI. Learn more about our product liability approach to AI here.

Help us design a better future at the intersection of technology and humanity

Make a donation