Policy in Action: How to Balance Innovation and Responsibility in AI

Regulating a new technology — especially one as powerful and promising as artificial intelligence — is a meaningful challenge for lawmakers. Amid the regulatory debates, CHT was one of the earliest voices in the call to use liability to drive responsible AI innovation.

Introduction

Building on that early advocacy, CHT has developed the Framework for Incentivizing Responsible Artificial Intelligence Innovation and Use as a resource for policymakers tackling the lack of accountability for harms in the AI industry. Our framework outlines key principles that can inform effective AI legislation and shift incentives in the tech ecosystem in a targeted yet propulsive way.

Why this Matters

  • The scope of AI development can feel dizzying. Tech companies are currently locked in a race to develop AGI — which is leading to a “race to the bottom” with reckless AI design.

  • Early policy interventions can produce meaningful changes in the trajectory of this technology, driving substantially better outcomes for society.

  • Common arguments in tech policy say that we must choose between caution and innovation with AI. This is simply not true. Our framework shows that innovation and responsibility can and should be in proactive dialogue with each other throughout a regulatory process.

  • Our framework holds broad public appeal and taps into people’s basic sense of fairness.

Overview of the Framework

As AI systems are increasingly integrated into American national security, the economy, and citizens’ daily lives, it’s essential that innovation and safety be viewed in ongoing partnership. CHT offers a two-pronged approach that fills critical gaps in existing product liability and product safety law. With this approach, we ensure that consumers have clear recourse for harms caused by AI products, and that developers are incentivized to design AI responsibly. CHT’s approach does two things: 

  • Clarifies that AI is, in fact, a product. Developers assume the role and responsibility of product manufacturer, including liability for harms caused by unsafe product design or inadequate product warnings.

  • Incentivizes safe design and development. By establishing risk management standards for AI developers and deployers, our framework requires transparency from AI developers, and provides limited liability protections to those who uphold these requirements.

The framework's key principles include: safe innovation; consumer and small business protection; clarity and certainty; accountability; and addressing immediate harms.

How We Got Here

As an organization, we focus on solutions that balance what is necessary to shift the direction of the ecosystem with what will work well in implementation. Without both aspects, a solution falls short: either it is too modest to change incentives, or it proves unworkable in practice. Both considerations shaped how we developed our framework for responsible innovation.

How Our Framework Shifts Incentives

  • Right now, consumers have no meaningful legal recourse if an AI product harms them. Under current market dynamics, companies are incentivized to rush AI products to market without taking necessary safety precautions. This is not a healthy ecosystem for consumer safety, wellbeing, or genuine innovation. It's also not the legal standard to which we hold products across other successful American industries.

  • The core of CHT’s framework is product liability. Basic product liability means that AI companies would bear direct responsibility for how their products impact consumers — and that companies could be held financially accountable if their products cause harm. CHT’s framework also requires AI developers to take concrete steps to mitigate potential risks in exchange for some protections against potential liability. 

  • CHT’s framework thus incentivizes AI developers to take consumer safety into account at the outset of their design and development process, and thoroughly test their systems before deployment. This is the standard to which physical products are held in America across industries. Our approach upgrades existing and reliable legal concepts to meet our 21st century needs with AI.

  • With this framework applied, the tech ecosystem would be positioned to integrate consumer safety into visions of long-term innovation. Small and large businesses alike would receive clarity on their roles and responsibilities along the AI supply chain. And consumers would enjoy rigorously tested products, promoting the safe adoption of AI across society. 

Current Impact

CHT is actively working with policymakers across the country to educate them on this framework and encourage its adoption. We've already witnessed a shift in the discourse on liability. We’ve also seen momentum from policymakers and stakeholders in the responsible tech space around liability as a solution for AI:

  • In a 2024 Senate Judiciary hearing, former AI-industry insiders named liability as a policy solution that supports safe AI innovation in the United States. Senators engaged directly, asking specific questions about liability as a solution:

  • Helen Toner: “The good news is that there are light-touch, adaptive policy measures that can not only help navigate the issues I’ve focused on here — they would also help tackle many of the present-day challenges we are already experiencing from less advanced AI systems… Clarify how liability for AI harms should be allocated, to ensure AI developers and deployers are incentivized to take reasonable care when their products carry a risk of causing serious damage.”

  • David Evan Harris: “I would recommend…liability, clearly holding AI companies liable for the products they make.”

  • In 2025, Transparency Coalition AI spotlighted CHT’s framework in their coverage of state-level AI bills, describing AI liability as a “rising issue.”

CHT’s policy team continues to socialize the Framework for Incentivizing Responsible Artificial Intelligence Innovation and Use with key stakeholders, building bipartisan support and shaping the discourse for future legislative sessions.
