
Episode 66 | Apr 21, 2023

Can We Govern AI? with Marietje Schaake

When it comes to AI, what kind of regulations might we need to address this rapidly developing new class of technologies? What makes regulating AI and runaway tech in general different from regulating airplanes, pharmaceuticals, or food? And how can we ensure that issues like national security don't become a justification for sacrificing civil rights?

Answers to these questions are playing out in real time. If we wait for more AI harms to emerge before putting proper regulations in place, it may be too late.

Our guest Marietje Schaake was at the forefront of crafting tech regulations for the EU. In spite of AI’s complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products into the world without oversight. 

Correction: Marietje said antitrust laws in the US were a century ahead of those in the EU. In fact, competition law in the EU was enacted as part of the Treaty of Rome in 1957, almost 70 years after the US Sherman Act of 1890.

Major Takeaways

  1. There are fundamental challenges to regulating AI that make it different from other technologies. Data sets aren’t accessible to lawmakers or journalists and stay in proprietary hands. Updates and hyper-personalized experiences make the product or service fluid, which in turn makes it harder to regulate. Global companies operate from their local jurisdiction but end up reaching consumers in a different context on the other side of the world.

  2. We shouldn’t be discouraged by the complexity of the problem. It’s similarly difficult to regulate chemicals, financial services, and food. There’s enormous complexity, wide variety, and constant innovation in many sectors beyond tech, and yet regulation works, for the most part.

  3. The US is behind the EU when it comes to tech regulation. There’s a twin set of regulations in the EU that address related but different aspects of tech company business models: the Digital Services Act and the Digital Markets Act. These regulations are still proving their effectiveness, and are meant to confront speech and content issues, trust and disinformation issues, and the imbalance of power between large tech companies and the public.

  4. Enforcers should be empowered with stronger mandates and capabilities. Budgets for enforcers are undersized; there’s relatively little investment in regulation enforcement. Sanctions need teeth in order to be felt. Levying a $3 billion fine on a company that’s worth $200 billion is simply a cost of doing business at that scale.

  5. The next decade will be crucial for AI policymaking. Most of the regulations that will be significant in dealing with AI and other emerging technologies are still in the pipeline. Regulations should be flexible so they can expand to cover new threats and challenges as they surface. Success will depend on the rigor and effectiveness with which enforcement happens.

Other recommended reading

The AI Dilemma 

Tristan Harris and Aza Raskin’s presentation on existing AI capabilities and the catastrophic risks they pose to a functional society. Also available in podcast format.

The Wisdom Gap

This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.

The EU’s Digital Services Act (DSA) & Digital Markets Act (DMA)

These two pieces of legislation aim to create safer and more open digital spaces for individuals and businesses alike.
