When it comes to AI, what kind of regulations might we need to address this rapidly developing new class of technologies? What makes regulating AI and runaway tech in general different from regulating airplanes, pharmaceuticals, or food? And how can we ensure that issues like national security don't become a justification for sacrificing civil rights?
Answers to these questions are playing out in real time. If we wait for more AI harms to emerge before putting proper regulations in place, it may be too late.
Our guest, Marietje Schaake, was at the forefront of crafting tech regulations for the EU. Despite AI’s complexity, she argues there is a path forward for the U.S. and other governing bodies to rein in companies that continue to release these products into the world without oversight.
Correction: Marietje said antitrust laws in the US were a century ahead of those in the EU. Competition law in the EU was enacted as part of the Treaty of Rome in 1957, almost 70 years after the US.
Marietje Schaake is the international policy director at the Stanford University Cyber Policy Center and an international policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence.
From 2009 to 2019, Marietje served as a Member of the European Parliament for the Dutch liberal democratic party, where she focused on trade, foreign affairs, and technology policy. She is an (advisory) board member of a number of nonprofits, including the Mercator Institute for China Studies (MERICS), the European Council on Foreign Relations (ECFR), the Observer Research Foundation (ORF), and AccessNow. She writes a monthly column for the Financial Times and serves as an (independent) special advisor to Margrethe Vestager, Executive Vice-President of the European Commission.
Tristan Harris and Aza Raskin’s presentation on existing AI capabilities and the catastrophic risks they pose to a functional society. Also available in podcast format (linked below).
This blog post from the Center for Humane Technology describes the gap between the rising interconnected complexity of our problems and our ability to make sense of them.
The two pieces of legislation aim to create safer and more open digital spaces for individuals and businesses alike.