
Episode 35 | May 10, 2021

Mr. Harris Zooms to Washington

Back in January 2020, Tristan Harris went to Washington, D.C., to testify before the U.S. Congress on the harms of social media. A few weeks ago, he returned — virtually — for another hearing, Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds. He testified alongside Dr. Joan Donovan, Research Director at the Harvard Kennedy School’s Shorenstein Center on Media, Politics and Public Policy, and the heads of policy from Facebook, YouTube, and Twitter. The senators’ animated questioning demonstrated a deeper understanding of how these companies’ fundamental business models and design choices fuel hate and misinformation, and many of the lawmakers expressed a willingness to take regulatory action. But there’s still room for a more focused conversation. “It’s not about whether they filter out bad content,” says Tristan, “but really whether the entire business model of capturing human performance is a good way to organize society.” In this episode, a follow-up to last year’s “Mr. Harris Goes to Washington,” Tristan and Aza Raskin debrief on what was different this time and what work lies ahead to pave the way for effective policy.

Major Takeaways

  • In this hearing, members of the Senate Judiciary Subcommittee on Privacy, Technology, and the Law demonstrated an increasingly sophisticated understanding of social media’s impact on our society. Several senators focused on how the tech platforms’ business model and design shape the companies’ decisions, and repeatedly redirected questioning back to that frame.

  • Lawmakers are politically ready, and in some cases hungry, for regulation and action, but they may need more guidance on policy specifics. Changes to Section 230 of the Communications Decency Act may be necessary, but on their own they will not be sufficient to repair the harms caused by these tools.

  • In future hearings, the line of questioning should stay focused on the platforms’ fundamental design and business model. Discussion of the steps the tech companies are taking to mitigate the spread of misinformation and toxic content is a diversion when that content is generated by a broken business model.

  • The misinformation challenges in the U.S. may seem daunting, but the same tools are operating with even less oversight all around the world. The problems we’re facing are happening to a greater extent, and with more serious impacts, in countries where language barriers, political and economic instability, and weak infrastructure make a safe social media environment even harder to ensure.

  • Is the future world going to be run by digital open societies or digital closed societies? Right now, in most open societies such as the U.S., our digital infrastructure is in many ways undermining our capacity to solve difficult problems. How do we entirely re-envision our digital infrastructure to create stronger open societies? What does “Open Society 2.0” look like in a post-digital age?

Other recommended reading

U.S. Senate Hearing on Algorithms and Amplification: How Social Media Platforms’ Design Choices Shape Our Discourse and Our Minds, April 27, 2021

Watch the hearing in full, jump to highlights, or search the transcript.

The Media Manipulation Casebook

From the Technology and Social Change Project at the Harvard Shorenstein Center, led by Dr. Joan Donovan, this collection of case studies on media manipulation and disinformation is a resource for researchers seeking to understand how algorithmic amplification spreads misinformation and contributes to our toxic media environment.

The Society of the Spectacle, by Guy Debord

In this text, first published in 1967, French philosopher Guy Debord points to human performance as the focal point of society and explores contemporary power dynamics through the lens of political and cultural theory.
