Disinformation Then and Now. Guest: Camille François

March 18, 2021

Camille François, Chief Innovation Officer at the cybersecurity company Graphika and an affiliate of the Harvard Berkman Klein Center for Internet & Society, and other disinformation researchers have been fighting two battles over the last decade: one to combat and contain harmful information, and one to convince the world that these manipulations have an offline impact that requires complex, nuanced solutions. Camille believes that our common understanding of the problem has recently reached a new level. In this interview, she catalogues the key changes she observed between studying Russian interference in the 2016 U.S. election and helping convene and operate the Election Integrity Partnership watchdog group before, during, and after the 2020 election. “I'm optimistic, because I think that things that have taken quite a long time to land are finally landing, and because I think that we do have a diverse set of expertise at the table,” she says. Camille and Tristan Harris dissect the challenges and talk about the path forward to a healthy information ecosystem.


Camille François

Camille François works on the impacts of technology on society, with an emphasis on cyber conflict and information operations. She currently serves as the Chief Innovation Officer at Graphika where she leads the company’s work to detect and mitigate disinformation, media manipulation, and harassment. She was previously a Principal Researcher at Google, in the “Jigsaw” team, an innovation unit that builds technology to address global security challenges and protect vulnerable users. Camille has advised governments and parliamentary committees on both sides of the Atlantic, investigated Russian interference in the 2016 Presidential election on behalf of the U.S. Senate Select Committee on Intelligence, and served as a special advisor to the Chief Technology Officer of France. She is an affiliate scholar of the Harvard Berkman Klein Center for Internet & Society, a Fulbright Scholar, and a Mozilla Fellow. She holds a master’s degree in human rights from the Paris Institute of Political Studies (Sciences-Po) and a master’s degree in international security from the School of International and Public Affairs (SIPA) at Columbia University. Camille was distinguished by the MIT Tech Review in the prestigious "35 Innovators Under 35" annual award in the "Visionary" category for her work leveraging data science to detect and analyze deceptive campaigns at scale. She was also distinguished by TIME magazine in 2019 as one of the 100 most influential people in the world, for her work protecting open societies from the threat of disinformation.

Twitter: @camillefrancois

Episode Highlights

Major Takeaways

  • Although foreign interference has contributed to mis- and disinformation campaigns, inaccurate information doesn't come only from bots and trolls. Real people, including "blue check" verified social media accounts, are driving the spread of misinformation by sharing falsehoods that they believe to be true.
  • Researchers need more access to data from social media platforms. Companies like Facebook track which topics and online groups are likely to incite violence, but they do not share their methods or findings with researchers.
  • Online conflict can become offline violence. The events of January 6, 2021 showed how online activity shapes our offline lives in meaningful ways.
  • Social media platforms should make design interventions to contain the user-driven spread of misinformation and disinformation. For example, during the 2020 U.S. election, Twitter changed the retweet flow so that tapping the button prompted users to quote-tweet, encouraging them to add their own commentary before sharing. By adding friction to the sharing process, this design intervention made it harder to spread misinformation mindlessly.

Take Action

Share These Ideas