The Three Rules of Humane Tech

April 6, 2023

In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.

Correction: Aza mentions that infinite scroll is in the pockets of 5 billion people, implying there are 5 billion smartphone users worldwide. The number of smartphone users worldwide is now actually 6.8 billion.

Guests

Tristan Harris started his career as a magician. He studied persuasive technology at Stanford University, and used what he learned to build a company called Apture, which was acquired by Google. It was at Google where Tristan first sounded the alarm on the harms posed by technology that manipulates attention for profit. Since then, he's spent his career articulating the insidious effects of today’s social media platforms, and envisioning how technology can serve humanity. Today, Tristan is the executive director and co-founder of the Center for Humane Technology. 

Aza Raskin was trained as a mathematician and dark matter physicist. He took 3 companies from founding to acquisition before co-founding the Center for Humane Technology with Tristan and Randima Fernando. Aza is also a co-founder of the Earth Species Project, an open-source collaborative non-profit dedicated to decoding animal communication. Aza’s father, Jef Raskin, created the Macintosh project at Apple — with the vision that humane technology should help, not harm, humans.

Episode Highlights

Major Takeaways

Here are the three rules that Tristan and Aza propose: 

  • RULE 1: When we invent a new technology, we uncover a new class of responsibility. We didn't need the right to be forgotten until computers could remember us forever, and we didn't need the right to privacy in our laws until cameras were mass-produced. As we move into an age where technology could destroy the world far faster than our sense of responsibility can catch up, it's no longer okay to say that defining responsibility is someone else's job.
  • RULE 2: If that new technology confers power, it will start a race. Humane technologists are aware of the arms races their creations could set off before those creations run away from them, and they pay close attention to the ways their new work could confer power.
  • RULE 3: If we don’t coordinate, the race will end in tragedy. No one company or actor can solve these systemic problems alone. When it comes to AI, developers wrongly believe it would be impossible to sit down with their counterparts at other companies and hammer out how to move at a pace that gets this right, for all our sakes.
