Spotlight | Apr 6, 2023
The Three Rules of Humane Tech
In our previous episode, we shared a presentation Tristan and Aza recently delivered to a group of influential technologists about the race happening in AI. In that talk, they introduced the Three Rules of Humane Technology. In this Spotlight episode, we’re taking a moment to explore these three rules more deeply in order to clarify what it means to be a responsible technologist in the age of AI.
Correction: Aza mentions infinite scroll being in the pockets of 5 billion people, implying there are 5 billion smartphone users worldwide. The actual figure is now about 6.8 billion.
Major Takeaways
Here are the three rules that Tristan and Aza propose:
RULE 1: When we invent a new technology, we uncover a new class of responsibility. We didn't need the right to be forgotten until computers could remember us forever, and we didn't need the right to privacy in our laws until cameras were mass-produced. As we move into an age where technology could destroy the world far faster than our responsibilities can catch up, it's no longer okay to say that defining what responsibility means is someone else's job.
RULE 2: If that new technology confers power, it will start a race. Humane technologists notice and think about the ways their new work could confer power, and they anticipate the arms races their creations could set off before those creations run away from them.
RULE 3: If we don’t coordinate, the race will end in tragedy. No one company or actor can solve these systemic problems alone. When it comes to AI, developers wrongly believe it would be impossible to sit down with their counterparts at other companies and hammer out how to move at a pace that gets this right – for all our sakes.
Other recommended reading
We Think in 3D. Social Media Should, Too
Tristan Harris writes about a simple visual experiment that demonstrates the power of one's point of view
Let’s Think About Slowing Down AI
Katja Grace’s piece about how to avert doom by not building the doom machine
If We Don’t Master AI, It Will Master Us
Yuval Harari, Tristan Harris, and Aza Raskin call on world leaders to respond to this moment at the level of the challenge it presents, in this New York Times opinion piece