AI Myths and Misconceptions

May 11, 2023

A few episodes back, we presented Tristan Harris and Aza Raskin’s talk The AI Dilemma. People inside the companies that are building generative artificial intelligence came to us with their concerns about the rapid pace of deployment and the problems that are emerging as a result. We felt called to lay out the catastrophic risks that AI poses to society and sound the alarm on the need to upgrade our institutions for a post-AI world.

The talk resonated: over 1.6 million people have viewed it on YouTube as of this episode’s release date. The positive reception gives us hope that leaders will be willing to come to the table for a difficult but necessary conversation about AI.

However, now that so many people have watched or listened to the talk, we’ve found that there are some AI myths getting in the way of making progress. On this episode of Your Undivided Attention, we debunk five of those misconceptions. 


Tristan Harris started his career as a magician. He studied persuasive technology at Stanford University, and used what he learned to build a company called Apture, which was acquired by Google. It was at Google where Tristan first sounded the alarm on the harms posed by technology that manipulates attention for profit. Since then, he's spent his career articulating the insidious effects of today’s social media platforms, and envisioning how technology can serve humanity. Today, Tristan is the executive director and co-founder of the Center for Humane Technology. 

Aza Raskin was trained as a mathematician and dark matter physicist. He took three companies from founding to acquisition before co-founding the Center for Humane Technology with Tristan and Randima Fernando. Aza is also a co-founder of the Earth Species Project, an open-source collaborative non-profit dedicated to decoding animal communication. Aza’s father, Jef Raskin, created the Macintosh project at Apple — with the vision that humane technology should help, not harm, humans.

Episode Highlights

Major Takeaways

  • AI Myth 1: The good AI does will outweigh the bad. AI can potentially help solve some complex societal problems. However, if those solutions land in a broken, dysfunctional society, how many of them can actually be realized?
  • AI Myth 2: The only way to get safe AI products is by testing and deploying AI as quickly as possible into society. It's one thing to test these AI systems with real people; it's another to immediately bake immature technologies into fundamental social infrastructure, which quickly creates economic dependencies.
  • AI Myth 3: We can’t afford to pause or slow down. This is a race, and we need to stay ahead of China. This shouldn’t be a race to recklessly deploy AI as fast as possible. It should be a race to determine who can safely harness AI within their society. The overzealous AI race happening in the West is actually helping China move faster and catch up to the United States.
  • AI Myth 4: We shouldn’t worry about AI because it’s ‘just a tool.’ Conventional tools don’t have the ability to run in an autonomous loop. Today, anyone can give GPT-4 a goal, and it can make and execute a plan on its own, creating opportunities for potential societal chaos. And because these agents act autonomously, it won’t be possible to hold anyone accountable for the downstream effects.
  • AI Myth 5: The biggest threats from AI stem from bad actors abusing AI, not from the AI itself. No doubt there will be some bad actors, but the biggest risk comes from normal, everyday uses of AI to speed up processes that are already creating harms within our societies and for our planet. This raises the question of whether we can align AI with good outcomes at all, since it’s landing in the misaligned system of late-stage capitalism. By supercharging capitalism, AI will also supercharge that system’s existing misalignment.

Take Action

Share These Ideas