The AI Dilemma

March 24, 2023

You may have heard about the arrival of GPT-4, OpenAI’s latest large language model (LLM). GPT-4 surpasses its predecessor in reliability, creativity, and the ability to follow intricate instructions. It can handle more nuanced prompts than previous releases, and it is multimodal, meaning it was trained on both images and text. We don’t yet fully understand its capabilities, yet it has already been deployed to the public.

At the Center for Humane Technology, we want to close the gap between what the world hears publicly about AI in splashy CEO presentations and what the people closest to the risks and harms inside AI labs are telling us. We translated their concerns into a cohesive story and presented the resulting slides to heads of institutions and major media organizations in New York, Washington DC, and San Francisco. The talk you're about to hear is the culmination of that ongoing work.

AI may help us achieve major advances like curing cancer or addressing climate change. But the point we're making is this: if the dystopia is bad enough, it won't matter how good the utopia we want to create is. We only get one shot, and we need to move at the speed of getting it right.

Guests

Tristan Harris started his career as a magician. He studied persuasive technology at Stanford University and used what he learned to build a company called Apture, which was acquired by Google. It was at Google that Tristan first sounded the alarm on the harms posed by technology that manipulates attention for profit. Since then, he has spent his career articulating the insidious effects of today’s social media platforms and envisioning how technology can serve humanity. Today, Tristan is the executive director and co-founder of the Center for Humane Technology.

Aza Raskin was trained as a mathematician and dark matter physicist. He took three companies from founding to acquisition before co-founding the Center for Humane Technology with Tristan and Randima Fernando. Aza is also a co-founder of the Earth Species Project, an open-source collaborative nonprofit dedicated to decoding animal communication. Aza’s father, Jef Raskin, created the Macintosh project at Apple with the vision that humane technology should help, not harm, humans.

Major Takeaways

  • Half of surveyed AI researchers believe there's a 10% or greater chance that humans will go extinct from our inability to control AI. When we invent a new technology, we uncover a new class of responsibility. If that technology confers power, it will start a race, and if we don’t coordinate, the race will end in tragedy.
  • Humanity’s ‘First Contact’ moment with AI was social media, and humanity lost. We still haven’t fixed the misalignment caused by broken business models that encourage maximum engagement. Large language models (LLMs) are humanity’s ‘Second Contact’ moment, and we’re poised to make the same mistakes.
  • Guardrails you may assume exist actually don’t. AI companies are deploying their work to the public quickly instead of testing it safely over time. AI chatbots have been added to platforms children use, like Snapchat. Safety researchers are in short supply, and most of the research that’s happening is driven by for-profit interests rather than academia.
  • The media hasn’t been covering AI advances in a way that lets you truly see what’s at stake. We want to help the media better understand these issues. Cheating on homework with AI or stealing copyrighted art for AI-generated images are just small examples of the systemic challenges ahead. Corporations are caught in an arms race to deploy their new technologies and gain market dominance as fast as possible; in turn, the narratives they present emphasize innovation and downplay potential threats. The onus should be on the makers of AI to prove their systems are safe, rather than on citizens to prove they are dangerous.
