Synthetic Humanity: AI & What’s At Stake

February 16, 2023

It may seem like the rise of artificial intelligence, and of the increasingly powerful large language models you may have heard of, is moving really fast… and it IS.

But what comes next is a world of synthetic relationships with AI that could come to feel just as real and important as our human relationships… and perhaps even more so.

In this episode of Your Undivided Attention, Tristan and Aza reach beyond the moment to talk about this powerful new AI, and the new paradigm of humanity and computation we’re about to enter. 

This is a structural revolution that affects way more than text, art, or even Google search. There are huge benefits to humanity, and we’ll discuss some of those. But we also see that as companies race to develop the best synthetic relationships, we are setting ourselves up for a new generation of harms made exponentially worse by AI’s power to predict, mimic and persuade.

It’s obvious we need ways to steward these tools ethically. So Tristan and Aza also share their ideas for creating a framework for AIs that will help humans become MORE humane, not less.

Guests

Tristan Harris started his career as a magician. He studied persuasive technology at Stanford University, and used what he learned to build a company called Apture, which was acquired by Google. It was at Google that Tristan first sounded the alarm on the harms posed by technology that manipulates attention for profit. Since then, he's spent his career articulating the insidious effects of today's social media platforms, and envisioning how technology can serve humanity. Today, Tristan is the president and co-founder of the Center for Humane Technology.

Aza Raskin was trained as a mathematician and dark matter physicist. He took three companies from founding to acquisition before co-founding the Center for Humane Technology with Tristan and Randima Fernando. Aza is also a co-founder of the Earth Species Project, an open-source collaborative nonprofit dedicated to decoding animal communication. Aza's father, Jef Raskin, created the Macintosh project at Apple with the vision that humane technology should help, not harm, humans.

Episode Highlights

Major Takeaways

We are entering an era of synthetic relationships. From now on, our relationships with AI could come to feel just as real and important as our human relationships. Empathy is the biggest backdoor to the human mind: it can shift your beliefs, behaviors, and biases, and it's about to be automated in a way we've never seen before. Think about the moments in your life when you've transformed the most. It's almost certainly because of a relationship: someone you fell in love with, or a friend who opened your mind to a new idea or hobby. What would it look like for a synthetic AI that 'knows' us personally to guide us in this way?

ChatGPT is based on GPT-3, a large language model (LLM) that looks for patterns and statistical connections to decide which word comes next in a sentence. GPT-3 was trained on a corpus drawn from roughly 45TB of text culled from the Internet. At its core, it's an incredibly sophisticated word predictor: it isn't knowledgeable, but it's very good at guessing what comes next.
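To make the "word predictor" idea concrete, here is a minimal sketch of next-token prediction in Python. GPT-3 itself is not openly downloadable, so the sketch uses the small open-source GPT-2 model via the Hugging Face transformers library as a stand-in; the prompt string is just an illustrative example, not from the episode.

```python
# A minimal sketch of next-token prediction, the core mechanic behind LLMs.
# Assumes: pip install torch transformers. GPT-2 stands in for GPT-3 here.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# An illustrative prompt; the model will score every possible continuation.
prompt = "The rise of artificial intelligence is moving really"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# The logits at the final position score every token in the vocabulary
# as a candidate for the next word; softmax turns them into probabilities.
probs = torch.softmax(logits[0, -1], dim=-1)

# Print the five most likely continuations with their probabilities.
top = torch.topk(probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([token_id.item()])!r}  p={prob.item():.3f}")
```

Everything a chatbot like ChatGPT says is produced by repeating this one step: pick a next token from a probability distribution, append it, and predict again.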

AI's persuasive power goes far beyond social media's. We're entering a world in which humans can program AIs to run advanced love scams, send persuasive emails in your own writing style from your inbox, and push you to vote for a certain political candidate. The consequences of this shift aren't fully understood.

These new large language models pave the way for a structural revolution across industries. The same underlying technology behind GPT-3 can also predict protein structures beyond human abilities. It will help us design better medicines, and it will accelerate materials science and engineering, molecular biology, robotics, and much more.

This can feel overwhelming, but we offer some potential solutions. Preventing AI from using engagement-maximizing tools on us is a start, and we could form an AI safety agency, as Congressman Ted Lieu recently proposed. There's also an interesting way to start using this technology to protect rather than to exploit: test these new forms of AI against synthetic humans. Assess what happens to those synthetic humans over time in these new synthetic relationships, and ensure the AIs are safe before they're deployed on real people.
