
EVENT: Re-imagining The Web: Downstream Impact & Intentional Design for All

This post is part of The Catalyst newsletter series. Subscribe here for future resources.

On July 6, 2022, Center for Humane Technology and Mozilla got together on Twitter Spaces to talk about the consequences of today’s technology and how we can build a better tech ecosystem.

Speakers

  • Lindsey Shepard, Chief Marketing Officer, Mozilla
  • Imo Udom, Senior Product VP, Mozilla
  • Tristan Harris, Co-Founder, Center for Humane Technology
  • Randima Fernando, Co-Founder and Executive Director, Center for Humane Technology

Moderator

  • Xavier Harding, Content Writer, Mozilla

Two decades on from the launch of the internet, we’re faced with the complex ways technology has shaped our societies. While there have been incredible social and economic benefits, persuasive technology has also had significant downstream consequences for both individuals and society. At the crux of these harms is an engagement-maximizing, advertising-based business model fueled by the massive amount of personal data that platforms collect on users.

As we reimagine what the future of tech should look like, we envision technology that’s in service of society rather than extracting from it. In this informal discussion, CHT and Mozilla explore how better data and privacy practices can move the needle on today’s ad-based business model and what it looks like to shift our paradigmatic thinking in order to build more humane tech.

Listen below for the complete discussion.


LIGHTLY EDITED TRANSCRIPT

Xavier Harding:

All right. We are a go. Hey, everyone. My name is Xavier Harding and I'm on the content team over at Mozilla Foundation. First of all, thank you for joining us. A lot of people out there. Thanks for joining this Twitter Spaces today. Today, we're talking about the internet. Specifically, the ethics of the internet. We all know that when a company tracks your location, even if you told it not to, that's bad. When a tech company goes out of its way to prioritize your privacy, like encrypting your messages by default, that's good. That's great. These sorts of setups, these sorts of arrangements, are what we will be talking about today. All the ways tech companies do and don't look out for us and our data and the importance of tech that ultimately treats our data in an ethical way. Today with me to talk about this are four people. One, Lindsey Shepard (Shep), CMO of Mozilla. Imo Udom, senior product VP here at Mozilla. Randima Fernando, co-founder and executive director of Center for Humane Tech. And Tristan Harris, co-founder of Center for Humane Tech. How's everybody doing today?

Lindsey Shepard:

Great.

Tristan Harris:

Good to be here.

Xavier Harding:

Nice. Perfect. All right. So, I want to kick us off with a pretty simple question for the people on this panel. Who cares about any of this stuff? Why should I care about this? People talk to me with a general apathy towards... People think, my data is already out there. Why should I care about any of this? What do you say to those people? Maybe we can start with Shep.

Lindsey Shepard:

Sure. Who cares? I care. I think a lot of people. A lot of people are listening right now that care. I think that this is getting more personal, frankly. We're not just talking about some abstract dataset here at this point. One reality is the overturning of Roe v. Wade. It's really showing us how much our ability to make things like healthcare decisions is under the microscope, how much personal data is being collected on people every single day, and what impact that lack of privacy can have on our freedom. It's already very real and it's happening right now with the most intimate personal information being collected and tracked. Honestly, while it does sometimes seem like we don't have control over this stuff, we don't have control over data collection, tools do exist that can help protect our privacy. And it's not that hard to switch to more ethical tech and products. It's safer. It's just as efficient as the other guys. You really aren't losing anything by supporting this sort of tech. So, I think a lot of people care. A lot of people should.

Xavier Harding:

Randima, when you hear that question, what comes to mind?

Randima Fernando:

With persuasive technology, so technology that interfaces with our minds and affects how we make sense of the world and how we make choices, there's actually a huge opportunity to do something good that every consumer would value. Nobody actually wants to use products that hijack their attention... that confuse them about what's real and what's not real so that they can't make good decisions... Products that harm their children. Nobody wants those things. So, I think there's a huge opportunity. People will recognize those products when they see them. They just aren't yet fully aware of what's possible because they're so used to using free products that are driven by the wrong things... Engagement optimization algorithms or advertising-based revenue models that basically lead our minds astray. And so when they see the right products, they'll know and they'll appreciate that. That's very different from a lot of other "doing the right thing" situations that come up in life, where doing the right thing is harder or less convenient, so I think there's a big opportunity there.

Xavier Harding:

One thing I was talking about with our producer Sarah earlier today was... She mentioned, before working for Mozilla, that she was kind of into advertising. She was kind of into seeing ads that she found relevant to her. And then what it made me think of was, when I use YouTube, I specifically go to YouTube because I want the algorithm to suggest videos to me. Tristan, maybe you can talk about how we square that circle of I kind of want suggestions, but I also want to use tech that is good for me. What comes to mind when you hear all that?

Tristan Harris:

One of the things that we're seeing as a cost of that personalization right now is, actually, a breakdown of our shared reality. I think that's actually one of the things that (silence).

Xavier Harding:

Are you able to hear Tristan? I don't know if we lost him.

Randima Fernando:

No, I'm not.

Xavier Harding:

Hey, Tristan. We lost you, unfortunately. Let's see if we can address this technical difficulty real quick. One second. And thanks for holding. Can the other panelists hear me okay?

Imo Udom:

Yes, we can.

Xavier Harding:

That's good. I feel bad. I think Tristan doesn't know that we can't hear him. We're going to assess these technical difficulties. Maybe, Imo, in the meantime, you can talk about suggestions and... I like seeing suggested ads, but how do we square that circle of having it be not unethical?

Imo Udom:

Yes, absolutely. I think one of the key challenges is around the incentive models. My perspective is that these things in and of themselves... ads in and of themselves, the technology in and of itself... are not necessarily inherently bad. I think what changes things or where these things get abused is the incentive model around how these organizations or companies want these things to grow. If we want to use these systems, if we want to continue to evolve from a technology perspective, I think one of the key things we need to do is attack or go into how to create a different paradigm or framework around aligning incentives with this technology. I'll use a paradigm that a lot of people use when working with sales organizations. People will do what they need to do based on how you incentivize the sales rep. Taking that approach of rethinking incentive models, I think, will be one thing that will help us use technology in a slightly different way.

Xavier Harding:

Yeah, I'm glad you mentioned that. That's a really good point because I think about, unfortunately, how the systems we've built... Sometimes it seems more lucrative to do the wrong thing. But I want to see if we got Tristan back. Are you there, Tristan?

Tristan Harris:

Yeah, I'm back.

Xavier Harding:

Hey.

Tristan Harris:

Ironically, it was Apple's Screen Time features that knocked me out of the Twitter app, which is-

Xavier Harding:

Oh, man. That is great.

Tristan Harris:

... ironically, one of the solutions to this problem of how do you help people fight the engagement maximizing economy. What I was just, quickly, saying was that the personalization of content and... You were asking about what happened with YouTube and the personalization. Don't we want this trade where they have as much data as possible so they can serve us? I think that's actually, in general, the delicate aspect of this conversation: we want things with asymmetric information about us to be in service to us. I think about a doctor or a lawyer or a financial advisor. You want to give them the most information about you, but only insofar as they have a fiduciary obligation to be 100% in service to you and there's a confidentiality arrangement.

I think one of the fundamental problems is that the engagement based economy is based on using as much of that information as possible to extract outcomes from you. To make money off of you, where you are the product and not the customer, as was repeated so many times in The Social Dilemma. And so I think that's actually the delicate issue here in that, if you think about it, how much information does a tech company have on us compared to what a doctor or a lawyer has? Imagine if a doctor or a lawyer wasn't trying to help you but was basically saying whoever pays me the most money will change the advice that I give the client. That's the perverse relationship that we're in with technology.

Xavier Harding:

That would be the worst doctor of all time. That would not be a good doctor at all.

Tristan Harris:

Not a very good doctor.

Xavier Harding:

No, not at all. I think a phrase I've heard you and Randima and this Twitter account throw around a lot is humane tech. Can someone explain what exactly is humane tech? What does that mean?

Tristan Harris:

Randy, do you want to go for it? I can say-

Randima Fernando:

Why don't you go for it?

Tristan Harris:

One of the things is obviously... This is a broad category. I think that one of the things that the phrase does is it does inspire the idea that technology can be humane and caring about the person and the society on the other end of the wire. Because this question is so big, we actually have this course that we launched called Foundations of Humane Technology with 11,000 people from Facebook and Apple and TikTok and companies and the United Nations, places like that, to try to help define this question because it is so challenging.

One of the things that we find in our work is that it's hard to define what's inhumane about the current system. We actually end up looking at the paradigmatic beliefs often held by the tech industry. If you think about what people often believe, it's like, we're giving users what they want. Every technology has some goods and some bads. We've always had moral panics about television, radio... Elvis was shaking his hips. Our job is to maximize personalized content. Technology is neutral. Who are we to choose what's good for people? We just maximize the collection of data, obsess over metrics, and grow engagement.

If you believe those things, that we're giving users what they want, every tech has goods and bads, and we've always had moral panics, then the fallout onto society of addiction, distraction, polarization, narcissism, influencer culture is a direct consequence of those fundamental beliefs that we're just giving people what they want. And so that's why, in our work, we try to focus on more of the paradigmatic beliefs of, instead of giving users what they want, how do we respect human vulnerabilities? Instead of saying technology is neutral, we have to say how is it actually actively shaping human behavior and how do we choose those values conscientiously?

Xavier Harding:

My next question is, on the other side of the coin, what exactly are the effects of inhumane tech? What does that look like in the real world? Anyone can take this. Shep, do you want to take this? Or does one of the Center for Humane Tech folks want to take this? Anything goes.

Lindsey Shepard:

Sure. I can kick it off. I think there are countless downstream effects. Frankly, I don't think we know what all of them are yet. That's going to keep showing up over time. I do think... We know that everything we do online is tracked and sold. A lot of times without our knowledge, to Tristan's point. And there is some upside to this. Finding the right videos on YouTube, publishers getting a platform, things like that. But we're also seeing things like... with the ad economy partnered up with this... the weaponization of polarizing content that creates incredible division. We've seen that play out over the past several years in a big way. But also at the expense of personal freedom and privacy and happiness. We did this Internet Health Report in 2019. Despite the fact that the ad industry is so huge, it's estimated that Facebook and Google alone controlled about 84% of the digital ad market. When you think about those incentives that Tristan is talking about, the product design really maximizes engagement. It maximizes time spent on those apps. And so really thinking about holding users' attention. The downstream impact of that, we all feel it. Tristan's phone just tried to protect him from it. Doomscrolling has become an unofficial hobby of this pandemic. There are these huge societal impacts and negative effects, but also, I think, we all feel it every day.

Xavier Harding:

Got it. I want to get some other folks in on this. Randy, for you... Inhumane tech, real world effects. What do you think of when you hear that?

Randima Fernando:

It's such a long list. We could spend all of our time on it. What I wanted to share is a resource for people who are interested. Ledger.humanetech.com. That's our Ledger of Harms. That has summaries of some of the most relevant research on each of these different harm areas. For example, when we talk about harms to kids. How the amount of time spent on social media significantly correlates with later levels of alcohol use. This is crazy stuff. When children see videos of child influencers holding unhealthy food, they consume more calories than those who see influencers holding other types of objects. These are terrifying effects. Not to mention choking challenges and all these other stupid challenges on Snapchat or TikTok.

We actually did an episode on our podcast... Our podcast is called Your Undivided Attention... with Jonathan Haidt on this topic. And there's so many more. With addiction and stress and loneliness. 30% of 18 to 44 year olds feel anxious if they haven't checked Facebook in the last two hours. This kind of stuff is just terrifying. We talked about that with Johann Hari on our podcast. Another one that a lot of your listeners might be really interested in is the number of teens encountering racist hate speech online has practically doubled in the last two years. This was back in 2020. 23% of 14 to 18 year olds often encounter racist content in 2020 compared to 12% in 2018. And almost 50% more teens report encountering sexist or homophobic material online. We covered that with Fadi Quran. That was a really excellent episode. I loved it.

And then, of course, fake news. Fake news spreads six times faster than real news. During the COVID pandemic... which we're still in, but in the beginning... four times more views were generated by the 10 most popular Facebook COVID misinformation sites compared to content from the 10 leading international health institutions. So, what does that do when we can't get information from reliable sources and the unreliable fake news sites are actually getting the majority of the traffic? We have a real problem. We talked about that with Renée DiResta on our podcast. So, just as examples. There is so much here. I just wanted to give a taste.

Xavier Harding:

Those are some chilling stats. Imo, how about you? When you hear inhumane tech, what comes to mind?

Imo Udom:

I think, Randy and Shep, you did a great job really speaking specifically about some of these topics. For me, one of the things that I believe is just... Technology has become so intertwined with our lives. Our lives are so digital. The difference between the online experience and life versus offline, it's all one. We're seeing the impact of online affecting us offline. As a result, when I think about the future of inhumane tech, it's tech that doesn't take the time... the companies, the teams, the people building, who don't take the time to try to look ahead at the unintended consequences. Just building that muscle. That you need to not just create the thing or the technology for its core purpose, but you need to look and spend time analyzing what could an unintended consequence be? I think that's how we start shifting from inhumane tech to humane tech. Yes, all the privacy and ethics upfront are great, but the behavioral patterns and the mental model of how we approach these things needs to evolve.

Xavier Harding:

I do want to talk about unintended consequences and intentionality when it comes to designing these products that we use every day. And I want to point this question towards Tristan. Are there any examples you can think of of intentional design... not just intentional design but intentionally positive design when it comes to tech products? Are there times it works and doesn't work?

Tristan Harris:

What comes to mind when thinking about that question is the belief that technology is neutral and how central that belief is. People say this all the time. I want to really say it's not true, but it appears true. It's like a persistent magic trick. It's a kind of catnip for the mind. It makes a lot of people really think over and over again that technology is neutral. Here's an example when you think about intentional design. If, on Twitter, you're getting an outrage feed of the most outrageous stuff that makes you angry every day. It's basically just the most efficient way to see all the cultural fault lines that have inflammation on them. You might say, well, that's just a neutral mirror. We're just showing people what they have already... The people that they chose to follow. They chose to follow those Twitter users.

But an example is... There are some engineers at Twitter. One day, they built this feature... Because they're designing for growth and engagement... which is the suggested users widget. Hey, you already follow these three users. Well, here's five more that you might want to follow. But the way that it selects that initial set of people for you to follow was based on, well, who tends to post a lot that gets people to engage a lot? It's already going to highlight the most extreme voices from one side or the other. And then, when you follow one set of extreme voices on one side, it says, by the way, here are three more voices just like that. So, here is an example of intentional design... rather, non-intentional design... where they're not thinking through the other consequences. The foundational mistake underneath that poor design decision was the belief that technology is neutral. Because we're just offering you a menu. But it's like a magician. Pick a card, any card, but I've already designed the upstream experience so that I know that, no matter what card you pick, it's going to drive you in this other direction.

Those are examples of not being intentional. I do think that there are ways of designing in a more intentional way. To do that, we have to really understand human vulnerabilities. How do human minds really work and how do they respond to things? That's one of the missing domain expertise areas that I think a lot of technologists don't have. We try to cover some of that in our course. There's more to say, but I'll stop there.
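To make the mechanism Tristan describes concrete, here is a minimal, purely illustrative sketch of an engagement-first "suggested users" ranker. This is not Twitter's actual code, and every account name and number below is hypothetical; the point is only that when the sole scoring signal is how much reaction an account generates, high-volume, high-provocation accounts rise to the top by construction.

```python
# Illustrative sketch only: a toy "suggested users" ranker that scores
# candidate accounts purely by past engagement, as described in the panel.
# All accounts and numbers are hypothetical, not real data or a real algorithm.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    posts_per_day: float          # how often the account posts
    engagements_per_post: float   # average likes/replies/reposts per post

def engagement_score(account: Account) -> float:
    # Engagement-first ranking: posting volume times reaction rate.
    # Nothing here asks whether the content is accurate or divisive.
    return account.posts_per_day * account.engagements_per_post

def suggest_users(candidates: list[Account], k: int = 5) -> list[Account]:
    # Recommend the k accounts that generate the most engagement.
    return sorted(candidates, key=engagement_score, reverse=True)[:k]

if __name__ == "__main__":
    candidates = [
        Account("@measured_policy_wonk", posts_per_day=2, engagements_per_post=40),
        Account("@outrage_pundit", posts_per_day=30, engagements_per_post=300),
        Account("@local_news_desk", posts_per_day=10, engagements_per_post=25),
    ]
    for account in suggest_users(candidates, k=2):
        print(account.handle)
    # The high-volume, high-reaction account wins every time, which is the
    # amplification effect the panel is describing.
```

Swapping that scoring function for one that also weighs accuracy, diversity of viewpoint, or the user's stated intent is the kind of intentional design choice the panelists are asking for.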

Xavier Harding:

What you talk about, it reminds me of when I first learned what a dark pattern was. Basically, the way tech products are designed so that tech companies can make you do what they want you to do as opposed to what you want to do. It's really fascinating. Like you said, it's not just a menu where I choose a burger with fries. It's kind of like a menu saying, "Hey, you should choose this burger and fries. The other stuff is kind of lame."

Tristan Harris:

And the confusing part is that you'll often say, oh, yeah, part of me does want the burger and fries. So, the manipulation. What's dark about that choice is, well, one part of me does want to follow those outrageous voices on Twitter or the burger and fries. Just like a magician, I look for the place where your choice intersects with the choice that I want you to make. It's not dark in the sense that you would've never done it. It's finding the overlap between the outcomes that I want and the part of you that I can exploit that gets you to go down the rabbit hole that I want you to go down.

Xavier Harding:

Yeah.

Randima Fernando:

The other piece that's often ignored is the dosage. Some of these things in a small dose... One dance video is fine. If people are dancing with a friend, right? It's a very different thing when that's what you spend the majority of your time doing now or your kids do now as a result of these choices and these algorithms. I think that's a huge factor that seems to always go missing in the discussion.

Xavier Harding:

I'm too old to really be on TikTok in a big way... I'm 32... but I hear stories about how people just get sucked in, watching TikTok for hours, because dance video, dance video, dance video. You're just constantly watching... What Randima is saying is don't you dare send him over five dance videos because he's not having any of it.

Randima Fernando:

Thanks.

Xavier Harding:

I want to kick it now to Shep because something I think I've heard you talk about before is companies that make products like this and how they make money... Sometimes it feels like the most lucrative companies are the ones doing the wrong things. They're tracking, they're surveilling, they're collecting our data. Is there an argument to be made that doing the right thing can be profitable, too?

Lindsey Shepard:

Yeah. Of course. Absolutely. It has to be. That's what we're learning. I think what we're seeing right now with the success of things like The Social Dilemma and these conversations happening more often is that people are actually starting to care about this stuff. A handful of years ago, when folks would talk about data privacy, those concerns were really mostly centered on information like their credit card number or social security number. Things like that. But we're starting to see that folks recognize that it's actually a bigger problem than that. With more people caring and more consumers caring, we're starting to see an alignment of incentives. If bad behavior or these bad patterns are bad for business, then we're going to see people start to react in kind.

And we are starting to see that. At least the awareness of it. I think Tristan talked about the magic trick. When people are recognizing that these things are happening, it's not good for business. We're seeing a lot of the big tech companies really respond to this. Hundreds of millions of dollars are being spent to advertise the fact that these companies really care about your privacy. Obviously, a lot of that at this point is likely just spin, but we are starting to see some real progress here. That's very true. A good example of that is the work that Apple did to allow users to block cross app tracking. Ideally, that would be on by default, but having the option there is really important. We're also seeing the ad industry, for example, moving away from third party cookies. This is a big deal for consumers. It's a really big deal. That's something that we've advocated for for a long time at Mozilla. CHT has, too. But the question, I think, becomes what fills that gap? We really want to avoid the replacement being worse than the original. That's why it's critical that organizations that are thinking intentionally about this stuff are building their own solutions and engaging with Google and other big industry players to help define what the future of privacy preserving tech and advertising really looks like.

For me, as a marketer, this is an obsession. How can we start to prove that efficiency and impact aren't at odds with privacy? It's really exciting. Things like looking at the ad tech that we have to use. Attribution tooling, for example. In the marketing that we do at Mozilla, we've built our own bespoke solutions to that so that you can get the information you need as a business to do the type of advertising you need to grow without screwing people over. Making those investments and having those conversations is really critical. But I think the future of tech, the future of advertising, is more private. It is a place where users have more control over their data. It's got to be good for business to follow that trend.

Xavier Harding:

I think a lot of what you mentioned kind of leads into my next question, which is... It's kind of a tough question, but what are the folks on this panel doing to fix it? What are the solutions that Mozilla, Center for Humane Tech... What are we doing to fix it? Maybe, Randima, you could take that for Center for Humane Tech.

Randima Fernando:

First, I think this point about incentives. It's always illuminating. When you follow the incentives, you learn a lot about why people are doing what they do. We have to look at all of this holistically. Within the constraints of a capitalist economic system, at least in the near-term future. We have to look at what are the levers within there? They always relate to price and cost and demand and supply. This is what we have to think about. There's a few things. One is general cultural awareness. For example, when Social Dilemma was seen by 100 million plus people, that supports a lot of other action because there's this awareness. I think Shep talked about this as well. Once people know, they have a sense that there's something wrong and they're more educated about what's wrong, that changes their buying patterns. The demand side.

Legislation changes the rules. Everyone is always competing within the rule set that exists. They're going to push as close to the boundaries of those rules as they possibly can. Because that's how you make the most money. You're just trying to win a game. Certain kinds of legislation... When it's done right, you change the rules so that everyone can actually compete in the ways that they might even agree are the right ways to compete. But when the incentives aren't aligned, you can't expect people to just do the moral thing while leaving a whole bunch of money on the table.

Another part of this is litigation. Litigation is where the costs of harms that are thrown onto society actually get internalized. They get thrown back onto the company's balance sheet. We need that to happen far more often. This is the general playbook. You build a platform that you claim is neutral and it consistently throws harms, all kinds of harms, a long tail of harms, onto society's balance sheet. And then you say, oh, I'd love anyone's help in figuring out how to handle these. Anything except legislation and litigation. Those are the things that need to be internalized back into the company's balance sheets and litigation is a great way to do it. Litigation also sets a standard for any investors to keep an eye on. To say, oh, not just this company, but every other company in the class won't survive this kind of litigation.

Pressure from concerned employees. Retention and acquisition is one of the biggest factors for a company. As an example, Facebook had to triple their incoming salaries as a result of more people understanding and more employees being aware of these... not wanting to work at companies doing harm. The last piece is this idea of design. More humane design. Training in what the principles are... This is why we built our course. And inspiration. What are exemplars of people doing it well? Tristan, I think you wanted to add something.

Tristan Harris:

I just wanted to add one quick thing, which is that, when we talk about externalities and... People are used to the idea that an oil company might create emissions and that's an externality. An oil company might have an oil spill. That's an externality. What's so unique and what I want people to get from the kind of... especially the polarization externalities that come from personalizing the most engaging stuff across all these platforms... is that an oil spill didn't decrease the coherence of the government to regulate oil spills. An oil spill just causes people to be angry and then say we want to regulate oil companies to not have as many accidents. But when technology creates a polarization spill and a breakdown of shared reality and gets people angry for a constantly different set of reasons, it actually breaks the ability of the government to regulate. Look at how the current debate is turning into a free speech versus censorship debate, which is not actually what the debate is. It's about changing the core incentives around engagement. I just wanted to add that quick point.

Xavier Harding:

That's a really, really good point. Imo, maybe you can tell us about the Mozilla side of things.

Imo Udom:

From a Mozilla perspective, the one thing I would say upfront is we're very fortunate at Mozilla. We're unlike almost any other tech company out there. We're owned by a foundation, a non-profit, rather than corporate shareholders. Really, this allows us to commit to the Mozilla mission of building a more healthy, private, and open internet in a way that other organizations aren't able to do. And so, for us, we take a very holistic approach to this. It's a combination of the advocacy work we do, oftentimes led by our foundation. That can look like some of what Randy mentioned. Laws, advocating for different things, recourse that might be important. In other cases, that looks like things like our Privacy Not Included guide. It's educational. Here's what an opportunity looks like. Here are products and services that are doing things a little bit differently. Here's what you might not know about products and services that you have.

That isn't enough. Shep mentioned this as well. We have to show what real products look like that offer a different way. We have things... Everyone knows about Firefox and some of the leading aspects of how we build Firefox. We focus on privacy. New features and technology like Total Cookie Protection. But we also try to go to the cutting edge. Efforts like Common Voice, where we're working to include different voice types, not just the traditional Anglo-Saxon voice pattern, for all the technology we use. These are things that we're doing. Combining advocacy work with educational work with product development. And then, lastly, the key piece... which is a lot of the reason why I'm at Mozilla... is taking an ecosystem approach to this. It's not good enough to just stand on a soapbox. We're not going to change the world ourselves. We're not just pointing fingers at people. We're inviting others into the conversation and collaborating with those that can help make a difference. Those four things are how we approach this at Mozilla.

Xavier Harding:

You make a good point. I mean, you're right. Not a lot of companies making tech products have that nonprofit background. In a way, Mozilla is not beholden to just more profits, more profits, more profits. There are a lot of donations involved. Another question I want to ask that's kind of its own hard question... There was talk before of The Social Dilemma. Millions of people watched that documentary. Many people also pointed out that that documentary didn't always have diverse voices as part of it. How do we keep this movement intersectional? How do we make sure that there aren't any oversights when it comes to the work that we do in making tech ethical? How do we keep this diverse? Who wants to take this one?

Randima Fernando:

I'm happy to start.

Xavier Harding:

Go ahead, Randy.

Randima Fernando:

Just a few thoughts. There's so much to this question. One is, obviously, inviting more people into the conversation. This is what I've been really happy to see with our course. The breakdown by nationality. 120 countries are represented. Only 35% in the US. 65% international. I think this is a big part of expanding the conversation. Anywhere we have those opportunities, we have to do that. This event is another example of doing that.

Connecting tech companies with people who have been harmed directly. People need to come into the companies... And we've done this many times. Connecting engineers and designers and decision makers with parents who have lost children. People who've lost their families due to misinformation. Journalists who have been harassed. I really want to thank... The folks at [inaudible 00:37:43] have done an amazing job of gathering these kinds of people who have been harmed and trying to connect them with technology folks who need to hear from them. Good feedback mechanisms when you build products. Being really interested in finding the feedback mechanisms.

But also being honest about the systemic nature of the harms you generate. Right now, Sri Lanka is in crisis. My parents are looking at the news on YouTube. And there's no way YouTube can figure out what's right and wrong there; there's so much misinformation. This happens in every developing country. Understanding that we need to address the systemic part of how those kinds of harms get generated instead of trying to track down every possible permutation in a world with trillions of videos.

Xavier Harding:

Definitely. That's a really good point. Imo, how about you? How do we keep this ethical tech movement... How do we make sure we're cutting across racial lines, background, ages, including all types of people? How do we do that?

Imo Udom:

Correct. There are three things I would add to what Randy said. Randy hit the high points. The high-level, really important aspects of changing the system. I'll speak to the product folks for a second. I think the three things that product teams, organizations who are building product teams, can do... There's the aspect of the actual diversity of the team itself. Think about the diversity as people with different lived experiences. Because they bring those lived experiences to the products that they build and that can help inform or influence what potential outcomes might look like. Randy actually touched on this as well. Beyond building diverse teams to build the tech, it's actually looking intentionally to broaden your user surveys. Yes, we're building for specific audiences and user types, but how do we make sure that we don't just go down the easy path when we're trying to get that feedback? I would take it a step further to look for diverse panels of users to provide feedback. That may not work for every single product or service, but if you're building something to scale for everyone, you should take the additional time and effort to go deeper and understand how it is impacting those people.

And then I think, lastly, what I would offer that product organizations can do a bit differently is, again, think about the value add. There's a concept that I espouse about value exchange. The value exchange online and on the web needs to be a bit different. I think some dedicated time and effort needs to be spent on individuals where, for the initial customer or user, the value exchange may not be explicitly in your direction, which is what most organizations want... We get it. How can you be more comfortable with a value exchange where, actually, the individual or the user themselves are getting more? If you think a little bit differently, that's how you might create products for underserved populations and individuals as well.

Xavier Harding:

I think about this a lot just because, even aside from underserved populations, even aside from race and sex and all the different groups that there are, there's also just a knowledge gap between the types of people affected by this versus the folks who know what's even going on beneath the surface. Everyone's affected by this, but not everybody's paying attention to these issues. I'm constantly trying to think about how we make sure everyone knows what's going on. Shep, do you want to add something on this topic, too?

Lindsey Shepard:

Yeah, I do. Imo made this point, but we can't forget that a lot of the center of this is that the products we build reflect the people that build them. That's just the facts. In tech, we have an enormous issue with diversity. We've known this for a long time. I think the gender gap is the most stark example of this. A very small percentage of engineers out in the world are women. This is a systemic problem and it starts at a really young age and there's been lots of research done on this, but just recognizing that whoever builds a product is... Regardless of how much you attempt to get feedback and a global understanding, it represents the lived experience of that person. And we know what that feels like. I mean, I have often said that I wonder what it would be like if women had designed seat belts. Most women could probably completely understand what I'm saying. Everything that's built is going to reflect the team that builds it. We need to be more serious and more focused on closing that gap.

Xavier Harding:

I really like that analogy of the seat belts and what if women designed them. You probably see that in a lot of places. What if there were more women designers building these things? We have an audience question that we received on Twitter. I don't know if we want to stay on this topic or go to that question. But it deals with YouTube. I know Tristan has said a lot about YouTube in the past. I'll read it and you can let me know if it resonates with you. This is from JakeSpeaks on Twitter. "How do YouTubers contribute to the attention-driven economy? Because one individual can quit all social media apps but not necessarily YouTube because legacy media is already biased and emerging media houses which are unbiased, as they claim, can't break into TV cable." I'm not quite sure I understand the question, but I think what he's trying to say is just how hard it is to quit YouTube and how integrated it is into our culture. Tristan, maybe you can talk a little bit about that.

Tristan Harris:

This is actually what makes the current environment, we'd say, inhumane: we're forced to use things that are not actually designed in our interest. Which is to say, if you've watched The Social Dilemma and you're like, whoa, I'm a YouTube creator. I don't want to be posting to an environment that's maximizing engagement... Or if you're a politician and you watched The Social Dilemma. You're like, hey, I don't want to be on Twitter or Facebook and participating in that whole thing... You're just going to lose to the people who do. Because that is the platform. The deepest thing with these platforms is that they have colonized the meaning of social participation in society. Often, parents actually get this wrong with their kids. Frances Haugen, the Facebook whistleblower, spoke about this. Parents often give their kids bad advice about being addicted to social media. They'll say things like, "Oh, honey. Just use it less or just turn it off or just delete Instagram." What they don't understand is that Instagram isn't just the explore tab where you're scrolling all the videos and photos. It's actually the primary way that you communicate with your friends. You use Instagram Messenger. That's like telling one of us... I'm assuming based on our ages here... to not use text messaging. Oh, just don't use text messaging. That's basically saying exclude yourself from social participation in society.

I think the deepest thing that's wrong with these platforms is that, if their business models are not aligned with serving the society that they have colonized, it doesn't leave us with good choices about how to engage. Because you either... And we get this critique all the time. We're known for the addiction and Center for Humane Tech issues, but we have to participate on Twitter to advertise this panel, for example. People would say, "Oh, you're a hypocrite. You shouldn't even be using Twitter." And so we use it pretty minimally, but I just want to name this as the kind of inherent tension with all these platforms. This is why interoperability and alternatives are going to be so needed to help get us off these systems.

Xavier Harding:

That's a really good point. One thing I want to make sure to ask is... We all know that hindsight is 20/20. There's a lot that we know about the web now that we probably couldn't have predicted. But also could we have predicted it? What do you all think? Did a lot of us know this was going to happen and folks did it anyway because it's profitable? Or was this actually not foreseeable and a total surprise? What do we think? Should we have expected this?

Lindsey Shepard:

I'll take this one. Hindsight is 20/20. There's a couple of things that have happened and it's like, well, duh. Should've seen this coming. There are two things that are always really top of mind for me. One is that when we think about... Again, we're talking about incentives all the time. When we think about the incentives of these platforms, they're really optimized for engagement and clicks and sharing. We should've seen it coming. The horrible impact this would have on journalism. When we think about rich, deep, investigative pieces that take a lot of time and don't necessarily come with these flashy, sexy headlines that reflect the way people feel and the anger and the rage... When we think about real, deep journalism, it's hard to see how that type of work really thrives and survives in an ecosystem where we're looking for clicks over quality. We should've seen it coming and it's a big problem. Trust in journalists and journalism is plummeting. I think someone on this panel mentioned real-world harms that these folks are facing and I think that's an obvious effect of what's gone on in tech and on social media in general.

But I also think... For more of a geeky, disciplinary perspective, when I look at advertisers and marketers and professionals in this space, the fact that we have access to so much data leads to a reliance on so much data. When we think about the game that we're playing and the rules that surround that game, when the game means, hey, I can know what kind of toilet paper someone uses in order to get them to buy a t-shirt, that's just weird. As marketers, we shouldn't need to know that, but we've become so reliant on this glut of data that just makes it so easy. We should've seen that coming. The incentives around that are just too obvious.

Xavier Harding:

That reminds me of a point that Randima made earlier about how... You describing that sounds scummy, but also, the rules are set up and the legislation is set up in a way that that's in bounds and fair game. So people just do it. You see publications writing stories not because it's good to publish but because it will go viral and it'll get clicks. You make a really good point. I think I want to start to close this out. I want to go to our final question. Where do we go from here? What do we need to start doing to make sure that the only apps and tech products and all the services that we use... the ones that do well aren't just the scummy ones and aren't just the ones trying to get us to click and buy and scroll and scroll and scroll? Where do we go from here? I'll start with Tristan. Aha. Surprise question.

Tristan Harris:

To this last point that was mentioned, people are going to play the game that you set out for them to play. I think we wish people were more ethical and responsible and all that, but the responsible tribes and the ethical tribes just get out-competed by the tribes that... The sustainable tribes get killed by the warlike extractive tribes that accumulate resources faster. The ethical, responsible tech companies get killed by the VC-backed, extract-at-all-costs companies. The game theory... what we call a multipolar trap: if I don't do it, the other guy will, a perverse incentive... dominates in this space. It's actually one of the hard things. You can't expect a good faith actor to out-compete these extractive actors except if you change the game so that it's a more fair fight. It's like fighting in your same weight class. Right now, we have to recognize that there's this weight class difference between the guys that are trying to do it in a better way and a humane way and an ethical way, who are out-competed by the guys that are jacked up on a 10X VC capital investment. That are jacked up on getting to manipulate human emotions.

We go back to E.O. Wilson's fundamental problem statement. That the fundamental problem of humanity is we have Paleolithic brains and emotions, medieval institutions, and god-like technology. And so when we talk about where we go from here, we have to embrace the fact that we have Paleolithic emotions. We have to upgrade our medieval institutions to not be lagging behind the speed of the 21st century god-like tech, which we have to have the wisdom and responsibility to wield. You can't have god-like powers without the love, prudence, and responsibility of gods. Right now, we don't have that, so that's kind of what this whole conversation is, I think, about.

Xavier Harding:

That's quite the quote. I definitely do not deserve god-like powers. I don't know if anyone on this panel does. We should not have some of this stuff. Randima, where do we go from here?

Randima Fernando:

Not much to add to that and to the previous discussion already about the different factors that need to all come into play. Between the awareness and the legislation and litigation. All that stuff, it's a lot of pieces that have to come together. That's why this is a hard problem. But there's a path. That's why we're here.

Xavier Harding:

Imo, how do we make sure the only apps that survive aren't just the ones that are terrible for my privacy? Where do we go from here?

Imo Udom:

I think there are two things I'll add to what has been said so far. One is, when we frame out the challenges as we've done and laid out some of the key pieces that need to be there, it can seem overwhelming and that can cause people to do nothing. The first thing we need to do is take a baby step. If all you can take is a baby step, take that baby step. That needs to come from individuals who are building. It needs to come from organizational leaders. It needs to come from the consumer, the users, the people who are taking advantage of these things. If each one of us takes a baby step, we start that ripple.

The second thing that I believe we need as we move forward is for organizations like us to work in collaboration and partnership. Things like this. We may not be able to do it alone. Yes, we all want change. How do we lean into each other? How do we work more in the open? How do we create artifacts that we can all build off of? Thinking about the generative power of us putting things out into the world that are positive and then trying to collaborate on the change. I think that's how we can move from here.

Xavier Harding:

I love that quote. If you can take that baby step, take that baby step. Lindsey Shepard, final word. Where do we go from here?

Lindsey Shepard:

So much great content from this group already. I think, critically, there just needs to be a mindset shift. We've got to start thinking about this differently. We need to start incentivizing these companies to recognize that maximizing profits can't be the only end goal. That we need to be intentional and be thinking about this differently. As consumers, we can make choices to start to create those incentives. We can choose products that are more humane. We can choose privacy preserving solutions. But as consumers, as advertisers, product managers, leaders, we need to encourage tech companies to be more generous instead of extractive, to Tristan's point, or exploitative. But I think the truth is there's no silver bullet here. We would've done it already if there was. It's not up to any one of these groups to change things. It's going to take small, incremental baby steps from all of us to drive towards it.

Xavier Harding:

I hope one day we can figure out that silver bullet, if it exists. If it's out there. All right. Well, that has been our show. Thanks, everyone, for tuning in today. Thanks to Center for Humane Tech for working with us at Mozilla to bring you all this panel. You can follow the panelists by tapping on their faces on Twitter. It's on your screen right now. Just tap them. Hit follow. Special thanks to Sarah Vasquez, Melissa Thermidor, Jim Beard, Camille Carlton, Maria Bridge, and Daniel Kessler for helping produce this segment. I'm Xavier Harding. Make sure to follow Mozilla on Twitter as well as Center for Humane Tech on Twitter, Instagram, TikTok, all the places, for more panels and content like this. And yeah. Take care, everybody. Thanks for talking with me today.

Lindsey Shepard:

Thanks so much!

Randima Fernando:

Thanks, everybody.

Xavier Harding:

All right. Bye, everyone.

Published on July 29, 2022
