For Technologists

Together, we can change how technology is built.

What We Do

We work directly with technologists to create a new definition of success: one that honors human nature, grows responsibly, and helps us live lives aligned with our deepest values.

Principles of Humane Technology

Before You Dive In

Before you dive into the principles, it is helpful to understand the context that drives our work. Tech culture needs an upgrade. To enter a world where all technology is humane, we need to replace old assumptions with a deeper understanding of how to add value to people’s lives.

🤲🏼 Technology is Never Neutral
We are constructing the social world

Some technologists believe that technology is neutral. But in truth, it never is, for three reasons.

First, our values and assumptions are baked into what we build. Anytime you put content or interface choices in front of a user, you are influencing them, whether by selecting a default, choosing what content is shown and in what order, or providing a recommendation. Since it is impossible to present all available choices with equal priority, what you choose to emphasize is an expression of your values.
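As a minimal illustration of this first point, consider two hypothetical feed-ranking functions: one orders content by predicted engagement, the other by topics the user has explicitly said they care about. The types and names below are illustrative assumptions, not any real product’s API; the point is simply that the comparator you choose is itself a value judgment.

```typescript
// Hypothetical feed item; the fields are illustrative assumptions.
interface FeedItem {
  id: string;
  predictedWatchMinutes: number;   // model estimate of how long it will hold attention
  matchesStatedInterests: boolean; // did the user explicitly ask for this topic?
}

// Value judgment #1: emphasize whatever best captures attention.
function rankByEngagement(items: FeedItem[]): FeedItem[] {
  return [...items].sort((a, b) => b.predictedWatchMinutes - a.predictedWatchMinutes);
}

// Value judgment #2: emphasize what the user explicitly said they value.
function rankByStatedInterest(items: FeedItem[]): FeedItem[] {
  return [...items].sort(
    (a, b) => Number(b.matchesStatedInterests) - Number(a.matchesStatedInterests)
  );
}

// Neither ordering is "neutral": each comparator encodes what the product team
// chose to prioritize on the user's behalf.
```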

Second, just as our values and assumptions are baked into what we build, the values and assumptions of the world shape the effects of new technology, regardless of the inventor's intentions. Economic pressures (for example, the pressure to grow sales for shareholders) or social dynamics (for example, one ethnic group wielding powerful tech against a marginalized ethnic group) can have profound unintended consequences. Most often, the result is a widening of inequity in the world.

The third way technology is not neutral is that every single interaction a person has, whether with people or products, changes them. Even a hammer, which seems like a neutral tool, makes our arm stronger when we use it. Just like real-world architecture and urban planning influence how people feel and interact, digital technology shapes us online. For example, a social media environment of likes, comments, and shares shapes what we choose to post, and reactions to our content shape how we feel about what we posted.

Neutrality is a myth. Humanity’s current and future crises need your hands on the steering wheel.

🧠 Seeing in Terms of Human Vulnerabilities
The human brain is inherently vulnerable

To see the full implications of technology being values-laden, we must consider the vulnerabilities of the human brain. Many books have been written about the myriad cognitive biases evolution has left us with, and our tendency to overestimate our agency over them (see Resources). To quickly understand this, think of the last time you watched one more YouTube video than you had intended. YouTube’s recommendation algorithm is expert at figuring out what makes you keep watching—it doesn’t care what you intend to do with the next minutes of your life, let alone help you honor that intention. 

Simple engagement metrics like watch time or clicks often fail to reveal a user’s true intent because of our many cognitive biases. When you ignore these biases, or optimize for engagement by taking advantage of them, a cascade of harms emerges. 

Confirmation bias causes us to engage more with content that supports our views, leading to filter bubbles and the proliferation of fake news. Present bias, which prioritizes short-term gains, leads us to binge-watch as self-medication when we’re stressed instead of addressing the source of our stress. The need for social acceptance drives us to adopt toxic behavior we see others using in an online group, even when we would not normally behave that way. 

Aggressively optimizing for engagement metrics is like taking your hand off the steering wheel. It puts users’ paleolithic, inherently vulnerable brains in charge of determining what is valuable for your product. This approach, combined with the latest machine learning and A/B testing techniques, results in a broad set of harms unleashed at scale, which we call human downgrading.

🛤 Shifting Product Culture
Culture change is necessary and hard

Our vision is to replace the current harmful assumptions that shape product development culture with a new mindset that will generate humane technology. Integrating this new paradigm will require process changes, as well as time, resources, and energy, within the product organization and beyond.

We realize systemic cultural change is never easy and faces many opposing forces. Please reach out if you have ideas for how to help move this change forward or specific requests that you think CHT may be positioned to fulfill.

🛒 Creating Market Conditions for Humane Technology
This paradigm shift requires a marketplace that rewards humane technology

CHT and many other organizations are creating these conditions through a combination of pressures from the media, parents, kids, regulation, investors, shareholders, tech employees like you, and more.

For more on this, read about Our Work.

The Principles

This new paradigm is for technologists who accept that technology is increasingly shaping our social fabric and want to apply their exceptional skills to realign technology with humanity.

✨ Obsess over Values
Instead of obsessing over engagement metrics

When you obsess over engagement metrics, you will fall into the trap of assuming you are giving people what they want, when you may actually be preying on inherent vulnerabilities. Outrageous headlines make us click even when we know we should be doing something else. Seeing someone has more followers than we do makes us feel inferior. Knowing our friends are together without us makes us feel left out. And false information, once we believe it, is very hard to displace.

Instead, you can be values-driven while still being informed by metrics. You can spend your time thinking about the specific values (e.g., health, well-being, connection, productivity, fun, creativity…) you intend to create with your product or feature. Those values can be a source of inspiration and prioritization. You can measure your success directly by investing in mechanisms of understanding that match the complexity of what you value, such as qualitative research and outside expertise.


Questions to ask yourselves:

  • What are the core values of your product? If you don’t have any yet, look to your vision for inspiration.
  • If you were to center your design process around your product’s values, how might your features and processes be different?
  • What kind of attitude shifts or behavior change might this product create? Do those changes align with your core values? 
  • What is a direct measure of success that you can use instead of relying on clicks/time spent/daily active users?
  • How can you prioritize features using values as the criteria?
  • How can you deepen your understanding of the values you have chosen for your product? Is there academic research you can take advantage of?
  • Where is your product seemingly “giving users what they want”? What cognitive biases might be underneath?

🌱 Strengthen Existing Brilliance
Instead of assuming more technology is always the answer

Not everything needs an upgrade. Under the right conditions, humans are highly capable of accomplishing goals, connecting with others, having fun, and doing many other things technology seeks to help with. Technology can give space for that brilliance to thrive, or it can displace and atrophy it. In each design choice, you can support the conditions in which brilliance naturally occurs.

For example, Living Room Conversations was created with the understanding that when people find similarities with each other and connect as human beings, they can more easily find common ground and shared perspective. Another example is online group technology like MeetUp, which encourages in-person get-togethers to deepen connections.


Questions to ask yourselves:

  • What inner capacities or resources make people feel well? How can technology strengthen those capacities without taking over people’s lives?
  • For the values that you have chosen as a product team (e.g. connection, community, opportunity, or understanding), where in the real world can you find inspiration?
  • How might technology help us overcome some of the greatest divides in our society (inequality, polarization, etc.)?

🤝 Make the Invisible Visceral
Instead of assuming harms are edge cases

Ideally your organization would clearly understand the harms it creates and would perfectly incentivize mitigating them. In practice these harms are complex, shifting, and difficult to understand. Because of this, it is important to build a visceral, empathetic connection between product teams and the users they serve.

While many of today’s best practices use personas, focus groups, or “jobs to be done” to gain empathy for the user, humane technology requires that you internalize the pain your users experience, as if it were your own. Imagine the following scenarios:

  • Your partner must be on board for the first flight of the plane you designed.
  • Your mother ignores public health recommendations because of videos recommended by an algorithm you designed.
  • Your middle-schooler is the subject of bullying on your social media app.

This mindset leads to a drive for deeper understanding and caution. It’s a mindset that your decision-makers and product team must share, for the many people impacted by your work: your users, the people around them (friends, family, colleagues, etc.), different socioeconomic populations (age, income level, disabilities, cultures, etc.), and so on.


Questions to ask yourselves:

  • Who are the stakeholders of your app or service beyond your existing users?
  • Who using your app or service is being negatively impacted? What do you know about them?
  • How can you give people with direct experience in the communities you are serving power in your decision-making process?
  • If your solution is global, how can you understand how the user experience differs in other countries?
  • What outside expertise can you bring in to help you with your chosen problem space?

🧘 Enable Wise Choices
Instead of assuming more choice is always better

As the world becomes increasingly complex and unpredictable, our capacity to understand our emerging reality and make meaningful choices can quickly become overwhelmed. As a technologist, you can help people make choices in ways that are informed, thoughtful, and aligned with their values as well as the fragile social and environmental systems they inhabit.

For example, when presenting new information, appropriate framing can help people make good decisions. The same information lands differently when framed in a more relatable context. Hearing that Covid-19 has a 1% case fatality rate might not mean much to you. But hearing that Covid-19 is several times deadlier than the flu helps anyone immediately understand it in relation to something they already know. When people are presented with information in an intuitive way, they are empowered to make wise choices.
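As a sketch of how framing might show up in code (the helper below is hypothetical, and no specific fatality figures are assumed), the same rate can be surfaced either as a bare percentage or relative to a baseline the reader already understands:

```typescript
// Hypothetical framing helpers; no real epidemiological values are hard-coded.
function bareFrame(rate: number): string {
  // e.g. a rate of 0.01 renders as "1.0% case fatality rate"
  return `${(rate * 100).toFixed(1)}% case fatality rate`;
}

function relativeFrame(rate: number, baselineRate: number, baselineName: string): string {
  // A ratio against a familiar baseline is usually easier to calibrate
  // than a bare percentage.
  const ratio = rate / baselineRate;
  return `about ${ratio.toFixed(1)} times as deadly as ${baselineName}`;
}
```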


Questions to ask yourselves:

  • What choices are you offering your users? What framing are you using? How might a different frame lead users to make different choices? 
  • How can information be presented in a way that promotes social solidarity? 
  • How can we help people balance taking care of themselves with staying informed?
  • How can we help people efficiently find the support they need?

♻️ Nurture Mindfulness
Instead of vying for attention

In a world where apps are competing constantly for our attention, our mindfulness is under attack. Mindfulness is being aware, in a calm and balanced way, of what’s happening in our minds, in our bodies, and around us. Mindfulness allows us to act with intention and to avoid a life that becomes a series of automatic actions and reactions, often based on fear and a scarcity mindset. But like any other capacity, mindfulness can be developed. You can help your users regain and increase their capacity for awareness, rather than racing to win more of their attention.

For example, a mail application that by default makes a sound and puts up a notification when mail is received could instead have the user opt in to turn notifications on. Another example is the Apple Watch Breathe app, which supports people in periodically taking a moment to simply focus on their breath.
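As a minimal sketch of the mail example (the preference names and defaults below are assumptions for illustration, not any real mail client’s settings), the humane design decision is largely about which default you ship:

```typescript
// Hypothetical notification preferences for a mail client; names are illustrative.
interface NotificationPrefs {
  playSound: boolean;
  showBanner: boolean;
}

// Attention-seeking default: interrupts everyone who never visits settings.
const interruptByDefault: NotificationPrefs = { playSound: true, showBanner: true };

// Mindful default: quiet until the user explicitly opts in.
const quietByDefault: NotificationPrefs = { playSound: false, showBanner: false };

// An explicit user choice always wins; the default only decides what happens
// for the many users who never open the settings screen.
function effectivePrefs(userChoice: Partial<NotificationPrefs>): NotificationPrefs {
  return { ...quietByDefault, ...userChoice };
}
```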


Questions to ask yourselves:

  • How can technology help increase capacities for concentration, clarity, and equanimity?
  • How can technology foster a sense of agency and community?
  • When are you vying for the user’s attention for the benefit of your product rather than for their benefit?

⚖️ Bind Growth with Responsibility
Instead of simply maximizing growth

Today’s technology has increasingly asymmetric power over the humans who use it. Machine learning, micro-targeting, recommendation engines, and deep fakes are all examples of technologies that dramatically increase the opportunity for creating harm, especially at scale. To mitigate this, you can invest in understanding the delicate cognitive, social, economic, and ecological systems that your technology operates in, what harms your product may be generating, and ways to mitigate those harms.

For example, a product originally intended only for adults will almost inevitably be exposed to children as it scales; a platform originally intended for entertainment can become a target for disinformation; and a product that mostly benefits people in an industrialized country may mostly produce harms in a third world country. What feels like a remote possibility when you first launch becomes a guarantee as you scale to millions of people. 

And even “good” things, when done to an extreme, will have unintended consequences. At first glance, Likes seem like a great signal about what the user wants to see more of. But at scale, they end up creating filter bubbles and fragmenting shared truth.


Questions to ask yourselves:

  • How can you anticipate harms created by scale? What systems and processes will help you understand the impact of your product on its users and their larger social, economic, and ecological systems (for example, testing the product with a broad stakeholder set)? 
  • A/B testing is often compelling because it provides quick feedback to product decision-makers. Can you find similar solutions for tracking harms created by scale? 
  • If you are a platform for user-generated content, how can you avoid the mistakes made by previous platforms that led to widespread disinformation and other toxic content?
  • If you are wildly successful creating the value you are trying to create, what is the failure mode that comes along with that? How can you safeguard against that? How can you find the ones you may have missed?
  • At the end of every episode of Sesame Street, Elmo encourages kids to get up and do a dance so that they won’t watch another episode. How can you provide stopping cues so that users don’t overuse your product in a way that undermines what you value?

Online Course

We're developing an online course to help technologists become stronger advocates and implementors of humane technology. Sign up here to be notified when the course is available.

Humane Design Worksheet

Podcast: Your Undivided Attention

CHT Co-Founders Tristan Harris and Aza Raskin explore what it means to become sophisticated about human nature and why we must shift toward humane technologies. We’ve highlighted a few episodes below with particularly relevant insights for technologists.

18 – The Stubborn Optimist’s Guide to Saving the Planet

How can we feel empowered to take on global threats? The battle begins in our heads, argues Christiana Figueres. She became the United Nations’ top climate official after she had watched the 2009 Copenhagen climate summit collapse “in blood, in screams, in tears.”

May 21, 2020

7 – Pardon the Interruptions

Every 40 seconds, our attention breaks. It takes an act of extreme self-awareness to even notice. That’s why Gloria Mark, a professor in the Department of Informatics at the University of California, Irvine, started measuring the attention spans of office workers with scientific precision.

August 14, 2019

4 – Down the Rabbit Hole by Design

When we press play on a YouTube video, we set in motion an algorithm that taps all available data to find the next video that keeps us glued to the screen.

July 10, 2019

Discuss Humane Technology

Join A Virtual Conversation

We regularly host virtual conversations to help technologists build companies and products with a radically healthier relationship to human nature. Using the lens of our Principles of Humane Technology, these interactive, hour-long discussions bring together practitioners working to create humane technology to share challenges and learnings from their work. A schedule of upcoming conversations is below.

Resources:

Values Sensitive Design

How to put values at the center of your design process. Also a great card deck to help with red-teaming (the practice of proactively uncovering harms prior to delivering a product or feature).

Nudge

Guidance for how to Enable Wise Choices. Introduced the concept of “choice architecture” (additional support for the fact that technology is never neutral).

Predictably Irrational

A look into how we make decisions and the field of behavioral economics for additional understanding of cognitive biases. Also check out the author’s company, Irrational Labs.

Cognitive Biases Graphic

A comprehensive list of our inherent biases.

Greater Good Science Center

Science-based insights for a meaningful life. Inspiration and research to obsess over values and strengthen natural brilliance.

Design Guide

Our take on organizing common human vulnerabilities and a guide to assess your current product.