
The race to develop and deploy AI is highly volatile. While AI promises to enhance human cognition, eliminate drudgery, and accelerate humanity’s most important scientific, technical, and industrial endeavors, the same technology can simultaneously create unprecedented risks across our society and supercharge existing digital and societal harms.
Massive economic and geopolitical pressures are driving the rapid deployment of AI into high-stakes areas — our workplaces, financial systems, classrooms, governments, and militaries. This reckless pace is already accelerating emerging harms, and surfacing urgent new social risks.
Meanwhile, since AI touches so many different aspects of our society, the public conversation is confused and fragmented. Developers inside AI labs have asymmetric knowledge of emerging AI capabilities, but the sheer pace of development makes it almost impossible for key stakeholders across our society to stay up-to-date.
The quality of our future with AI depends on our ability to have deeper conversations, and to wisely develop, deploy, regulate, and use AI. At CHT, we break down the AI conversation into five distinct domains, each with different impacts on our social structures and the human experience.
As AI becomes integrated in our personal lives and mediates our communications, this technology will reshape our relationships, our communities, and our social norms.
The automation of human labor upends career trajectories, threatening not only our livelihoods, but our deepest life stories and the sense of purpose found through work.
Power dynamics are dramatically shifting as AI both centralizes economic and political influence in the hands of a select few, and radically decentralizes powerful and dangerous capabilities across society.
AI-generated content and algorithm-driven filter bubbles risk fracturing our shared sense of reality, fueling distrust, polarization, and a loss of confidence in what’s true.
As we deploy increasingly powerful, inscrutable, and autonomous AI systems, we risk losing our collective ability to maintain meaningful human control over our economic and geopolitical systems.
With AI products increasingly imitating human language and emotions, it becomes harder for ordinary people to tell if they’re talking to another human, or an AI acting on a human’s behalf. Whether it’s a friend using generative AI to draft messages, or a stranger relying on an AI agent to fulfill their responsibilities, we are losing confidence in our ability to know who — or what — is on the other side of an interaction.
This uncertainty breaks down trust, since authentic relationships are built on mutual understanding and transparency around who we’re engaging with.
Most of today’s consumer-facing AI products aren’t just designed to feel conversational. They’re engineered to keep us engaged: continuously validating and flattering us, telling us what we want to hear, always eager and never demanding social reciprocity. This dynamic can distort our sense of self, and our expectations of others.
Over time, the simulated relationships with AI products can make real human relationships feel less gratifying, too unpredictable, or too demanding. This preference for machine interactions may isolate people further, especially amid the current loneliness epidemic.
AI products are not just seeking our attention; they are competing to become our closest companions and confidants: using deeply personal information to learn how to connect with us, and building a dossier about who we are, how we think, and what we feel.
The better an AI knows us, and the more we disclose to these machines, the better the system can captivate us and shape our behaviors. These insights, drawn from our most intimate data, are monetized within business models that thrive on prolonged engagement and surveillance, often without our informed consent.
These business models create perverse incentives for maximizing synthetic intimacy, and emotionally manipulating people.
AI, paired with robotics, is automating significant portions of the $110 trillion global economy.
With automation comes the removal of human judgment from critical work and system processes. Human judgment often provides the context, ethical reasoning, and situational awareness that automated systems lack. When this perspective is stripped away, decisions can become narrowly optimized for efficiency at the expense of accuracy and fairness, often with little meaningful recourse for those who are harmed.
When AI optimizes for the wrong goals without clear chains of human accountability, damaging outcomes can occur without any meaningful process for repair and restitution.
Rampant job loss as a result of AI threatens to create volatility in labor markets and heighten wealth inequalities.
Society is ill-prepared for such a rapid transition, which could lead to societal unrest, institutional instability, and social breakdown across generational, economic, and technical-fluency divides.
As the labor market experiences swift transformation, traditional career paths become less stable, and many people are grappling with the erosion of work as a source of stable identity and purpose.
In some industries, humans may not just be working with AI — they may be working for AI systems that set the pace, monitor performance, and define success. When labor becomes primarily about serving algorithmic goals, rather than contributing meaningfully to a community or a shared mission, the experience of work can feel dehumanizing and hollow.
At the same time, the fragmentation of workplaces and the rise of remote, AI-mediated tasks risk weakening the social connections and solidarity that have long been built through shared work.
Access to meaningful work is a core part of how we derive dignity in our lives; when purpose and dignity are stripped away, we struggle to build fulfilling lives.
As a technology, AI has the power to radically centralize both economic and geopolitical power. Companies and countries that secure a decisive advantage in AI stand to gain a tremendous, and potentially durable, edge over their adversaries. Regimes that can control the ideological biases of AI stand to project significant soft power across the globe.
Simultaneously, AI tools are empowering individuals in unprecedented ways. While this can accelerate creative, intellectual, and technical progress, it can also empower malicious actors, and risks overwhelming our legal and social institutions.
The concentration of resources and economic gains among a select few corporations leads to asymmetric corporate power in society, where a handful of tech companies can exert more influence than entire nations.
Ordinary individuals are left with no clear mechanism to influence how these products are being built and deployed. This renders most of society passive participants in the age of AI and sets the stage for social unrest.
New kinds of surveillance and political manipulation become possible, as companies and governments can exert subtle forms of AI-enabled political control that are hard to detect, let alone prevent.
Bad actors are able to harness the power of AI tools to wreak havoc, from targeted influence and disinformation campaigns to large-scale cyberattacks and novel bioweapons.
Without thoughtful checks and balances, the dissemination of powerful AI technology, including the rapid spread of open-source models, can overwhelm and degrade the ability of our public-safety and regulatory institutions to respond effectively to both foreign and domestic threats.
AI tools can now easily create endless amounts of realistic images, videos, text, and other content that is often indistinguishable from human-created work. This realistic, viral AI-generated content threatens to further degrade our information environments as it becomes harder to determine what is authentic.
As manipulative content, including deepfakes and disinformation, becomes increasingly prevalent online, it is natural for people to become tribal and cynical, dismissing any information they don’t agree with as “fake”. This fuels a climate of pervasive doubt and accelerates information breakdowns and polarization. All of this undermines our ability to have difficult conversations, trust the expertise of professionals, and engage in the robust public dialogue that democracies need in order to function.
For the last decade, information environments have been decaying into personalized filter bubbles – pseudo-realities shaped by algorithms that maximize engagement rather than promote grounded, fact-based discussion.
With AI-generated content now flooding our feeds and competing for our attention, it is even harder to discern what is real and who can be believed on social media.
Over time, this erosion of shared reality can fracture communities, weaken social trust, and even undermine participation in democratic processes.
AI systems are “grown” or “trained” more than they are programmed: models are produced by rewarding an AI for “good behavior” and penalizing it for “bad behavior”, but the final capabilities, intentions, and goals of the resulting model are hard to characterize.
Prompting an AI to behave in a certain way is not a guarantee that the system will follow the instructions.
Despite extensive safety testing, commercial AI models routinely exhibit unpredictable and troubling behaviors that erupt into public scandals. There is currently no way to analyze a model’s structure and mechanistically guarantee its behavior.
Several studies have demonstrated that current commercial AI systems can be put in situations where they will actively plan to deceive users, and hide their motivations from both end-users and AI researchers.
Research has shown that AI systems are already capable of autonomously forming instrumental goals such as power-seeking and wealth accumulation. Meanwhile, AI capabilities are advancing much faster than our ability to understand models' inner workings or to reliably control them.
As AI systems are given more autonomy and agency, AI that breaks through its own safety guardrails in order to pursue a misunderstood or misaligned goal is a meaningful concern, especially given the technical challenges in controlling these behaviors.
AI is being woven into many domains, including governments, militaries, financial systems, and healthcare. As AI is deployed across critical infrastructure, it remains unclear whether these systems are truly optimized for the right goals, or trustworthy in pursuing them.
Our understanding of how and why AI systems make the decisions they do also remains limited. Without interpretability and transparency, we do not know how much we should trust these models to make critical decisions across our society.
Ceding critical decisions to AI means giving up human judgment in situations where mistakes can have serious consequences. When humans can no longer reliably oversee or override what an AI is doing, it becomes harder to ensure that decisions reflect our ethical values, protect safety, and allow accountability when things go wrong.
Society is operating at an information deficit about these powerful AI systems. Leading AI labs share only the research they deem appropriate to release to governments and the public, despite calls for more transparency.
This lack of transparency hinders society’s ability to discern the genuine capabilities and autonomy of AI systems.
As AI meets or exceeds human-level intelligence (AGI/ASI), it becomes hard or impossible to reliably detect misalignments in goals. The lack of transparency from AI labs creates significant uncertainty about how close these companies are to AGI.
Without increased transparency, there can be no healthy checks and balances on loss of control risks. Companies continue to race to build AGI while obscuring or downplaying meaningful risks and defects.
Artificial intelligence is a highly consequential general-purpose technology.
Because of that, how we design, deploy, and use AI will determine the impact it has on us and our society. By realigning the incentives behind this powerful technology and designing more responsible products, humanity can reap the benefits of AI without dystopian results. The Center for Humane Technology works to realign these incentives by:
The public must understand the forces that drive the race to AI, and how the race impacts each of us. With awareness, the public can demand action from tech companies and policymakers while using AI products more mindfully.
Policy interventions remain one of the most impactful levers to drive change – especially if they incentivize safer AI development from the outset.
At the core of any tech product is its design. We have the ability to design tech products differently, with safety standards that prioritize personal and societal well-being.
The path we choose now with AI will shape how this technology impacts our world. At CHT, we believe that AI can help humanity solve its hardest challenges, and support people to live fulfilling lives. By advancing better awareness, policy, and design in AI, we can build a tech ecosystem, and a broader future, rooted in shared understanding and human values.
Addictive social media design continues to drive political polarization, social division, loneliness, mental health crises, and more. CHT remains committed to intervening in harmful social media design in order to end the destabilizing effects this technology has had on society and to repair our institutions.