Policy Brief: State of Global Tech Policy September 2022

This post is part of The Catalyst newsletter series. Subscribe here for future resources.

Introduction

Countries have woken up to the power of today’s biggest tech platforms and to the very real harms their citizens experience online. Extractive technology is dividing communities, provoking violence, undermining public health, and weakening public trust in democracy. While the role of traditional media should not be underestimated, it is digital media platforms, whose business models are built on maximizing engagement, that undermine our individual and collective well-being. These negative impacts won’t improve on their own, because tech platforms use an extractive model to maximize profit and have no built-in incentive to act differently. We have to take action.

Addressing the paradigm shift from traditional media to an extractive digital economy requires rebalancing commercial and democratic interests. Below is a snapshot of the efforts by various countries to strike that delicate balance. No matter where particular countries are in their process of regulating tech platforms, recognition is growing that a range of policy approaches is needed to adequately address digital threats to democracy. These approaches include a focus on public safety, updating existing antitrust perspectives, regulating commercial control over our data, and restoring local journalism. Many of these approaches must be pursued in combination to begin addressing the systemic problems we face.

Within the United States

Congress members have introduced hundreds of legislative proposals to regulate tech platforms over the last several years, but none has achieved the political consensus needed to pass through the legislative process.

Some of the barriers include 1) the focus on content moderation, which arguably conflicts with First Amendment protections for free speech; 2) the fact that many of these proposals address only the symptoms of the problem rather than its root cause; and 3) perhaps most importantly, partisan obstructionism aided by industry lobbying.

Despite an influx of federal policy proposals, political will to reach across the aisle and pass regulatory legislation has been lacking. In the 117th Congressional Session (January 3, 2021 to January 3, 2023), legislators have made notable attempts to regulate tech platforms as monopolies, provide a federal standard for privacy in technology, and increase online protections for kids. However, with the prospect of losing the Democratic majority in one or both houses of Congress this fall, the political consensus needed to adopt legislation may wane or dissolve altogether.

Regardless of congressional gridlock, we can seek recourse at the state level or through the executive and judicial branches of government.

Congress

  • American Data Privacy and Protection Act (ADPPA): a bipartisan federal comprehensive data privacy bill.
  • American Innovation and Choice Online Act (AICOA): a bipartisan federal bill that prohibits large tech companies from prioritizing their own products and services above their competitors’ when they serve as marketplaces.
  • Kids Online Safety Act (KOSA): a bipartisan federal bill that would mandate risk assessments, give parents greater control over their children’s online activity, and require that platforms give kids the option to opt out of algorithmic recommendations and other potentially harmful features.
  • The Children and Teens’ Online Privacy Protection Act (COPPA 2.0): a bipartisan federal bill that would expand existing protections for children’s privacy by banning companies from collecting the data of users 13 to 16 years old without their consent, as well as by creating an “eraser” button allowing children to remove their data from digital services.
  • The Journalism Competition and Preservation Act (JCPA): a bipartisan federal bill that would create an antitrust exemption allowing news publishers below a certain size to collectively bargain with tech giants over payment for content displayed on their platforms.
  • Platform Accountability and Transparency Act (PATA): a bipartisan federal bill that would require social media companies to provide vetted, independent researchers and the public with access to certain platform data.

White House

On September 8, the White House hosted a Listening Session on Tech Platform Accountability and identified six broad principles for reform, including promoting competition in the technology sector, providing “robust” federal protections for privacy, protecting kids’ safety online, increasing transparency about algorithms, and stopping discriminatory algorithmic decision-making. The other core principle the White House identified was reforming “special legal protections for large tech platforms,” referring to Section 230 of the Communications Decency Act. The controversial provision provides a liability shield for tech companies over content posted by third parties on their platforms.

Federal Trade Commission

With the appointment of Chair Lina Khan last year, the FTC demonstrated its intent to enforce federal antitrust laws against tech platforms. Khan has said she disagrees with traditional antitrust perspectives that focus exclusively on minimizing consumer harm and price inflation; instead, she plans to address the impact of unlawful mergers and business practices on competition and innovation. With the confirmation of Commissioner Alvaro Bedoya this year, the FTC has signaled ambitious policy intentions, particularly to protect privacy on the basis of civil and children’s digital rights. The FTC recently invited public comment on data harms from businesses that collect, analyze, and monetize information about people, and expressed its intent to craft rules that crack down on commercial surveillance and lax data security practices. The FTC also recently used its enforcement power to sue a data broker for selling the geolocation data of hundreds of millions of people, including data tracing visits to and from reproductive health clinics.

States

The states present an interesting terrain of contestation for tech policy. On the one hand, they are the laboratories of our democracy, where legislatures can pioneer new approaches to governance. In states where political consensus is easier to build around tech policy issues, advocates have a lower-stakes opportunity to press for creative policy solutions to tech harms. On the other hand, states’ rights are often invoked as a shield for conservative lawmakers to sidestep federal protections and impose more conservative policies “at home.” Notable examples include state attempts to impose censorship laws attacking social media platforms for their alleged liberal bias, despite significant evidence to the contrary.

The California Age-Appropriate Design Code Act (CA Kids Code) is landmark legislation that regulates tech at the state level. The Kids Code requires online platforms to proactively consider how their product design could harm minors, including through algorithms and targeted ads. The groundbreaking bill could become the basis of other state or federal design codes or could even encourage platforms to proactively change their services for young users across the country, due in part to the difficulty of applying different standards based on location. The California Kids Code was modeled after the UK Age Appropriate Design Code, which became law in the UK one year ago. This transnational experiment in policymaking speaks to the importance of solidarity across borders, recognizing that the operations of tech platforms know no borders.

Within the European Union

The EU is undoubtedly the global leader in tech regulation. Earlier this year, the EU passed the Digital Services Act (DSA) and the Digital Markets Act (DMA), negotiated among representatives of the European Parliament, the Council of the European Union, and the European Commission. The DSA and DMA will go into effect in 2023.

The Digital Markets Act aims to ensure a level playing field for all digital companies, regardless of size. The regulation will establish clear rules for big platforms (a list of “do’s” and “don’ts”) to stop them from imposing unfair conditions on businesses and consumers. For example, the DMA prohibits tech platforms from ranking their own services and products above those offered by third parties, and requires companies to give users the ability to remove pre-installed software. Interoperability between messaging platforms will also improve: users of small and big platforms alike will be able to exchange messages, send files, or make video calls across messaging apps. The rules should boost innovation and help smaller companies compete with very large players.

The Digital Services Act, on the other hand, establishes a standard set of rules on what digital intermediaries can and can’t do in order to create a digital space where users’ fundamental rights are protected. The DSA will give people more control over what they see online: users will have better information about why specific content is recommended to them and will be able to choose an option that does not include profiling. Targeted advertising will be banned for minors, and the use of sensitive data, such as sexual orientation, religion, or ethnicity, won’t be allowed. The new rules will also help protect users from harmful and illegal content: they will significantly speed the removal of illegal content, help tackle harmful content that, like disinformation, is not necessarily illegal, and introduce better rules for protecting freedom of speech. Another critical piece of the DSA is that it opens up platform data to researchers and requires independent audits and risk assessments of these platforms.

In addition, the European Commission has prioritized creating a regulatory framework that would prevent and minimize AI’s negative effects. The proposed model for regulating AI takes a “risk-based” approach that matches the level of regulatory scrutiny to the level of risk an AI product or service may pose, ranging from minimal oversight to all-out bans.

United Kingdom

Over the last five years, the UK has been developing the Online Safety Bill (OSB), which had recently gained momentum. With Boris Johnson’s resignation as Prime Minister in summer 2022, however, the OSB was put on hold. It remains to be seen whether the new prime minister, Liz Truss, will advance the bill as one of her priorities.

The OSB compels social media and user-generated content sites to remove illegal material from their platforms, with a particular emphasis on protecting children from harmful content. In addition, the largest platforms would have to tackle designated categories of “legal but harmful” content, which could include the promotion of self-harm or eating disorders. For these platforms, the categories of legal but harmful material requiring action would be agreed upon through secondary Parliamentary legislation, and tech firms would need to publish and enforce terms and conditions that clearly explain what is and isn’t acceptable on their sites.

The Bill would also protect free speech by exempting news media and content of “democratic importance” from the regulations. Ofcom, the UK’s communications regulator, would oversee enforcement of the OSB. It could fine the largest companies billions of pounds or ask internet service providers to block services that violate the law.

The bill has been highly controversial, seen by some as a form of government censorship. Others have criticized it for letting news publishers off the hook, claiming that this gives publishers a green light to spread disinformation with impunity. Truss’s government may remove the “legal but harmful” category to placate free speech defenders. It remains to be seen how legislators will modify the bill through this and other amendments to its core provisions.

Canada

Canada introduced the Online News Act, a bill similar to the Australian News Media Bargaining Code (adopted in February 2021), which would compel large platforms to negotiate financial agreements with publishers for the use of their news content or be forced into arbitration. The Canadian government has been motivated to push this legislation through because of wide recognition that large tech platforms have absorbed journalism’s largest source of revenue (advertising), that this has negatively impacted the state of journalism in Canada, and that a healthy journalism industry is vital to democratic societies.

In June, a Canadian minister also introduced a federal privacy bill that would replace Canada’s outdated privacy rules, allow regulators to seek large fines against companies that abuse consumers’ data, add new digital protections for kids, and govern businesses’ use of artificial intelligence. It remains to be seen whether the bill will become a legislative priority once the House of Commons returns from recess, but the fervor to achieve political consensus on privacy rules, and on protecting youth from online harms, is clearly transnational.

Australia

Australia pushed ahead with an (albeit limited) Online Safety Act in 2021, addressing online content years ahead of most other countries by establishing a public complaints mechanism and an independent safety regulator. It also enacted the News Media Bargaining Code in 2021 to force the largest tech platforms to pay for journalistic content. While problematic in its early days, the Code has led to significant payouts to both large and smaller news publishers. A new government elected in 2022 has announced an intention to review and revise this legislative landscape, but no concrete reforms have emerged so far.

Published on September 19, 2022
