Smart regulation for combating illegal content



  • We've written before about how we're working to support smart regulation, and one area of increasing attention is regulation to combat illegal content.

    As online platforms have become increasingly popular, there’s been a rich debate about the best legal framework for combating illegal content in a way that respects other social values, like free expression, diversity and innovation. Today, various laws provide detailed regulations, including Section 230 of the Communications Decency Act in the United States and the European Union’s e-Commerce Directive.

    Google invests millions of dollars in technology and people to combat illegal content in an effective and fair way. It’s a complex task, and, just as in offline contexts, it’s not a problem that can be totally solved. Rather, it’s a problem that must be managed, and we are constantly refining our practices.

    In addressing illegal content, we’re also conscious of the importance of protecting legal speech. Context often matters when determining whether content is illegal. Consider a video of military conflict. In one context, the footage might be documentary evidence of atrocities in areas that journalists can access only with great difficulty and danger. In another context, the footage could be promotional material for an illegal organization. Even a highly trained reviewer could have a hard time telling the difference, and we need to get those decisions right across many different languages and cultures, and across the vast scale of audio, video, text, and images uploaded online. We make it easy to submit takedown notices; at the same time, we also create checks and balances against misuse of removal processes. And we look to the work of international agencies and principles from leading groups like the Global Network Initiative.

    A smart regulatory framework is essential to enabling an appropriate approach to illegal content. We wanted to share four key principles that inform our practices and that (we would suggest) make for an effective regulatory framework:

    • Shared Responsibility: Tackling illegal content is a societal challenge in which companies, governments, civil society, and users all have a role to play. Whether a company is alleging copyright infringement, an individual is claiming defamation, or a government is seeking removal of terrorist content, it’s essential to give the online platform clear notice about the specific piece of content, and platforms then have a responsibility to take appropriate action on it. In some cases, content may not be clearly illegal, either because the facts are uncertain or because the legal outcome depends on a difficult balancing act; in turn, courts have an essential role to play in fact-finding and reaching legal conclusions on which platforms can rely.

    • Rule of law and creating legal clarity: It’s important to clearly define what platforms can do to fulfill their legal responsibilities, including removal obligations. An online platform that takes other voluntary steps to address illegal content should not be penalized. (This is sometimes called “Good Samaritan” protection.)

    • Flexibility to accommodate new technology: Given the fast-evolving nature of the sector, laws should accommodate relevant differences between platforms and be written in ways that address the underlying issue rather than focusing on existing technologies or mandating specific technological fixes.

    • Fairness and transparency: Laws should support companies’ ability to publish transparency reports about content removals, and provide people with notice and an ability to appeal removal of content. They should also recognize that fairness is a flexible and context-dependent notion—for example, improperly blocking newsworthy content or political expression could cause more harm than mistakenly blocking other types of content. 

    With these principles in mind, we support refinement of notice-and-takedown regimes, but we have significant concerns about laws that would mandate proactive monitoring or filtering of content, impose overly rigid timelines for content removal, or otherwise impose harsh penalties even on those acting in good faith. Laws like these create a risk that platforms won’t take a balanced approach to content removals, but will instead err on the side of “better safe than sorry,” blocking content at upload or taking a “take down first, ask questions later (or never)” approach. We regularly receive overly broad removal requests, and analyses of cease-and-desist and takedown letters have found that many seek to remove potentially legitimate or protected speech.

    There’s ample room for debate and nuance on these topics (we discuss them every day), and we’ll continue to pursue collaboration among governments, industry, and civil society on this front. Over time, an ecosystem of tools and institutions has evolved to address illegal content, including the Global Internet Forum to Counter Terrorism and the Internet Watch Foundation, which has taken down child sexual abuse material for more than two decades. Continuing to develop initiatives like these and other multistakeholder efforts remains critical, and we look forward to progressing those discussions.



    https://www.blog.google/perspectives/kent-walker-perspectives/principles-evolving-technology-policy-2019/smart-regulation-combating-illegal-content/
