
Regulating Online Content – The Balance Between Free Speech and Free-For-All

This year kicked off with an explosive culmination to the ongoing tensions between free speech and social media, with Twitter bans, lawsuits and enduring questions about who gets to regulate content on the internet—or if it should be regulated at all. America is distinctly uncomfortable with the government stepping in to regulate speech. But public pressure has forced Big Tech to fill the void, spurring claims of unfair treatment and violations of First Amendment rights.

At the heart of the matter: unlike other countries that have laws against hate speech and fake news, America has largely left it up to private companies to decide what content is acceptable, with little legal obligation to explain their choices. Compounding the problem is what some argue is the enormous power that a few big tech companies wield over our online infrastructure and channels of communication, leaving some to wonder whether service providers like Facebook should really be treated more like a utility, with government regulations to match.

Are there restrictions or laws regulating online content?

In Reno v. American Civil Liberties Union, the U.S. Supreme Court declared speech on the Internet equally worthy of the First Amendment’s historical protections. That means pornography, violent films, and explicit racism are all fair game on social media in the eyes of the law. The government deems only very narrow categories of speech criminal, such as “true threats,” or language that is explicitly intended to make an individual or group fear for their life or safety. It’s worth noting, though, that arguing a politician should be shot wouldn’t necessarily meet the criteria for incitement or a true threat.

America has long held tightly to an interpretation of the First Amendment that protects the free marketplace of ideas, even when it comes at a cost. Landmark cases like Brandenburg v. Ohio, which protected the speech of a Ku Klux Klan leader, have solidified our particularly high bar for punishing inflammatory speech.

But America has also supported the right of private companies to decide what kind of speech is appropriate in their venues and, by extension, their virtual squares. Unlike most of the world, where ISPs are subject to state mandates, content regulation in the United States mostly occurs at the private or voluntary level. Social media companies are allowed to set their own user policies and are expected to self-regulate, creating internal speech policies that, in theory, protect against unfair censorship.

Beyond the social media companies themselves, the regulators and legal recourse that do exist present their own set of problems. ICANN, the non-profit that controls contracts with internet registries (.com, .org, .info, etc.) and registrars (companies that sell domain names), has immense power over who gets to claim a domain name, and ICANN decisions are not subject to First Amendment speech claims. The Digital Millennium Copyright Act (DMCA), designed to offer anti-piracy protections, is often used as a tool of intimidation or as a means for companies to keep tight control over how consumers use their copyrighted works, stifling free speech in the process. Apple, for example, tried to use the DMCA in 2009 to shut down discussion on the online forum BluWiki, where members were sharing how to sync music playlists between iPods and iPhones without using iTunes. John Deere refuses to unlock its proprietary tractor software to let farm owners repair their own vehicles, leaving tractor owners in fear of DMCA lawsuits if they try to crack the software protections themselves.

The Growing Pressure to Regulate Content

In the absence of legal pressure, public opinion seems to be the real driver of online content regulation. It was a tipping point of public outrage that finally pushed big tech to ban the president and Parler. Apple pulled Tumblr from the App Store in 2018 because it was failing to screen out child sex abuse material, but only after multiple public complaints. After decades of proudly promoting free speech regardless of the consequences, companies like Facebook are now being forced by external pressures to police their domains, using legions of reviewers to flag harmful content.

While the world grapples with how to manage online speech, it’s clear that businesses will continue to face a variety of legal, social, and moral pressures regarding the content they provide or facilitate—and they must be prepared to monitor and account for what goes on in their virtual public spaces. Companies that allow the posting of content – words, photos, videos – have a slew of laws to consider, including free speech rights and controls. Companies should work with sophisticated and experienced tech legal counsel, like Octillo, to address these issues.


*Attorney Advertising.  Prior results do not guarantee future outcomes.


Parler v. Amazon Web Services – The Ongoing Conversation Surrounding Social Media, Big Tech, and Freedom of Speech

As the fallout from last week’s attack on the Capitol continues to be front page news, big questions surround big tech’s role as the arbiter of acceptable online speech.

After Facebook suspended President Trump’s account indefinitely and Twitter shut him down permanently, YouTube announced Wednesday that it will be freezing the president’s account for a week, citing concerns over the ongoing potential for violence.

Apple, Google, and Amazon have also pulled the plug on Parler, a social network that has grown increasingly popular with conservatives in recent months and has a reputation for allowing content that would not be tolerated on other channels, including numerous calls for violence. Parler has responded by filing a lawsuit against Amazon, claiming that Amazon Web Services (AWS) violated antitrust laws and breached its contract by failing to provide 30 days’ notice of cancellation.

In the 18-page complaint, filed in the U.S. District Court for the Western District of Washington, Parler argues that the decision to suspend its account “is apparently motivated by political animus” and designed to “reduce competition in the microblogging services market to the benefit of Twitter,” which recently signed a long-term deal with AWS and stands as one of Parler’s main competitors. The suit includes claims for breach of contract, tortious interference, and violation of antitrust law, alleging that Amazon failed to take similar action against Twitter, whose platform hosted similar rhetoric. Parler is seeking a temporary restraining order to prevent Amazon from removing the social platform from its servers and to prevent what it says will be irreparable harm to its business.

Can Amazon really do that? What about the First Amendment?

The suit also comes as tensions over alleged First Amendment violations remain high. It’s well established that the First Amendment limits the government’s ability to restrict people’s speech, not private businesses’ ability to do so. Stated differently, the First Amendment constrains government actors, not private spaces such as a social media platform. But not so fast: in 1980, the Supreme Court in Pruneyard Shopping Center v. Robins held that a shopping mall owner could not exclude a group of high school students who were engaged in political advocacy in quasi-public spaces in a private shopping mall. The Court accepted the argument that it was within California’s power to guarantee this expansive free speech right since it did not unreasonably intrude on the rights of private property owners. Likewise, in 2017, the Supreme Court in Packingham v. North Carolina held that the First Amendment prohibited the government from banning sex offenders from social media websites, implicitly finding social media to be a public space. The question, then, of whether Twitter and other social media spaces where people congregate, and their associated cloud servers, are “public” and deserving of First Amendment protections is not clear-cut.

For its part, Amazon claims it was well within its rights to drop Parler after it failed to promptly identify and remove content encouraging or inciting violence against others, a direct violation of Amazon’s terms of service. According to court documents, Amazon says it reported more than a hundred examples of such violative content to Parler in just the past several weeks. In its official response to Parler’s restraining order request, AWS states that this “case is not about suppressing speech or stifling viewpoints. It is not about a conspiracy to restrain trade. Instead, this case is about Parler’s demonstrated unwillingness and inability to remove from the servers of Amazon Web Services (‘AWS’) content that threatens public safety.”

Most experts see Amazon’s decision to remove Parler as legitimate, and the microblogger will have a steep climb arguing against what are clear violations of terms. It’s also not without precedent: Cloudflare, a company that provides tools to help websites protect against cyberattacks and load content more quickly, made a similar decision after facing pressure to drop The Daily Stormer, a neo-Nazi hate site, from its service after the deadly riots in Charlottesville in 2017. It later dropped 8chan, a controversial forum linked to several deadly attacks, including those in El Paso, Texas and Christchurch, New Zealand.

What does this mean for businesses, consumers, and the future of social media?

While this case was born out of a national crisis, there is little incentive and less legal standing for businesses to start an online political witch hunt. As Amazon stated in its response to Parler, “AWS has no incentive to stop doing business with paying customers that comply with its agreements.”

But while Amazon and others are arguably on solid legal ground in their choice to drop Parler or block the president, these decisions bring up much larger questions about how we ended up with a few huge companies holding immense power over the trajectory of public discourse.

In many ways, the Constitution and our legal frameworks have not caught up to the pace, scope, and influence of online and social media. There’s not a lot of legal guidance on how tech companies or third-party vendors should treat illegal or inflammatory content posted on their networks or produced with their tools. Lawmakers are also grappling with how much responsibility should fall on social behemoths, like Facebook, that produce and house immense amounts of online content, but are not treated like traditional publishers under the law.

This is certainly both a landmark moment and a moment of reckoning for digital media consumers and providers. It’s too soon to tell how this will push transformation in the tech world and the digital town square of social media, but we’ll be following the conversation closely.

