
This past week the Supreme Court heard arguments in a case called Gonzalez v. Google, which asks whether social media companies can be held legally responsible for content promoted by their algorithms.

The case originated in 2015, when ISIS-affiliated terrorists killed 130 people in coordinated attacks across Paris. Nohemi Gonzalez, a 23-year-old American student, died in those attacks. Her family sued Google, which owns YouTube, claiming that the platform's recommendation algorithm promoted content from the terrorist group and thereby aided the efforts that led to the attack.

Google argues that using algorithms to sort content is necessary for users to be able to navigate the Internet at all, and that it is protected under Section 230 of the Communications Decency Act of 1996. The Gonzalez case asks whether that protection still applies when a platform's own algorithms boost questionable material, photos, and videos.

In another case before the Court, Twitter v. Taamneh, the family of a man killed in a 2017 ISIS attack in Turkey argues that the platform didn't do enough to identify and remove ISIS content. They claim this violates the Justice Against Sponsors of Terrorism Act and that Section 230's protections don't apply to such content.

As early as this fall, the Supreme Court is also expected to take up cases involving two conflicting state laws governing content moderation by social media platforms.

Section 230 is the legal statute at the heart of all of these cases.

Section 230 provides that internet platforms and service providers cannot be treated as the publishers of information created by someone else, and it protects them from liability for that content. For example, if Mr. Smith published a libelous review of a restaurant on Yelp, the restaurant could sue the reviewer but would have no case against Yelp.

Section 230 also allows internet platforms to restrict access to any content they deem objectionable. The platforms themselves get to choose what is and is not acceptable content, and they can decide to host it or moderate it accordingly. That means the free speech argument often made by users who are suspended or banned from these platforms, that their constitutional right to free speech has been violated, does not apply; the First Amendment constrains the government, not private companies.

Section 230 was enacted in 1996 as a measure to make the internet open and available to publishers large and small. It sought to let everyone conduct business and publish content without the constant threat of being sued.

This protection has essentially allowed the internet as we know it to thrive. While it enabled the growth and expansion of the big tech companies, it also fostered the proliferation of smaller, individual creators of websites, blogs, and forums that depend on user contributions. The real-time transmission of user-generated content that Section 230 fosters has become a critical component of the Internet.

Proponents of Section 230 say that without the protections it provides, the internet as we know it couldn't exist. Losing them would drive small entrepreneurs, content publishers, podcasts, blogs, playlists, and any site dependent on user-driven content out of business. It would also change the nature of the lucrative online advertising marketplace immeasurably.

Detractors of Section 230 say the statute doesn't hold online platforms accountable enough and allows some of the worst and most dangerous parts of the internet to flourish. They believe that the broad license to self-regulate content and restrict users smacks of censorship and can lead to bias and abuse.

These cases all give the Supreme Court the chance to redefine or effectively gut Section 230, which could fundamentally change the face of the Internet.

In deciding Gonzalez v. Google, the Supreme Court could go several ways.

If the Court agrees with Google and leaves Section 230 intact, everything essentially would stay the same.

If it guts Section 230 completely or significantly narrows the law, the impact on the Internet could be huge. Online platforms might need to revise or eliminate the recommendation algorithms that govern their feeds, and any publisher on any online platform could become more vulnerable to lawsuits over the content it publishes. The result could be less speech online and fewer recommended playlists, podcasts, product reviews, and social media posts. If platforms were tasked with monitoring every single thing posted, most could not keep up with what would be a huge amount of content; if they didn't moderate anything at all, the internet could be overrun with malicious content.

The Supreme Court could also hand down a more moderate ruling that doesn't have such widespread impact. It could carve out exceptions to the far-reaching statute, clarifying which situations are protected and which are not, or interpret the law in ways that expose technology companies to additional liability in specific circumstances.

The Supreme Court's decisions in these cases will most likely come in late June.