On May 18, 2023, the Supreme Court of the United States ruled on two highly debated cases (see, for instance, comments by Daphne Keller and by Anupam Chander). In both cases, social media networks were accused of facilitating the organization of terrorist attacks by failing to effectively combat terrorist content on their platforms. As a result, their liability was called into question.

Twitter v. Taamneh n°21–1496 (SCOTUS, May 18, 2023)

In the case of Twitter v. Taamneh, a member of ISIS killed 39 people in a nightclub in Istanbul, Turkey. The family of one of the victims sued Twitter, Google, and Facebook on the grounds that these social networks played an essential role in the emergence of ISIS by allowing the dissemination of its propaganda. They invoked the U.S. Anti-Terrorism Act (ATA), which allows U.S. citizens who have been injured by an act of international terrorism to bring a civil liability claim. In particular, a provision introduced by the Justice Against Sponsors of Terrorism Act (JASTA), passed in 2016, allows for holding liable anyone “who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism” (18 U.S.C. § 2333(d)(2)).

The plaintiffs alleged, among other things, that these companies knowingly allowed ISIS and its supporters to use their platforms and recommendation systems as recruitment tools, fundraising mechanisms, and channels for propaganda dissemination. In particular, they pointed out that the recommendation algorithms presented ISIS content to users who were likely to be interested in it. Furthermore, they criticized the platforms for not taking sufficient measures to ensure that content posted by ISIS was removed, even though they were aware that such content was circulating on their platforms. After their claim was rejected by the lower court, it was allowed to proceed by the Ninth Circuit Court of Appeals. However, in its decision of May 18, 2023, the Supreme Court ruled that the conditions required to hold the defendants liable were not met. While acknowledging that the platforms were aware of playing a certain role in the activities and development of ISIS, the Court held that it had not been demonstrated that the defendants knowingly and substantially assisted the terrorists in organizing the Istanbul attack.

The decision includes an interesting discussion of the automated classification and recommendation systems implemented by the platforms. These algorithmic systems determine the visibility of posted content based on its estimated “quality” according to certain criteria, with the goal of presenting users with the content most likely to engage them. Here, the plaintiffs argued that by employing these recommendation systems, the platforms were not merely providing a purely passive service but were actively and substantially aiding ISIS by optimizing the dissemination of its messages. However, the Supreme Court rejected this argument. The words of Justice Clarence Thomas, who delivered the unanimous opinion of the Court, are eloquent. According to him, the recommendation algorithms are “merely part of the infrastructure” provided by the platforms. Furthermore, he adds that the algorithms appear “agnostic as to the nature of the content”, since the platforms “match any content (including ISIS’ content) with any user who is more likely to see that content” (p. 23). Therefore, in the Court’s view, the fact that these algorithmic systems directed ISIS’ publications to certain users does not transform “passive assistance” into “active abetting.” The decision emphasizes that this conclusion holds all the more because the platforms did not take any special measures regarding ISIS content and did not even appear to have reviewed it.

According to the Supreme Court, what the platforms are ultimately reproached for is a form of passivity. However, under current U.S. law, there was no obligation requiring platforms to terminate the accounts of ISIS or its supporters after discovering that they were using their services for illegal purposes. Furthermore, even if such an obligation had existed in this case, its violation could not be equated with knowing and substantial assistance to the terrorist attack within the meaning of the JASTA provisions. Simply put, the fact that malicious actors used the services of these platforms does not imply that the platforms knowingly provided substantial assistance and therefore aided and abetted the terrorists’ acts. Deciding otherwise, as Justice Clarence Thomas emphasized, would make any communication service provider liable simply because it had general knowledge that criminals use its services. In sum, while it can be acknowledged that the platforms did not take sufficient action against ISIS content, it cannot be concluded, in the Supreme Court’s view, that they intentionally provided substantial assistance to the terrorist organization.

On the whole, it follows from the Supreme Court’s ruling that the mere provision of an architecture equipped with algorithmic tools intended to display relevant content based on users’ profiles does not constitute providing aid and encouragement to ISIS within the meaning of Section 2333(d)(2). The algorithmic systems are simply “part of the infrastructure” provided by the platforms and are indifferent to the nature of the content (p. 23). Eric Goldman, on his blog, criticizes the Supreme Court’s reasoning in this respect. Goldman rightly points out that recommendation systems are never passive or neutral, and that algorithmic models do differentiate in their treatment of content. He recognizes, however, that the social networks did not intentionally seek to assist a terrorist organization by treating its content differently from other content.

Finally, it should be noted that the scope of the decision must be assessed with caution. In her concurring opinion, Justice Ketanji Brown Jackson points out that this decision (as well as the one in the Gonzalez case) should not be construed as having general application and that the position of the Supreme Court might differ in other cases.

Gonzalez v. Google n°21-1333 (SCOTUS, May 18, 2023)

In the Gonzalez v. Google case, the parents of an American student killed in the terrorist attacks that occurred in Paris in 2015 also filed a liability action against Google under the JASTA provisions (18 U.S.C. §§ 2333(a) and (d)(2)). The plaintiffs argued that Google was directly and indirectly responsible for their daughter’s death on the grounds that YouTube (owned by Google) was a “part of the terrorist program” of ISIS, as the platform’s algorithms recommended its videos. They also alleged that Google, despite being informed of the presence of ISIS content, had not made sufficient efforts to remove it. The Ninth Circuit Court of Appeals had dismissed the claim based on the immunity provided by Section 230 of the Communications Decency Act of 1996.

The Supreme Court ruled that there was no reason to grant the plaintiffs’ request. For the reasons developed in the Taamneh case, it does not appear that Google can be considered to have knowingly provided substantial assistance in the terrorist attack at issue. Consequently, since Google’s liability cannot be established, there is no need, according to the Supreme Court, to discuss the possible application of the immunity provided by Section 230 of the Communications Decency Act. The Supreme Court thus declined to take this opportunity to rule on the potential limits of this immunity or to address the argument that immunity should be set aside when a platform takes the initiative to recommend content through algorithms targeting users.

Would the outcome of these disputes have been the same in the European Union? 

In the European Union, platform immunity is not as broad as that guaranteed by Section 230 of the Communications Decency Act (see F. G’sell, « Les réseaux sociaux, entre encadrement et autorégulation », 2021). Since the adoption of the E-Commerce Directive 2000/31/EC, hosting service providers have been exempt from liability for content published by their users as long as they are not aware of the presence of illegal content (Article 14 of Directive 2000/31/EC). However, this exemption only applies if the hosting service providers act “expeditiously” to remove the content or disable access to it once they become aware of its illegality. The recent Digital Services Act, in Article 6, has incorporated the same principle, along with clarifications often derived from case law (see F. G’sell, “The Digital Services Act: a General Assessment”). Specifically, Recital 53 of the DSA outlines that hosting service providers are deemed to be aware of the presence of illegal content when such content has been reported and its illegality is evident without detailed legal examination (which is obviously the case with terrorist content).

Therefore, it can be imagined that if the plaintiffs had sought to hold the platforms liable in the European Union, they would have endeavored to provide evidence establishing that terrorist-related publications had been reported to the platforms and that the platforms had failed to promptly take sufficient action to remove them.

Another strategy to overcome platform immunity would have been to establish that the platforms played an active role in the dissemination of terrorist content. Indeed, immunity does not apply if the provider, “instead of confining itself to providing the services neutrally by a merely technical and automatic processing of the information” provided by users, “plays an active role of such a kind as to give it knowledge of, or control over, that information” (Recital 18 of the Digital Services Act). In this context, the plaintiffs could have attempted to leverage the argument concerning the role of algorithmic recommendation systems. However, such an argument would have been unsuccessful in proving the platforms’ active role and their knowledge of the content in question. Indeed, following case law on this point, the DSA provides that “the fact that the provider automatically indexes information uploaded to its service, that it has a search function or that it recommends information on the basis of the profiles or preferences of the recipients of the service is not a sufficient ground for considering that provider to have ‘specific’ knowledge of illegal activities carried out on that platform or of illegal content stored on it” (Recital 22 of the Digital Services Act). The same recital also addresses other arguments raised by the plaintiffs in Taamneh and Gonzalez by stating that knowledge of the presence of illegal content cannot be established “solely on the ground that that provider is aware, in a general sense, of the fact that its service is also used to store illegal content.”

Finally, one could wonder about the possible impact of Regulation 2021/784 of April 29, 2021 on addressing the dissemination of terrorist content online (referred to as the “TCO Regulation”, for Terrorist Content Online). This Regulation now requires hosting service providers to take measures to prevent the dissemination of terrorist content. In particular, hosting service providers must remove terrorist content within one hour of receiving a removal order from the competent authorities (Article 3). Furthermore, Article 5 of the TCO Regulation stipulates that hosting service providers designated by regulators as being “exposed” to terrorist content must adopt specific measures, such as appropriate technical means to identify and remove terrorist content, the implementation of reporting mechanisms, or the creation of mechanisms to raise awareness of terrorist content. It therefore appears possible to sanction platforms that do not comply with these obligations and to hold them accountable, but only if these platforms have been designated as “exposed” and have failed to adopt the specific measures provided for in the Regulation.

That being said, all of these arguments, regardless of their significance, would probably not have been sufficient to justify holding the platforms liable in the circumstances of the Taamneh and Gonzalez cases. It would indeed have been necessary to establish a direct link between the presence of terrorist content and the terrorist attacks that took place in Paris and Istanbul, a proof that would have been nearly impossible to provide. It is therefore easy to conclude that the claims in the Taamneh and Gonzalez cases would not have been more successful in Europe than they were in the US.
