What legal challenges will Elon Musk face? 

On October 28, 2022, Elon Musk effectively took control of Twitter, immediately firing the Chief Executive Officer, the Chief Financial Officer and the head of legal affairs, the same executive who had been behind the suspension of Donald Trump. Having dissolved the board of directors in the process, Musk is now in sole command and free to implement a long-announced libertarian agenda.

From the announcement of Musk's acquisition of Twitter on April 25, his intentions were clear: to overhaul Twitter's moderation policy, which he considered imbued with anti-conservative and progressive biases. Presenting himself as a champion of free speech, Musk said early on that he wanted to make Twitter a “politically neutral” space of freedom, in which the only limits on online expression would come from the law: “By ‘free speech’, I simply mean that which matches the law. I am against censorship that goes far beyond the law. If people want less free speech, they will ask government to pass laws to that effect. Therefore, going beyond the law is contrary to the will of the people” (tweet of April 26, 2022). Musk recently reiterated his position, while qualifying it: “Twitter obviously cannot become a free-for-all hellscape, where anything can be said with no consequences! In addition to adhering to the laws of the land, our platform must be warm and welcoming to all” (tweet of October 27, 2022).

For their part, the European authorities have repeatedly reminded Musk that he must manage the platform in compliance with European law. As early as April 25, Thierry Breton, the European Commissioner for the Internal Market, issued a warning, before traveling to Tesla's headquarters in Austin, Texas, for a face-to-face exchange. In a video that some said looked like a “hostage video”, Musk declared himself in agreement with the obligations laid down by the Digital Services Act and assured Thierry Breton: “I agree with everything you said, really”. In recent days, the European authorities have again insisted on compliance with the Digital Services Act. After Musk tweeted “the bird is freed” on October 28, Thierry Breton replied that, in Europe, the bird will fly by EU rules. For her part, Margrethe Vestager pointed to the very heavy sanctions provided for by the Digital Services Act and declared that Elon Musk could long be haunted by the consequences of non-compliance with European Union law. In France, many have expressed concern since the announcement of the deal, such as the signatories of an op-ed published in Le Monde emphasizing that the new owner of Twitter will have to comply with French law, whatever his ideas on freedom of expression. The text recalls that French law criminalizes racist, anti-Semitic, sexist and anti-LGBT speech and, more generally, anything detrimental to human dignity.

There is perhaps a certain paradox in seeing the French, and Europeans more broadly, insist on respect for the law when Musk, libertarian though he is, has never announced anything else: the law, and nothing but the law. It is true, however, that Musk speaks from a North American context that gives pride of place to freedom of expression and reduces the constraints on platforms to a bare minimum. He may therefore discover that acting in accordance with European rules is more complex and restrictive than he thinks.

1. Will Elon Musk have difficulty complying with European texts?

From a certain point of view, the European texts impose on social networks precisely what Musk has announced he wants to do. The Digital Services Act (DSA), the Regulation published in the Official Journal of the EU on October 27, 2022, was presented above all as enshrining the principle that “what is illegal offline is also illegal online”. The DSA therefore includes provisions to ensure an effective fight against illegal content, with which Musk has said he agrees. However, the Regulation also imposes obligations that go well beyond the moderation of illegal content, which could prove more difficult for an entrepreneur who presents himself as a guarantor of freedom of expression.

  • What Musk could comply with without objection

The DSA maintains, while rewording it in Article 6, the liability exemption that has existed for more than 20 years under the E-Commerce Directive for service providers hosting illegal content or activities of which they have no actual knowledge. This exemption applies as long as hosts act “expeditiously” to remove or disable access to such content once they become aware of it. The DSA also maintains the absence of any general obligation to monitor hosted content. All this was already well known.

To this are now added obligations intended to guarantee an effective fight against illegal content, with which one can imagine Elon Musk agreeing, since he accepts the need to respect the law. The DSA bases the fight against illegal content on effective reporting mechanisms: hosts must offer accessible mechanisms for reporting questionable content (Article 16), and reports from trusted flaggers must be treated as a priority. The Regulation provides, in this regard, that a host is deemed to have actual knowledge of the illegal nature of an activity or piece of information once it has been reported and its illegality is apparent without a detailed legal examination. One can also hope that Musk will respect the rules requiring hosts to cooperate with the authorities, whose orders they must comply with, and to promptly inform them of any content giving rise to suspicion of a criminal offence involving a threat to the life or safety of persons.

At the same time, Elon Musk should welcome the new “Good Samaritan” rule introduced in the DSA (Article 7) at the request of the platforms, under which they do not lose the benefit of the liability exemption when they engage, in good faith and in a diligent manner, in investigations aimed at identifying and removing illegal content. It will, of course, be necessary to determine what constitutes moderation exercised in good faith and in a diligent manner, but one can assume that this means practices that respect the fundamental rights of users, starting with their freedom of expression. Musk, if consistent, should have no objection in principle to ensuring that Twitter's terms and conditions clearly specify the platform's restrictions on free speech (Article 14 DSA): we know, moreover, that Musk only wishes to sanction activities and content that are contrary to the law, to the exclusion of all others. In this context, Musk should agree to give users a “clear and specific statement of reasons” for moderation decisions (Article 17) and the possibility of appealing against those decisions (Article 20). He should also have no difficulty in asking Twitter's teams to act “diligently, objectively and proportionately”, taking into account the rights and legitimate interests of all parties concerned (Article 14). It remains to be seen whether he would agree to inform users of the automated nature of certain moderation methods and to adopt reasonable measures to ensure that the technology is sufficiently reliable to limit as far as possible the rate of error and, consequently, the amount of content removed without valid reason (Recital 26).

  • What Musk may have trouble complying with

Various aspects of the Digital Services Act are unlikely to go smoothly for Elon Musk's Twitter. First, the Regulation does not itself define illegal content and must be read in conjunction with the laws of each Member State, which vary in how they define it. Admittedly, the platforms are already accustomed to this variety of legislation, but they have always had difficulty taking it into account.

Second, Musk, who has repeatedly said that he does not want moderation to go beyond the strict fight against illegal content, is unlikely to be comfortable with the notion of “systemic risks”, which Article 34 of the DSA requires very large platforms with more than 45 million active users (which Twitter is) to assess and mitigate. These “systemic risks” result from the dissemination of illegal content, but also of content that is legal but toxic (“legal but awful”), because they have an “actual or foreseeable negative effect” on the exercise of fundamental rights (human dignity, respect for private and family life, the rights of the child, etc.), on “civic discourse, electoral processes and public security”, on “gender-based violence, the protection of public health and of minors” and on the “physical and mental well-being of people”. This very broad definition of systemic risks makes it possible to ask platforms to intervene well beyond the moderation of illegal content alone, and covers a very large amount of online information and discourse. It is therefore unlikely that an outspoken supporter of freedom of expression such as Elon Musk will approach this category of systemic risks with good grace, especially since the European and national authorities will be able to address guidelines to very large platforms (Article 35) on how to mitigate these risks. One can indeed imagine regulators recommending, in doing so, moderation that goes well beyond the mere removal of illegal content, which Musk refuses.

Finally, it remains to be seen to what extent a platform like Twitter will comply with the transparency obligations laid down by the Regulation, which requires in particular the publication of reports detailing moderation practices (moderation actions, the number and qualifications of staff responsible for moderation, reports received, removal orders received from the authorities, removals and suspensions carried out, the number of appeals lodged by users). As a very large platform, Twitter will in particular have to comply with a number of compliance obligations, publish precise information on the parameters of the algorithms it uses, and even give European and national authorities, as well as accredited researchers, access to certain data.

Will Elon Musk, as a supporter of freedom of expression, ultimately be inclined to comply with the crisis mechanism that the Commission can trigger, which allows it to require the largest platforms to act in accordance with its instructions in serious circumstances (war, health crisis)?

2. Will it be difficult for Musk to comply with French law? 

Although he has announced that he intends to respect the law, Elon Musk will quickly realize that the laws of other countries sometimes include obligations, prohibitions and offences that are much stricter than those of the United States. For example, unless he takes legal action, as Twitter has done in the past, he will have to comply with the rules set by the Indian government, which regularly puts pressure on platforms to obtain the deletion of certain publications and has just decided to appoint a committee responsible for examining appeals against platforms' moderation decisions, whose rulings will be binding on them. He will also have to, as Evelyn Douek has pointed out, familiarize himself with Canadian Bill C-11, championed by Justin Trudeau's government, which treats “online undertakings” as broadcasting undertakings. Finally, in order to comply with the Digital Services Act, he will have to respect the various laws of the Member States of the European Union, a number of which prohibit hate speech and include specific texts applicable to social networks.

In France, Elon Musk and his teams should be aware that Twitter is liable when the platform has not promptly removed manifestly illegal content that has been reported to it. In particular, they must be able to take into account the various offences provided for by French law, starting with the provisions of the law of July 29, 1881, which punishes defamation (Article 29), insult (Article 33), incitement to discrimination, hatred or violence (Article 24) and denial of crimes against humanity (Article 24 bis). They should also know that French law punishes the provocation or glorification of terrorism (Article 421-2-5 of the Penal Code), and that European Regulation 2021/784 of April 29, 2021 on addressing the dissemination of terrorist content online requires hosts to remove terrorist content within one hour of a request from the competent authorities. The Penal Code also punishes death threats (Article 222-17), “revenge porn” (Article 226-2-1), the dissemination of videos showing violence inflicted on a person, or “happy slapping” (Article 222-33-3), and cyber-harassment (Article 222-33-2-2).

The new leaders of Twitter will also have to understand that, under French law, false information can sometimes give rise to sanctions. Admittedly, criminal law only punishes the dissemination, in bad faith, of false news “having disturbed the public peace or being likely to disturb it” or “of a nature to undermine the discipline or morale of the armed forces or to hinder the Nation's war effort” (Article 27 of the 1881 law). But the law of December 22, 2018 on the fight against the manipulation of information makes it possible, where false information that could affect the integrity of the ballot is disseminated during an election period, to apply for summary proceedings, in the three months preceding the election, to stop its dissemination (Article L. 163-2 of the Electoral Code).

In this context, Elon Musk's teams will have to improve the platform's performance in the effective fight against illegal content. Twitter's lack of effectiveness in this regard has already been noted, in France and elsewhere. The platform has faced legal proceedings for its lack of cooperation with judicial authorities in the fight against online hate. It was, moreover, ordered by the Paris Judicial Court, on July 6, 2021, to disclose information on the material and human resources deployed to combat various offences (glorification of crimes against humanity, incitement to racial hatred or to hatred against persons on account of their sex, sexual orientation or gender identity, incitement to violence, in particular sexual and gender-based violence, and attacks on human dignity). In particular, it had to communicate a number of details concerning the staff assigned by Twitter to the processing of reports, notably those concerning the glorification of crimes against humanity and incitement to racial hatred.

3. What situation is Musk facing in the United States? 

We can assume that Elon Musk is attached, like many in the United States, to the famous Section 230 of the Communications Decency Act (see F. G'sell, “Social networks, between supervision and self-regulation”, 2021), which, since 1996, has given platforms considerable leeway by shielding them from liability for their content moderation decisions. Section 230 is bolstered by constitutional safeguards: platforms are generally considered to have a constitutionally protected right to moderate the content they host at their discretion, as part of their own freedom of expression. However, many wish to reconsider this immunity. Various bills have been introduced in Congress to reform Section 230, and Joe Biden himself recently reiterated his wish to see this provision repealed.

Above all, the US Supreme Court agreed, a few weeks ago, to hear two cases whose outcome could have very significant consequences for the interpretation of Section 230. In the first, Gonzalez v. Google, the parents of an American student killed in the 2015 Paris terrorist attacks sued Google under a US anti-terrorism statute (18 U.S.C. § 2333), which provides damages for acts of international terrorism. They argue that Google incurs liability because YouTube (owned by Google) allowed the Islamic State to broadcast messages of a terrorist nature. In particular, they claim that YouTube was “an integral part of the terrorist program” of the Islamic State and that the platform's recommendation algorithms helped spread its videos and its message. They also allege that Google, although aware of the presence of such content, did not make sufficient efforts to remove it. Overall, they argue that Section 230, enacted in 1996, originally covered traditional platform moderation functions but does not apply to the highly sophisticated algorithmic tools now used to manage content. The Ninth Circuit Court of Appeals having dismissed the claim on the basis of Section 230 immunity, the Supreme Court will have to decide whether that provision applies when the platform itself takes the initiative of recommending content through algorithms targeting users. If it decides that it does not, the major social networks, Twitter among them, could find themselves liable for illegal content pushed by their algorithms, which would be a considerable change.

In another case before the Supreme Court, Twitter v. Taamneh, the Court will have to decide whether the mere presence of terrorist content on a platform allows it to be treated as aiding and abetting terrorism. In that case, a member of the Islamic State had killed 39 people in a nightclub in Istanbul, Turkey. Relatives of one of the victims sued Twitter, Google and Facebook on the grounds that these platforms had played an essential role in the growth of the Islamic State by allowing it to disseminate its propaganda. The question is therefore whether these platforms can be considered to have knowingly provided substantial assistance to acts of international terrorism because they could have taken more “meaningful” or “aggressive” measures to prevent the spread of that propaganda. The Court must also determine whether the platforms can incur liability even when their services were not used in connection with the specific act of terrorism that harmed the plaintiffs. Here again, the outcome of the case should have major consequences for the liability of platforms and their content management strategies.

Alongside the litigation over Section 230, Musk will have to adjust to state laws and keep an eye on the litigation over laws passed in Florida and Texas. In the wake of Donald Trump's “deplatforming”, several Republican-led states adopted laws directed against moderation practices they consider biased against conservatives. These laws treat social networks as “common carriers”, which in their view justifies requiring them to observe a form of neutrality with regard to content, like telephone services (see Daphne Keller's very clear explanations on this point). Thus HB 20, passed in Texas in 2021, prohibits social networks with more than 50 million active users in the United States from removing or demoting posts solely on the basis of the views they express (with the exception of unlawful posts), requires platforms to report on their moderation practices, and obliges them to notify users of moderation decisions, explain them, and allow appeals. As for Florida's SB 7072, it prevents platforms from removing or limiting press publications on the basis of their content, prohibits them from banning electoral candidates, obliges them to notify and explain their moderation decisions, and requires content to be moderated in a “consistent” and transparent manner.

While the Fifth Circuit Court of Appeals upheld the Texas law as constitutional (NetChoice v. Paxton), the Eleventh Circuit Court of Appeals struck down most of the Florida law (NetChoice v. Moody) on the grounds that it is inconsistent with the First Amendment. In these circumstances, the Attorney General of Florida has asked the Supreme Court to decide whether the First Amendment allows platforms to be compelled to host content they do not want. It will therefore be up to the Supreme Court, if it agrees to hear the case, to answer this question and to decide whether SB 7072 complies with the First Amendment. A Supreme Court decision upholding the legislation adopted in Texas and Florida could lead other American states to legislate and further constrain the activity of the platforms.

4. What strategy for Twitter now? 

Elon Musk, who has just laid off a large part of Twitter's staff, has made numerous statements about his intention to change the way the platform moderates content. As soon as the acquisition was announced, Musk presented himself as a herald of free speech, nostalgic for the early days of social networks. He has denounced Twitter's moderation policy as “censorship that goes far beyond the law” and said he wants to correct its “strong left-wing bias”. Musk's most regularly expressed ambition is to run Twitter as a “digital town square” offering a forum for free expression where everyone is welcome. Musk does not rule out suspending certain accounts when necessary, but says suspensions should be temporary. He was shocked by the blocking, in 2020, of the New York Post's article on Hunter Biden, and called Trump's suspension “morally wrong and flat out stupid”. Conservatives whose accounts were suspended for violating the terms of service would be allowed back on Twitter. Donald Trump could recover his account, as could a large number of conservative figures and activists, including some from the QAnon conspiracy movement. More worryingly still, racist content seems to have proliferated since Musk's takeover.

In addition, on October 28, Musk announced in a tweet that he wants to create a council in charge of moderation questions, whose opinion would be required for the most important moderation decisions and, in particular, for decisions to reinstate suspended users. It is possible that, in making this proposal, Musk has in mind a body modeled on Facebook/Meta's Oversight Board, which examines appeals against moderation decisions and issues rulings the company has undertaken to respect. Twitter has in fact consulted, since 2016, a “Trust and Safety Council” that helps it develop its strategy, but whose role is different because it is strictly advisory. This council was initially made up of around forty organizations and experts. In 2020, Twitter expanded the group and created sub-groups dedicated to specific topics, such as online safety and harassment, digital rights, child sexual exploitation and suicide prevention.

Elon Musk has also announced other goals. Some concern a more effective fight against bots, even if this objective may prove hard to achieve. He has also announced that the algorithm will be published as open source on GitHub. Publishing the ranking algorithm could help identify biases, though some doubt this, on the grounds that such biases may stem from other sources, such as the data or the interactions present on the platform. The biases are, in fact, real and well documented: Twitter researchers reported last year that, in the United States and several other countries, the accounts of right-wing users were amplified more by the algorithm than those of left-wing users.

Musk has also said he will change Twitter's business model, which relies heavily on advertising. The New York Times reported in May that, during a presentation to investors, Musk claimed he would cut advertising's share of the company's revenue from around 90% to less than 50%. He plans, at the same time, to increase subscription revenue by expanding the paid version of Twitter (Twitter Blue), which until now did not include any advertising content. This development of Twitter Blue is designed to free Twitter from the need to maximize user engagement by promoting the most divisive content. Indeed, a new version of Twitter Blue has just been launched: for $7.99 per month, users can obtain the certification badge once reserved for accounts verified free of charge by Twitter. This new option also promises half as many ads, more relevant remaining ads, and the ability to post longer videos.

Elon Musk's strategy is, unsurprisingly, drawing criticism. Some point out that the platform risks losing users and seeing advertising revenue dwindle if it leaves racist, sexist or homophobic content online (content which is not illegal in the United States). Indeed, many Twitter users are now choosing to sign up to Mastodon, even if, at the same time, the platform is also seeing large numbers of new users arrive. Meanwhile, the European authorities launched two social network pilot projects in the spring that are intended to serve as alternatives to YouTube and Twitter: EU Voice and EU Video. These free platforms run on open-source software, do not engage in profiling and carry no advertising.
