The Supreme Court avoided a final ruling Monday in two cases challenging state laws aimed at limiting the power of social media companies to moderate content. The decision sidestepped an effort by Republicans who had pushed the legislation as a remedy for what they say is an anti-conservative bias.
It was the latest Supreme Court case to consider — and then avoid — a major ruling on the parameters of speech on social media platforms.
The state laws differ in their details. Florida’s prevents platforms from permanently banning candidates for state political office, while Texas’ prohibits platforms from removing any content based on a user’s viewpoint.
The justices unanimously agreed to return the cases to the lower courts for further analysis. Justice Elena Kagan, writing for the majority, noted that neither lower appeals court had properly analyzed the First Amendment challenges to the Florida and Texas laws.
“Overall, there is much work to be done below in both of these cases,” Justice Kagan wrote, adding, “But that work must be done consistent with the First Amendment, which does not go on leave when social media are involved.”
Under the narrow ruling, the state laws remain on the books, but so do the lower-court injunctions against them, meaning both laws remain blocked while the litigation continues.
Although the justices voted 9-0 to return the cases to lower courts, they were split on the reasoning, with several writing separate concurrences to state their positions. Justice Kagan was joined by Chief Justice John G. Roberts Jr., along with Justices Sonia Sotomayor, Brett M. Kavanaugh and Amy Coney Barrett. Justice Ketanji Brown Jackson joined in part.
In a separate concurrence, Justice Barrett hinted at how lower courts might analyze the cases.
Justice Barrett wrote that the federal appeals court hearing the Florida case had a “generally correct” understanding of the First Amendment’s protection of editorial discretion, while the Texas appeals court did not.
A unanimous three-judge panel of the U.S. Court of Appeals for the 11th Circuit had largely upheld a preliminary injunction temporarily blocking the Florida law.
By contrast, a divided three-judge panel of the U.S. Court of Appeals for the Fifth Circuit had overturned a lower court ruling blocking the Texas law.
That the justices refrained from making any major pronouncements on the matter allowed both sides to claim victory.
Chris Marchese, director of the litigation center at NetChoice, one of the trade groups that challenged the laws, said in a statement that “the Supreme Court agreed with all of our First Amendment arguments.”
Ashley Moody, Florida’s attorney general, suggested on social media that the outcome was in the state’s favor. “While there are aspects of the decision with which we disagree, we look forward to continuing to defend state law,” she said.
The Biden administration had supported the social media companies in both cases, Moody v. NetChoice, No. 22-277, and NetChoice v. Paxton, No. 22-555.
In the majority opinion, Justice Kagan noted how quickly the internet has evolved. Less than 30 years ago, she wrote, judges still felt the need to define the internet in their opinions, describing it then as “an international network of interconnected computers.”
Today, she wrote: “Facebook and YouTube alone have over two billion users each.”
She described a flood of content that has prompted major platforms to screen and curate posts. Platforms sometimes remove posts entirely or add warnings or labels, often in accordance with community standards and guidelines that help sites decide how to handle diverse content.
Because such sites can “create unparalleled opportunities and unprecedented risks,” she added, it is no surprise that lawmakers and government agencies are struggling over how and whether to regulate them.
Other government entities may be better positioned to address those challenges, Justice Kagan noted, but courts still play an integral role “in protecting the speech rights of these entities, as courts have historically protected the rights of traditional media.”
The laws at issue in those cases, statutes enacted in 2021 by Florida and Texas lawmakers, differ in which companies they cover and which activities they restrict. However, Justice Kagan wrote, both limit the platforms’ choices about what user-generated content will be shown to the public. Both laws also require platforms to justify their choices in content moderation.
Justice Kagan then gave an indication of how the majority of justices might think about how to apply the First Amendment to these types of laws.
While it was too early for the court to draw conclusions about the cases, she wrote, the underlying record suggests that some platforms, at least some of the time, are engaged in expression.
“In constructing certain feeds, these platforms choose which third-party speech to display and how to display it,” Justice Kagan wrote. “They include and exclude, organize and prioritize, and by making millions of these decisions every day, they produce their own distinctive compilations of expression.”
She added that although social media is a newer format, the “gist” is familiar. She likened the platforms to traditional editors and publishers who curate and shape the expression of others.
“We have repeatedly held that laws that limit their editorial choices must meet the requirements of the First Amendment,” Justice Kagan wrote. “The principle does not change because the curated compilation has moved from the physical to the virtual world.”
So far, however, the justices have avoided definitively defining social media platforms’ liability for content, even as they have continued to recognize the enormous power and reach of the networks.
Last year, the justices declined to hold tech platforms liable for user content in two rulings, one involving Google and the other Twitter. Neither decision clarified the scope of the law that shields platforms from liability for such posts, Section 230 of the Communications Decency Act.
The Florida and Texas laws at issue on Monday were prompted in part by some platforms’ decisions to bar President Donald J. Trump after the attack on the Capitol on Jan. 6, 2021.
Supporters of the laws said they were an effort to combat what they called censorship by Silicon Valley. The laws, they added, encouraged free speech by giving the public access to all opinions.
Opponents said the laws infringed on the First Amendment rights of the platforms themselves and would turn them into cesspools of filth, hate and lies.
A ruling that tech platforms have no discretion to decide which posts to allow would have exposed users to a greater variety of views, but would almost certainly have amplified the uglier aspects of the digital age, including hate speech and misinformation.
The two trade associations challenging the state laws, NetChoice and the Computer & Communications Industry Association, said the content moderation that the Court of Appeals for the Fifth Circuit had characterized as censorship in upholding the Texas law consisted of editorial decisions protected by the First Amendment.
The groups said social media companies are entitled to the same constitutional protections enjoyed by newspapers, which are generally free to publish without government interference.
A majority of justices strongly criticized the Fifth Circuit’s decision to overturn a lower court ruling that had blocked the Texas law.
Justice Kagan wrote that the Texas law prevented social media platforms from using content moderation standards “to remove, alter, organize, prioritize or disclaim posts in their news feed.” This legislation, she wrote, blocks precisely the types of editorial judgments that the Supreme Court has previously held to be protected by the First Amendment.
She said this particular application of the law “is unlikely to withstand First Amendment scrutiny.”
But in concurring opinions, Justices Jackson and Barrett recognized the difficulty in making sweeping statements about how free speech protections should work online.
Justice Barrett offered a hypothetical: a social media platform could be protected by the First Amendment if it set rules about the content allowed on its feed and then used an algorithm to automate the enforcement of those policies. But she said it could be less clear that the First Amendment protected software that determined, on its own, what content was harmful.
“And what about artificial intelligence, which is evolving rapidly?” she wrote. “What if the owners of a platform hand over the reins to an AI tool and simply ask it to remove ‘hateful’ content?”
Olivier Sylvain, a law professor at Fordham University, said Monday’s ruling could open the door for the court or regulators to consider more complex issues, such as how commercial speech is handled online, including platforms that serve discriminatory advertising, beyond the political-speech dispute at the heart of Monday’s ruling.
“Texas and Florida have been gripped by an ideological political dispute over whether social media companies are biased against conservative views,” he said. “I’m hopeful, at least, that the decision puts that into perspective and that we can start thinking about all the many questions that are much more interesting.”