In a country where every other highway billboard, it seems, is dedicated to hawking the services of an injury law firm, it’s easy to become cynical about the follies of our litigious society. But although law and morality sometimes have a more distant kinship than we might prefer, we should not allow such cynicism to obscure the real moral and political questions often underlying even frivolous lawsuits. A recent pair of lawsuits filed by parents of victims of the tragic school shooting in Uvalde, Texas, is a case in point.

Aimed at three defendants—Daniel Defense (makers of the firearm used), Activision (makers of Call of Duty, a video game the shooter played that prominently features the firearm), and Meta (whose Instagram platform supposedly radicalized the shooter)—the lawsuits look like a classic publicity stunt, or perhaps a gamble to squeeze some settlement money out of deep-pocketed defendants. Daniel Defense should be protected by the Protection of Lawful Commerce in Arms Act, which shields gun manufacturers and dealers from liability for crimes committed with their weapons. And as for blaming a video-game company and a social media platform because a mentally disturbed user went on a killing rampage—seriously? We might be apt to think the real crime is that of greedy lawyers who would prey on grieving parents in this way.

That said, whether or not these lawsuits go anywhere in the courts, they raise important questions that are likely to become ever more pressing in the years ahead. For one thing, it simply makes sense that if minors are legally banned from purchasing a product, companies should not be able to market it to them—a principle underlying the Tobacco Control Act of 2009. Accordingly, some recent or proposed legislation seeks to keep gun manufacturers from advertising to impressionable teenage boys as potential customers, including through indirect means such as product placement in popular video games. While such laws must be carefully crafted to pass First and Second Amendment scrutiny, the moral principle behind them is sound: companies bear a moral responsibility for their marketing and should not be allowed to exploit buyers likely to abuse their products.

The lawsuits’ targeting of Instagram, however, opens up a whole other can of worms, one that increases pressure on courts and legislators to clarify the moral and legal status of online media companies. Indeed, this is hardly the first time that victims of violence have tried to hold social media platforms accountable for aiding, abetting, or inciting that violence. In Twitter v. Taamneh, the plaintiffs blamed Twitter for its role in a 2017 ISIS attack, but the Supreme Court rejected the claim last year because the plaintiffs never demonstrated that the attackers even used Twitter to plan the attack. A more interesting lawsuit decided around the same time, Gonzalez v. Google, focused on YouTube’s recommendation algorithm, arguing that it actively promoted ISIS content to users, helping to radicalize them. In that case, the court punted, putting the burden back on Congress to do something about the massive legal loophole known as Section 230.

Part of the Communications Decency Act of 1996, Section 230 provides immunity to internet companies for content that users publish on their platforms. If I lie about you on Facebook, you can theoretically sue me for libel, but not Facebook. So far, so good. The problem is that this outdated legislation entirely fails to grapple with what the internet has become and what it will become all the more in the age of AI—an ecosystem of algorithms. Section 230 envisioned static message boards full of poorly indexed content, but today almost all content on the internet is aggressively curated, organized, and promoted by algorithms designed to make you consume more and more of whatever the algorithm thinks will most engage you. If you like violence, the platform will actively feed you more violence. If you like sex, the platform will actively feed you more prurient material.

Social media companies (as well as search engines, e-commerce platforms, and much of the internet), then, do not merely host user-generated content; they “choose” which content to display and promote. In this, they function much more like traditional publishers, such as The New York Times, which can be sued for their editorial decisions. By hiding those editorial decisions behind the plausible deniability of AI algorithms, though, social media companies have been able to have their cake and eat it too, feeding content to users to maximize engagement while evading any legal responsibility for that content.

The new Uvalde lawsuits are probably too poorly targeted to knock a hole in this Section 230 protection, but they are another reminder to Congress that it is high time to revisit the moral and legal framework governing the internet. We cannot continue to give Big Tech a blank check to profit off of our children.
