
Watching American politicians and judges wrestle with the social and political dangers posed by the internet is a bit like watching my cats chase a laser pointer. They're very fired up about the hunt, pursuing every zig and zag with showy ferocity, but anytime they approach the target it becomes painfully clear they misunderstand the essential nature of the problem at hand (or, you know, at paw).

Set aside for a minute that the left and right disagree on exactly what dangers social media poses: The left generally argues that companies like Facebook and Twitter aren't doing enough to root out misinformation, extremism and hate on their platforms, while the right insists that tech companies are going so overboard in their content decisions that they're suppressing conservative political views.

Both sides have been putting forward — and in some cases, passing — state and federal rules that compel companies to change their ways. But the leading ideas from lawmakers on both sides are variously unworkable, unconstitutional, irrelevant and unserious, many of them betraying a profound ignorance about how the internet actually works. To see why, one need look no further than the ugly digital trail left by the man charged with killing 10 people in a racist mass shooting in Buffalo, N.Y., last week. As I'll explain, the suspect's online actions and what to do about them present deep complications for both Republicans' and Democrats' theories for how to fix the internet.

Let's start with the Republicans. Last year the governors of Florida and Texas signed laws prohibiting social media companies from "censoring" users, and Republican legislators in several other states are pushing similar ideas. The Texas and Florida laws were put on hold by U.S. District Court judges who said they could be unconstitutional, but this month a U.S. Court of Appeals reinstated the Texas rule without explanation; tech industry trade organizations have appealed to the Supreme Court to undo that decision.

I could spend this whole column cataloging all the ways these laws are terrible. As the U.S. District Court judges ruled, they seem to violate the tech companies' own First Amendment rights to host or not host certain content. The laws could prompt lots of frivolous lawsuits from people who feel they've been treated unkindly by tech companies. Both laws are capricious, applying only to sites that hit a certain arbitrary threshold of users — 100 million in Florida, 50 million in Texas. Florida's law even includes a carve-out for companies that run a theme park. (The law was signed back when Florida's Republicans were friendly with Disney; now they're trying to undo the Disney exemption.)

And the laws are dangerously overbroad. While Ron DeSantis and Greg Abbott, the governors of Florida and Texas, say they want to protect conservative points of view from liberal tech executives, the laws' texts seem to prohibit tech companies from removing or down-ranking all kinds of content that has nothing to do with electoral politics. Groups that oppose the laws say that tech companies wouldn't be able to remove posts promoting suicide, animal abuse, non-obscene nudity and much else that most users simply do not want to see when they open up Facebook in the morning.

Also, hate speech. The Buffalo suspect reportedly used Google Docs to post a long manifesto promoting his ideology and explaining his motivations for the attack. Over a period of months, he posted thousands of lines of remarks on Discord documenting his preparation for the shooting.

The Texas law allows tech companies to remove content that "directly incites criminal activity or consists of specific threats of violence." Over the past couple of days I slogged through much of the shooting suspect's digital leavings, and it's clear that some of his content met this threshold — but there was a whole lot of it that was ugly but not directly inciting violence. Under Republican rules, would platforms have the right to remove or refuse to amplify these ramblings? As bonkers as the "great replacement" theory is, could Facebook delete posts about it, or would it have to give it the same play as any other political viewpoint?

Democrats might have spent the past few days focusing on the message that Republicans' legislation could lead to the proliferation of hate speech online. Instead, several saw the Buffalo attack as an opportunity to push their own misguided efforts to legislate online speech.

Rep. Debbie Wasserman Schultz, D-Fla., called for revisiting Section 230 of the Communications Decency Act, the law that shields tech companies from liability for content posted by their users. It's an example of what I mean by unserious and irrelevant: As I've argued before, Republicans and Democrats seem to have latched on to repeal of this law as if it were a silver bullet for fixing the internet.

It isn't, because many content decisions by tech companies are protected by the First Amendment. What's more, many legal scholars say Section 230's repeal would have terrible chilling effects, intimidating platforms into pulling down lots of controversial content just to avoid litigation.

Meanwhile, Sen. Tim Kaine, D-Va., tweeted to denounce Big Tech's propagation of racist theories like the "great replacement."

"Who filled his head with this poison?" Kaine asked.

Big Tech wouldn't be my first answer. In fact, it's unclear what role "Big Tech" played in the shooting suspect's radicalization; his manifesto is more a product of smaller tech than big — specifically the free-for-all messaging site 4chan, where he says he came upon his racist ideology and from which he lifted many memes. And why blame Big Tech when the country's most popular cable news host and several Republican lawmakers have been openly flirting with great replacement ideology?

Even though Democrats and Republicans have opposing goals for moderating online content — one side wants more rules, the other wants fewer — both sides are advocating the same basic mechanism for fixing the internet: They want to give judges, government agencies and other officials the power to decide what tech companies and their users can and can't do online.

There are much less rash legislative ideas to try first — for instance, mandating greater transparency from social media companies so that we can better understand how and to what degree they are influencing the culture. Legislation introduced last year by Sens. Chris Coons, Amy Klobuchar and Rob Portman would require social networks to provide data to certain researchers that could shed light on social networks' content decisions and their effects. This could allow outside researchers to determine, say, whether platforms are applying their rules consistently across the political spectrum, or how the companies' algorithms are promoting — or downplaying — misinformation, extremist content and other toxic stuff online.

Sure, I don't like that a handful of tech companies have so much control over what happens in society. But opening the door to states or the federal government controlling online discourse is a far worse fate, one we must endeavor to avoid.