Opinion editor's note: Editorials represent the opinions of the Star Tribune Editorial Board, which operates independently from the newsroom.

•••

Among the small ironies of the recent debt limit debate in Washington has been a simultaneous surge of interest in expanding the federal government's reach with a new regulatory agency.

Such an agency, if brought to life, would seek to corral the threats of fast-moving technology, especially artificial intelligence. The interest is at least partly bipartisan, if a recent hearing before the Senate Judiciary Subcommittee on Privacy, Technology and the Law is evidence.

The star witness at that hearing was Sam Altman, the chief executive of OpenAI, a startup company whose ChatGPT generative language service is estimated to have become the fastest-growing consumer application in history. Altman wants regulation. The pace at which Congress can accommodate him is likely to underscore its inability to stay on top of technology trends. Better to form an agency with rule-making flexibility and focused investigative authority.

There seems to be a sense in some circles that the current moment is a second chance to do right — that society blew it by letting the internet become what it is.

We'd quibble with that. We'd bet that if objective measures could be taken, people would be found in the aggregate to have benefited from the internet far more than they've suffered from it.

But clearly there has been a cost to laissez-faire. An example can be found in the U.S. surgeon general's advisory last week warning that social media comes with a "profound risk of harm to the mental health and well-being of children and adolescents."

We've also written about the flaws of large language models, which generate text on command and are the most prominent consumer-facing applications of artificial intelligence. In short, expect them to contribute to the spread of misinformation if users don't take steps to stop it.

But that's just the tip of the iceberg. Intellectual property, privacy and economic stability are other interests tangibly threatened by the advance of AI.

Which roars forward nonetheless. Investment money is pouring into companies perceived to be in a position to benefit, with shares in one — the chipmaker Nvidia — advancing nearly 30% overnight after an earnings report last week. AI is spoken of as an epic advance on par with the agricultural and industrial revolutions. (Caveat emptor on that.)

In the face of this, what most people muster are vague principles: AI should be transparent. Its benefits should be distributed equitably. It shouldn't discriminate.

But Congress is ill-equipped to directly address those concerns and others, and the collective weight of tech issues has become such that attention should no longer be scattered among existing agencies. The creation of a nimble and dedicated watchdog is, in our view, inevitable. A starting point is the Digital Platform Commission Act of 2023 from U.S. Sens. Michael Bennet, D-Colo., and Peter Welch, D-Vt. It would form a Federal Digital Platform Commission.

Oversight agencies do not come without drawbacks. A leading fear is regulatory capture, in which cozy relationships lead to decisions that are better for interest groups than for society.

The direct cost of adding an agency is actually a blip in the broad scope of the federal government. The debt ceiling deal reached over the weekend by President Joe Biden and House Speaker Kevin McCarthy — which must still muster support in Congress before the nation is spared a gratuitous default — would cut spending by about $55 billion next year, according to a New York Times analysis. By comparison, the commission proposed by Bennet and Welch would have a beginning appropriation of $100 million a year, ramping up to $500 million. Some regulatory agencies also are able to offset their appropriations by charging fees to industry stakeholders, thereby becoming deficit-neutral.

A greater concern is the durability of agency power. Just last week, the U.S. Supreme Court curtailed Environmental Protection Agency jurisdiction over wetlands. The ruling was technical in nature, but the undercurrent — found in a concurring opinion by Justice Clarence Thomas — was an aversion to the "serious expansion of federal authority." The court also announced this month that it will hear a case that may give it the opportunity to reconsider the doctrine, known as Chevron deference, that allows regulatory agencies to interpret vague legislation — something Thomas and others on the court have long hoped to revisit.

Nothing much rides on this — just the government's ability to function in the modern era. It's an interesting thought when the pressing question of recent weeks has been whether it functions at all.