Experts Favor New US Agency To Govern AI
Long-time Slashdot reader Mr_Blank shares a report from Axios: AI experts at leading universities favor creating a federal "Department of AI" or a global regulator to govern artificial intelligence over leaving that to Congress, the White House or the private sector. That's the top-level finding of the new Axios-Generation Lab-Syracuse University AI Experts Survey of computer science professors from top U.S. research universities. The survey includes responses from 213 professors of computer science at 65 of the top 100 U.S. computer science programs, as defined by SCImago Journal rankings.
The survey found experts split over when or if AI will escape human control -- but unified in a view that the emerging technologies must be regulated. "Regulation" was the top response when asked what action would move AI in a positive direction. Just 1 in 6 said AI shouldn't or can't be regulated. Only a handful trust the private sector to self-regulate. About 1 in 5 predicted AI will "definitely" stay in human control. The rest were split between those saying AI will "probably" or "definitely" get out of human control and those saying "probably not." "No one individual is highly trusted to deal with AI issues," adds Axios. "President Biden took the top spot, with 9% of respondents -- slightly higher than Sundar Pichai, Elon Musk or Sam Altman. Mark Zuckerberg and Donald Trump drew 2% and 1%, respectively."
Give it a few decades (Score:1)
And this same class of enlightened fools in their ivory towers will be voting that AIs should be given human rights, demanding machines not be enslaved or discriminated against.
Re: (Score:2)
Re: (Score:2)
Ideally, treat creation of AGI/ASI as an act of treason
That's a great way to ensure China gets there first.
Re: (Score:2)
Re: (Score:2)
When the Chinese AI takes control of the party it will impose a zero child policy.
Don't regulate AI (Score:4, Interesting)
Re: (Score:1)
Those examples would implicate the maker of the software. OTOH we don't charge the maker of a knife used in a stabbing, so using the AI in a harmful manner falls on the perpetrator.
I think we need to go back to the distinction between strong AI, weak AI, and expert systems.
With our current expert systems I agree with you 100%. Tools are tools.
Moving towards weak AI should be little to no different.
Yet clearly the "experts" are even confusing strong AI in with all the software we have now.
This is horrible!
For one, strong AI absolutely will need to be regulated. It would need to be treated less like a tool or a knife, and more like nuclear fuel.
For two, strong AI is definitely a very long w
Re: (Score:2)
This sounds reasonable but I suppose the question is how, in legislative language, do you differentiate between these?
Re: (Score:2)
With our current expert systems
"Expert systems" were a hot technology in the late 1980s. Experts were interviewed, and their expertise was encoded in big "if this then that" tables.
Expert systems were mostly failures and led to the second AI Winter [wikipedia.org].
Modern AI, based on neural networks, is the exact opposite of that. Instead of asking "experts", they learn from big piles of examples.
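To make the contrast concrete, the 1980s rule-table approach described above can be sketched in a few lines. This is a toy illustration only; the rules and fact names are hypothetical, not from any real expert system.

```python
# Toy sketch of a 1980s-style expert system: hand-encoded
# "if this then that" rules, forward-chained until no new
# conclusions fire. No learning from examples anywhere.

RULES = [
    # (conditions that must all hold, conclusion to add)
    ({"has_fever", "has_cough"}, "suspect_flu"),
    ({"suspect_flu", "short_of_breath"}, "refer_to_doctor"),
]

def forward_chain(facts):
    """Repeatedly fire rules until the fact set stops growing."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(forward_chain({"has_fever", "has_cough", "short_of_breath"}))
```

Every bit of "expertise" lives in the hand-written RULES table, which is exactly why these systems were brittle: a modern neural network instead learns its behavior from training data, with no explicit rules at all.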
Re: (Score:3)
It's better to have the government regulating stuff, than to rely on individuals being able to sue huge corporations with massive legal budgets. That's what the government is for - to represent the interests of the people, when individuals and even groups of them are not powerful enough to do so by themselves.
The classic example is the racially biased AI. The victim of such an AI can sue the maker and the user of it, but there is considerable risk and cost involved in lawsuits. Having the government create
AI "experts" (Score:3)
Yet are they unaware of how the US government works?
In order to create a new agency, Congress has to authorize and fund it. After that it's the job of the Executive (you know, the White House) to run it and enforce its rules.
I know they're not the same and AI has a bunch of legitimate uses but I wonder how many of these folks had "Blockchain Expert" in their bios a few years ago?
Re: (Score:2)
Re: (Score:2)
7 out of 8 experts agree their policy fad is the most pressing one and in dire need of investment because profit$ and think of the children!
Also, some "experts" may have a vested interest in protecting certain companies by legislating a "moat" around them that helps them and hurts potential competitors.
Do you realize how ridiculous? (Score:3)
How do you regulate what people run on their computers? It is stupid and will lead to all kinds of abuse, selective enforcement, and violations.
Re: (Score:1)
> How do you regulate what people run on their computers.
Mislabel AI as "pirated movies" and it will get regulated pretty damned quickly.
Re:Do you realize how ridiculous? (Score:4, Insightful)
How do you regulate what people run on their computers. It is stupid and will lead to all kinds of abuse, selective enforcement, and violations.
That's the point.
Re: (Score:2)
How do you regulate what people run on their computers.
It isn't running AI they want to regulate. It is developing it.
The most advanced LLMs are not developed on home computers. They are developed in huge data centers with racks full of custom tensor processors, using millions of dollars in electricity.
There may be some breakthrough in either hardware or software that changes that, but right now, it looks like future generations of AI will come from big tech.
Re: (Score:2)
Re: (Score:2)
The most advanced LLMs are not developed on home computers. They are developed in huge data centers with racks full of custom tensor processors, using millions of dollars in electricity.
There is a clear trend line of rapidly declining costs and rapidly evolving capabilities in terms of training up models.
More importantly, nobody has to start from scratch. Models can be continuously adapted and improved by anyone, using far fewer computing resources than it took to train the base models.
There may be some breakthrough in either hardware or software that changes that, but right now, it looks like future generations of AI will come from big tech.
There are a series of continuous multi-faceted improvements across the board. All major CPU vendors have or are in the process of including AI instructions into their processors and a ton of new AI specif
Re: (Score:2)
There is also the question of international impact, so a department could work on rules, restrictions and technical standards, not only for the US but also in cooperation with EU and UK. Perhaps even the UN?
For example, I think we want to avoid a global AI arms race.
Re: (Score:2)
Already been happening for decades. For example, various governments produce standards for cryptography and random number generators. Both the products (the crypto implementations) and the users can get certified. The regulation part is mostly mandating that certain organizations are certified and regularly audited, to ensure they are secure.
I don't see any issue with requiring people who want to use AI for certain tasks, like say medical diagnostics, being required to get and maintain certification. We re
The Ministry Of Artificial Intelligence (Score:4, Funny)
Call the SWAT and confiscate these tools of evil!
An Agency Wouldn't Be Independent Of Government (Score:2)
AI experts at leading universities favor creating a federal "Department of AI" or a global regulator to govern artificial intelligence over leaving that to Congress, the White House or the private sector.
Creating a federal agency wouldn't remove governing responsibility from the White House or Congress. These agencies are created through laws enacted by Congress, which provide their authority to regulate in the first place. All creating an agency does is provide more manpower to deal with creating regulations, without needing an act of Congress for every investigation or regulatory update.
So while the article and perhaps the survey make it seem like the choice is between an Agency or Congress or White House
Re: An Agency Wouldn't Be Independent Of Government (Score:3)
Just call the agency "Turing" and make sure they have a lot of flexibility in their enforcement... and the power to create flexibility when required.
regulatory capture (Score:3)
This is an attempt to do an end run around regulation. Best to capture it from the start. I'm sure AI came up with this idea.
Re: (Score:2)
It is breathtaking that a bunch of AI "experts" in academia, who we must presume have advanced degrees, don't seem to have even a rudimentary understanding of how governments work.
Ah Yes the MIE (Score:2)
Of course they do! (Score:2)
Because control through AI is achieved by simply fiddling with the programming.
They don't have to go out and actually interface with real people.
"Plays with nukes and boogers" (Score:1)
Please keep the 2% and 1% survey takers AWAY from my neighborhood.
We don't need to regulate AI (Score:2)
We need to regulate how seriously people take inaccurate, erratic, unreliable, inconsistent auto-fill bots.
Those who can... (Score:3)
Those who can, do;
Those who can't do, teach;
Those who can't teach, regulate.
Re: (Score:3)
Those who can, do;
Those who can't do, teach;
Those who can't teach, regulate.
Those who can't regulate whine.
Spit take (Score:2)
"No one individual is highly trusted to deal with AI issues," adds Axios. "President Biden took the top spot,
Pffffffftttt!!
Hard to regulate something you don’t understand (Score:1)
This is textbook on how to secure monopoly (Score:3)
Turing Police (Score:3)
Great! (Score:3)
Skip Gov't agency and go straight to privatization (Score:3)
Experts favor jobs program for experts (Score:2)
And just who would staff this tax-payer funded agency?
Timnit Gebru? Whose nominally "technical" output is indistinguishable from the gibberish one sees in unabashedly-bullshit academic disciplines like Fat Studies?
What about that google dude who thought his chatbot was the voice of God speaking to him that was in the news last year? Why not, the more the merrier.
We'd have to reserve a slot for the alphabet mafia, too. Gotta make sure the chatbots and image generators are gay enough. I can see a CFR regardin
Maybe they could call it... (Score:2)
Maybe they could call it "AI Force". Like Space Force, it could be a branch of the US military. Its motto could be "To Infirmity And Beyond" and its mascot could be Buzz Lightmillenium.
The gov would hire Bender and Zuck (Score:2)
of course they do (Score:3)
There's already laws and regulations (Score:1)
Where's the report? (Score:2)
The summary page has lots of pretty graphics, but is light on detail. I want to see the report.
In particular, one of the universities listed is one where I did my PhD (funny, in an AI-related field). I would be fascinated to know which professors answered their questions. Bet: it's not the ones in the AI field. That said, I just cannot fathom many professors of CS - of any specialty - really calling for government regulation. Surely no one who has ever had to deal with federal regulations is going to expec