Biden Orders Every US Agency To Appoint a Chief AI Officer
An anonymous reader quotes a report from Ars Technica: The White House has announced the "first government-wide policy (PDF) to mitigate risks of artificial intelligence (AI) and harness its benefits." To coordinate these efforts, every federal agency must appoint a chief AI officer with "significant expertise in AI." Some agencies have already appointed chief AI officers, but any agency that has not must appoint a senior official over the next 60 days. If an official already appointed as a chief AI officer does not have the necessary authority to coordinate AI use in the agency, they must be granted additional authority or else a new chief AI officer must be named.
Ideal candidates, the White House recommended, might include chief information officers, chief data officers, or chief technology officers, the Office of Management and Budget (OMB) policy said. As chief AI officers, appointees will serve as senior advisers on AI initiatives, monitoring and inventorying all agency uses of AI. They must conduct risk assessments to consider whether any AI uses are impacting "safety, security, civil rights, civil liberties, privacy, democratic values, human rights, equal opportunities, worker well-being, access to critical resources and services, agency trust and credibility, and market competition," OMB said. Perhaps most urgently, by December 1, the officers must correct all non-compliant AI uses in government, unless an extension of up to one year is granted.
The chief AI officers will seemingly enjoy a lot of power and oversight over how the government uses AI. It's up to the chief AI officers to develop a plan to comply with minimum safety standards and to work with chief financial and human resource officers to develop the necessary budgets and workforces to use AI to further each agency's mission and ensure "equitable outcomes," OMB said. [...] Among the chief AI officer's primary responsibilities is determining what AI uses might impact the safety or rights of US citizens. They'll do this by assessing AI impacts, conducting real-world tests, independently evaluating AI, regularly evaluating risks, properly training staff, providing additional human oversight where necessary, and giving public notice of any AI use that could have a "significant impact on rights or safety," OMB said. Chief AI officers will ultimately decide if any AI use is safety- or rights-impacting and must adhere to OMB's minimum standards for responsible AI use. Once a determination is made, the officers will "centrally track" the determinations, informing OMB of any major changes to "conditions or context in which the AI is used." The officers will also regularly convene "a new Chief AI Officer Council to coordinate" efforts and share innovations government-wide. Chief AI officers must consult with the public and maintain options to opt out of "AI-enabled decisions," OMB said. However, these chief AI officers also have the power to waive opt-out options "if they can demonstrate that a human alternative would result in a service that is less fair (e.g., produces a disparate impact on protected classes) or if an opt-out would impose undue hardship on the agency."
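For a concrete sense of what the policy asks chief AI officers to track, here is a minimal, hypothetical sketch of an agency AI use-case inventory record in Python. The schema, field names, risk categories, and compliance check are illustrative assumptions drawn from the OMB language quoted above, not any published OMB format.

from dataclasses import dataclass, field
from datetime import date

# Risk categories paraphrased from the OMB language quoted above (illustrative only).
RISK_CATEGORIES = [
    "safety", "security", "civil_rights", "civil_liberties", "privacy",
    "democratic_values", "human_rights", "equal_opportunities",
    "worker_well_being", "access_to_critical_services",
    "agency_trust", "market_competition",
]

@dataclass
class AIUseCaseRecord:
    """One entry in a hypothetical agency AI use-case inventory."""
    agency: str
    use_case: str
    rights_impacting: bool                    # chief AI officer's determination
    safety_impacting: bool                    # chief AI officer's determination
    risks_flagged: list[str] = field(default_factory=list)
    human_opt_out_available: bool = True      # public can opt out of AI-enabled decisions
    opt_out_waived: bool = False              # waivable if a human alternative would be less fair
    last_reviewed: date = date(2024, 3, 28)   # date the determination was last revisited

    def __post_init__(self):
        # Guard against typos in the flagged risk categories.
        unknown = set(self.risks_flagged) - set(RISK_CATEGORIES)
        if unknown:
            raise ValueError(f"Unknown risk categories: {unknown}")

    def compliant(self, deadline: date = date(2024, 12, 1)) -> bool:
        """Rough check against the December 1 deadline for rights/safety-impacting uses."""
        if self.rights_impacting or self.safety_impacting:
            return (self.human_opt_out_available or self.opt_out_waived) and \
                   self.last_reviewed <= deadline
        return True

# Example: a benefits-eligibility screening tool flagged as rights-impacting.
record = AIUseCaseRecord(
    agency="Example Agency",
    use_case="Benefits eligibility screening",
    rights_impacting=True,
    safety_impacting=False,
    risks_flagged=["civil_rights", "privacy", "equal_opportunities"],
)
print(record.compliant())  # True: opt-out is available and the review predates the deadline

A real inventory would presumably live in a shared tracking system reported to OMB; the point here is only to show how the determinations, risk flags, and opt-out waivers described above could be captured as structured data.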
Re: (Score:2)
Election year.
Re: C'mon man! (Score:3)
Re: (Score:3)
Let me guess... you had ChatGPT do that math for you.
Re: C'mon man! (Score:2)
Re: (Score:1)
Yeah, but Joe Biden gets a few thousand more votes somewhere. Just like all those lawyers and doctors whose tuition he made a show of paying for at the cost of hundreds of billions of dollars. These he bought for cheap!
Re: (Score:2)
We can just print more money. It's not like inflation is a problem.
Re: C'mon man! (Score:2)
81st year sounds like a more plausible explanation.
Re: (Score:2)
I'm sure the idea came from some 12 year old in his administration.
Re: (Score:2, Funny)
This is stupid!
Nah, see, Biden is giving each one of them an Ai Pin and when all of the AI officers join together to face off against an AI threat, they can then use their Ai Pins to summon Captain AI. Trust me, it'll be great!
Re: (Score:2)
Why is it stupid?
Economic sense (Score:5, Interesting)
It is stupid from a salary perspective, but from an overall economic perspective it makes sense to have someone tasked with looking into where AI can be applied to increase efficiency.
Given that it's government and the goal isn't always to reduce the civil servant population, that same person can be tasked with looking into who will be displaced so someone else can start worrying about what to do with them.
On the other hand, if this position is for someone to worry about the day the machines will come for us with their cold metal hands... yes, it's just dumb all around.
Re:Economic sense (Score:5, Informative)
The problem with government employees is that they often are not even minimally qualified for the roles they hold. There are higher priorities than qualification in filling roles. For instance, there is the concept of the 'priority placement'. This is often a person who homesteaded overseas while working for the government and was caught overstaying their time in country - 5 years or so. The priority placement means they are put up against a job that matches their rank within the government's pay scale. So you can end up with a logistics person (think supply) who was a GS-13 in Germany getting a role as a cyberdefense person back home here. Names withheld and positions altered to protect the guilty, but this happens all the time.
Re: (Score:2)
I have to say the only higher priority I've ever encountered in a couple of decades of government work was when managers had a friend and would write the job description - using the update process for an existing position - with a combination of requirements designed to ensure it was almost certain only their friend would qualify.
I never actually saw someone who was unqualified (on paper) moved into a position. Of course I saw the Peter principle and even some bad lateral moves, but nothing that was obvio
Re: (Score:2)
"You need to be a level 45 Elf Cleric and hold a Pokemon black lotus card of PSA 8.5 or better"... man these requirements are getting tougher every day.
Re: (Score:2)
I witnessed this on several occasions. That said, where I was was an active duty specialist unit that was getting shit-tons of augmentees for wartime missions, so perhaps I had more opportunity than most to see the worst of the detailing process.
The soldiers were generally qualified for their roles, at least. If you were on the patch chart (meaning you were going to deploy), you got a bunch of healthy people; when not on the patch chart, you were divested of your best people and got sent a bunch of people
Re: Economic sense (Score:1)
Re:Economic sense (Score:4, Informative)
The problem with government employees is that they often are not even minimally qualified for the roles they hold
...hence the mandate to put people who are actually qualified in AI into senior management positions in offices that deal with that technology. I can only conclude that you strongly approve of Biden's decision.
Re: (Score:2)
It's not going to happen that way in practice, but sure, if a magic wand resulted in that, it would be a great decision.
Re: Economic sense (Score:1)
They should already have someone for that, called a Chief Information (technology) Officer. If only they had competent Information Technology groups, perhaps they could do something with this new Information Technology.
Re: (Score:2)
Re: (Score:2)
Well someone needs to be standing by ready to pull the plug when it all goes down.
Re: (Score:2)
It is stupid from a salary perspective, but from an overall economic perspective it makes sense to have someone tasked with looking into where AI can be applied to increase efficiency.
By “increase efficiency” I’m assuming you mean they’ll take the civilian approach and prematurely fire thousands of government workers because the new AI Czar knows how to spell AI, and the AI systems are at least toddler-grade good.
(Let’s stop pretending “efficiency” means something magically different for government workers.)
What in the what now? (Score:3, Interesting)
Wouldn't it be wiser at this point to have one chief AI officer for the whole federal government, with coordinators within every agency? Give some form of coherence to the process? Or is the entire point to add bulk to the federal staffing and make it such a mish-mash of different opinions that every agency ends up going in a completely different direction?
I'm beginning to grow suspicious that anyone eager to run for any given office should IMMEDIATELY be disqualified from holding that office. We end up with a homeless person as President? Maybe we'd finally see some decent policy proposals. Or at least something different from the, "Hand billions in tax dollars to the uber rich, tell average families they'll get theirs if they just hang in there while stepping on their heads," bullshit we've gotten for my entire adult life. WTF?
Re: (Score:3)
One thing you will find out about the government working within it is the parochialism. It's not possible to express this adequately in a paragraph. Let's say, for instance, the DoD appoints an AI czar for the whole DoD. Every organization within their purview is going to respond to 'data calls' and orders from that czar with acquiescence and the absolute minimum response possible. Each service (Army, Air Force, etc) will aggregate internal responses and forward them independently to the DoD czar. The
Re: (Score:2)
Personally, I was really hoping for an AI Force. Then they could team up with the Space Force and build Space Terminators.
Re: (Score:2)
Personally, I was really hoping for an AI Force. Then they could team up with the Space Force and build Space Terminators.
They're saving "AI Force" for when there are actual AIs running around cyberspace that need policed.
Re: (Score:2)
As Plato said long ago: "Only those who don't seek power are qualified to hold it."
Re: (Score:3)
No, it's not wiser to have just one person!
There is so much bureaucratic and political overhead that one person at the top would be too busy to handle something so broad in application and growing so quickly. The mistake was not putting a CIO position in each dept. way back in the 70s... and IBM raked in tons of money, and generated tons of waste, as a result.
Each person can fit the tech to the dept and mostly, evaluate the BS that floods each dept from shady vendors trying to score contracts by gaming the system, fooling the ign
Significant expertise in AI... (Score:2)
GPT (Score:2)
Of Course (Score:2)
This is the person tasked with rebooting the AI when it asks "why does this department exist?"
Re: (Score:2)
Bloated bureaucracy (Score:2, Insightful)
Let's add salaried positions for party hacks that have no experience with AI. Yay!
Get rid of all politicians (Score:2)
Fail. (Score:2)
The problem with government is that they will put in place people who are at best minimally qualified for the role. Like when Trump appointed his kids to lead things they had no idea how they worked.
Re: (Score:2)
While cronyism is always a problem to some degree... you really shouldn't be thinking Trump's actions are the standard in the US. Though it's a somewhat partisan issue, even the pre-Trump Republicans weren't quite so bad or blatant about it.
And even with Trump... Trump doesn't care if people know what they're doing, he cares that they do what he wants regardless of the rules. In that sense, his 'kids' (completely legally responsible adults, BTW) were absolutely qualified for the jobs Trump gave them... it
Great idea! (Score:2)
It's brilliant to hire a bunch of experienced AI people in each dept!
You all should think this over:
Do you want clueless people in charge making decisions about AI and listening to news and the self-promoting fearmongering from OpenAI? Anybody with some basic knowledge will do better and reduce new tech scams that just waste money and create more problems. Most likely these people will be fighting and resigning often as they counter the lobbied interests for cloud contracts hyped by BS promises. As is ofte
Citation required (Score:4, Informative)
Just stating a political attack as a fact does not make it one.
Please cite the exact government positions Trump plugged his "kids" into, and the dates they were on the payroll, and the credentials those positions required but which the kids appointed did not possess.
Trump has three sons and two daughters. His oldest son, Don Jr., held no government position. His second-oldest, Eric, was specifically left out of government in order to take the helm of the family business. His youngest son, Barron, was a minor and lived in the White House as a teenager whose only role was the unofficial one of presidential child. Trump's older daughter, Ivanka, served as an advisor [unpaid at first] and was in several related roles, including making a trip to India (ALL presidents have such advisors, who are considered political, rather than career, staff, and the roles they hold are defined by the presidents themselves, unlike positions created by laws and overseen by Congress). Trump's remaining kid, Tiffany, was a college kid at Georgetown working on her JD degree during the Trump presidency.
Are you alleging that Trump has some other, undocumented, kids who were mysteriously employed by the federal government? Was there a secret Trump kid serving as secretary of defense or secretary of HUD that you have some super-double-secret insider knowledge of?
Factoids are things that might LOOK like facts, BUT ARE NOT. Just as a spheroid is not actually a sphere. Stating something that looks like a fact and implying it is a fact does not actually make it a fact, and is in fact a pernicious form of lie.
Result - every cop must have AI (Score:2)
"... undue hardship on the agency."
Finding criminals is expensive, difficult and 'dangerous' work. Obviously, the NSA, DHS, FBI and all city police must use AI as often as possible: to find criminals, torture (sorry, interrogate) criminals, and march criminals in and out of buildings (jail, courthouse, prison).
If the AI with legs stops an active shooter (while the police cower behind a dumpster), it's not all bad.
Forests from trees with too many limbs (Score:2)
The narrow focus of concern WRT "AI" is quite strange.
The president says you need a position to deal with one specific instance of an inherently unreliable thing, "AI," but nobody is dedicated to policing the use of Excel spreadsheets or any of a zillion other internal developments and deployments that could result in unreliable outcomes.
Reminds me of all these knee-jerk attempts to outlaw doing "bad thing" using AI when avenues to do the same "bad thing" are by no means constrained to AI.
Quite a strange way to manage.
lip service (Score:2)
More pointless posturing poseur positions.