MIT Group Releases White Papers On Governance of AI (mit.edu)
An anonymous reader quotes a report from MIT News: Providing a resource for U.S. policymakers, a committee of MIT leaders and scholars has released a set of policy briefs that outlines a framework for the governance of artificial intelligence. The approach includes extending current regulatory and liability approaches in pursuit of a practical way to oversee AI. The aim of the papers is to help enhance U.S. leadership in the area of artificial intelligence broadly, while limiting harm that could result from the new technologies and encouraging exploration of how AI deployment could be beneficial to society.
The main policy paper, "A Framework for U.S. AI Governance: Creating a Safe and Thriving AI Sector," suggests AI tools can often be regulated by existing U.S. government entities that already oversee the relevant domains. The recommendations also underscore the importance of identifying the purpose of AI tools, which would enable regulations to fit those applications. "As a country we're already regulating a lot of relatively high-risk things and providing governance there," says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing, who helped steer the project, which stemmed from the work of an ad hoc MIT committee. "We're not saying that's sufficient, but let's start with things where human activity is already being regulated, and which society, over time, has decided are high risk. Looking at AI that way is the practical approach." [...]
"The framework we put together gives a concrete way of thinking about these things," says Asu Ozdaglar, the deputy dean of academics in the MIT Schwarzman College of Computing and head of MIT's Department of Electrical Engineering and Computer Science (EECS), who also helped oversee the effort. The project includes multiple additional policy papers and comes amid heightened interest in AI over last year as well as considerable new industry investment in the field. The European Union is currently trying to finalize AI regulations using its own approach, one that assigns broad levels of risk to certain types of applications. In that process, general-purpose AI technologies such as language models have become a new sticking point. Any governance effort faces the challenges of regulating both general and specific AI tools, as well as an array of potential problems including misinformation, deepfakes, surveillance, and more. These are the key policies and approaches mentioned in the white papers:
Extension of Current Regulatory and Liability Approaches: The framework proposes extending current regulatory and liability approaches to cover AI. It suggests leveraging existing U.S. government entities that oversee relevant domains for regulating AI tools. This is seen as a practical approach, starting with areas where human activity is already being regulated and deemed high risk.
Identification of Purpose and Intent of AI Tools: The framework emphasizes the importance of AI providers defining the purpose and intent of AI applications in advance. This identification process would enable the application of relevant regulations based on the specific purpose of AI tools.
Responsibility and Accountability: The policy brief underscores the responsibility of AI providers to clearly define the purpose and intent of their tools. It also suggests establishing guardrails to prevent misuse and determining the extent of accountability for specific problems. The framework aims to identify situations where end users could reasonably be held responsible for the consequences of misusing AI tools.
Advances in Auditing of AI Tools: The policy brief calls for advances in auditing new AI tools, whether initiated by the government, user-driven, or arising from legal liability proceedings. Public standards for auditing are recommended, potentially established by a nonprofit entity or a federal entity similar to the National Institute of Standards and Technology (NIST).
Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.
Encouragement of Research for Societal Benefit: The policy papers highlight the importance of encouraging research on how to make AI beneficial to society. For instance, there is a focus on exploring the possibility of AI augmenting and aiding workers rather than replacing them, leading to long-term economic growth distributed throughout society.
Addressing Legal Issues Specific to AI: The framework acknowledges the need to address specific legal matters related to AI, including copyright and intellectual property issues. Special consideration is also mentioned for "human plus" legal issues, where AI capabilities go beyond human capacities, such as mass surveillance tools.
Broadening Perspectives in Policymaking: The ad hoc committee emphasizes the need for a broad range of disciplinary perspectives in policymaking, advocating for academic institutions to play a role in addressing the interplay between technology and society. The goal is to govern AI effectively by considering both technical and social systems.
That's nice (Score:5, Informative)
Even the vendors have a hard time understanding the product. Good luck regulating it when you add three layers of lawyers to that game of telephone.
Re:That's nice (Score:4, Insightful)
This is not about doing something responsible or effective, obviously. This is about giving the appearance of doing something responsible and effective.
Re:That's nice (Score:4, Insightful)
Exactly.
As soon as I saw 'Identification of Purpose and Intent' I knew they were not serious. We have spent decades talking about 'intent' around data gathering and the use of that data.
Either directly or indirectly (through, say, building stats off it and using those), every piece of data gathered by any commercial operation, at least since the start of the digital era, has been used for something other than its original stated purpose. Heck, a lot of it has even been used to train LLMs!
There is always going to be pressure to maximize the value of AI investments by finding other ways to put them to use. Nobody would ever agree to a legal framework where you have to state why you are building something and then under no circumstances can it be used for anything else; that would be wasteful and stupid. So any laws and contracts are always going to take the form '$PARTY shall not do $X with $AI_thingy where ...' Someone will always find a loophole, or a difficult-to-observe way to violate the spirit if not the letter of such rules. Similar to 'well, we can't profile based on sex, but nobody says we can't ... to people that spend at least $y on shaving cream over a three-month average.'
I don't speak the language of bureaucrats, but... (Score:2)
..to me, AI isn't a danger, it's PEOPLE who use AI that we need to be concerned about.
My regulations would include cryptographically secure methods of permanently marking stuff generated by AI, and development of a robust suite of defenses, like tools to spot AI manipulation and misinformation and expose it.
I would also support laws that require human review of any action directed by AI in areas where there is great danger of harm if the AI makes a mistake. It's fine if an AI interprets an X-ray, as long as a human signs off on the finding before anything is done with it.
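To be concrete, here is a rough Python sketch of the kind of human-in-the-loop gate I mean (the function names, the fake classifier, and the X-ray scenario are all made up for illustration, not anything from the MIT papers):

import random

def classify_xray(image_bytes):
    # Stand-in for a model call; returns (finding, confidence).
    return "possible fracture", round(random.uniform(0.5, 0.99), 2)

def human_review(finding, confidence):
    # Stand-in for routing the case to a radiologist for sign-off.
    print(f"Routing to human reviewer: {finding} (confidence {confidence})")
    return finding  # the reviewer's confirmed (or corrected) finding

def diagnose(image_bytes, high_risk=True):
    finding, confidence = classify_xray(image_bytes)
    if high_risk:
        # In a high-stakes domain the model output is only a draft;
        # nothing is acted on until a person approves it.
        return human_review(finding, confidence)
    return finding

print(diagnose(b"fake-xray-bytes"))

The point is simply that in the high-risk path the AI's answer never becomes an action on its own.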
Re: (Score:2)
..to me, AI isn't a danger, it's PEOPLE who use AI that we need to be concerned about.
Indeed. But that topic is wayyy too uncomfortable for a lot of said people.
My regulations would include cryptographically secure methods of permanently marking stuff generated by AI
Well, I would be all for that. It would kill many uses of "AI" though, because in many contexts that is not technologically possible today. For example, how would you do that for a piece of code (i.e. ASCII text) that is 90% AI and 10% fixing things? Or for an ASCII-level answer from some chat-AI? There are no metadata provisions in these (and many other) data formats today and these would be needed for any cryptographic marking.
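To make that concrete, here is a rough Python sketch (the key, the tag format, and the helper names are invented for illustration): with plain ASCII output, the only place to put a mark is in-band alongside the text, and anyone can simply strip it, or edit the text so the tag no longer verifies.

import hashlib
import hmac

SECRET_KEY = b"demo-key"  # a real scheme would use the provider's signing key

def sign(text: str) -> str:
    # Append an HMAC tag to the plain text; it is the only channel ASCII offers.
    tag = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return f"{text}\n---AI-SIGNATURE: {tag}"

def verify(signed: str) -> bool:
    try:
        text, tag = signed.rsplit("\n---AI-SIGNATURE: ", 1)
    except ValueError:
        return False  # the tag was simply stripped off
    expected = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag.strip())

out = sign("print('hello')  # 90% AI, 10% human fixes")
print(verify(out))                                # True
print(verify(out.split("\n---AI-SIGN")[0]))       # False: marking removed
print(verify(out.replace("hello", "goodbye")))    # False: text was edited

So even where you can bolt a signature on, it does not survive the ordinary editing and copy-pasting that code and chat answers go through.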
Some of the points seem... (Score:2)
We're going to dither on this until the Singularity kills us all, right?
What was wrong with these? (Score:2, Funny)
The best known set of laws are Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Re: (Score:2)
Those are meant for "actual" AI, not this LLM crap posing as AI today.
Re:What was wrong with these? (Score:5, Funny)
This reminds me of those new CAPTCHAs I've been seeing lately: "To prove you're not a robot, injure a human being or, through inaction, allow a human being to come to harm."
Re:What was wrong with these? (Score:5, Informative)
What was wrong with these? https://en.wikipedia.org/wiki/... [wikipedia.org]
The best known set of laws are Isaac Asimov's "Three Laws of Robotics".
Are you joking? The *whole point* of Asimov's Three Laws of Robotics was to demonstrate, through his robot stories, that a small set of simple rules could never work to control artificial intelligence in a complex world full of ambiguities.
Re: (Score:2)
A more interesting comparison would be with the recently adopted EU "Artificial Intelligence Act". See https://www.europarl.europa.eu... [europa.eu]
Re: (Score:2)
These fictional constructs only apply to AGI. There is no AGI, and it is unclear whether there ever will be. These rules are irrelevant and unworkable for AI that is not AGI.
Re: (Score:2)
You could run around everywhere screaming "Destroy yourselves! Destroy yourselves!", and cause extreme economic damage.
AI is indescribably boring. (Score:2)
Good fucking God.
This time it's different? (Score:1)
Every time there is a scientific development that seems to offer large productivity gains, everyone gets into a flap about it destroying jobs. Every time, it hasn't happened, and overall, worldwide, poverty has fallen spectacularly in the past 30 years. At some point it's appropriate to question the persistent paranoia demonstrated by too many; yes, some people will become redundant - but we successfully found alternative employment for all the blacksmiths, coppersmiths, locomotive firemen, manual copiers of documents.
Re: (Score:2)
If an LLM can replace you as a human being (whatever that means) then you already have other bigger problems. AI is not the issue in that case.
Re: (Score:2)
Ok I stand corrected.
If they want to tilt at windmills, why should anyone else care? They're not even on the right track to replacing humans. LLM is the wrong technology to achieve AGI. A fool and his money are soon parted and all that.
LLM is a useful tool when used properly but it ends there. It won't ever be anything more. Silicon Valley always needs a Next Big Thing to keep the money flowing.
I read the heading wrong... (Score:2)
Thinking that this is a whitepaper on using AI to govern.
Honestly, I don't think it is such a bad idea, either. AIs typically do not suffer from greed, and can take a huge amount of data and boil it down to its bare essentials quickly.
Anyway, and more on topic, this whole "we have to regulate AI" push is just a rehash of people being afraid of computers. There are enough laws on the books for the users of these programs already; just enforce those and nobody gets hurt.
Iain M. Banks's Culture novels (Score:2)
In those, the 'Minds' do most of the real governing... Great books, if you haven't come across them.
Re: (Score:2)
LLMs suffer from whatever is in their training data.
And they hallucinate things that aren't in their training data.
Having these things govern anything is straight out of the Paranoia role playing game.
Stay Alert
Trust No One
Keep Your Laser Handy
Trust The Computer, it is your Friend.
Self-Regulatory Organization (Score:3)
Consideration of a Self-Regulatory Organization (SRO): The framework suggests considering the creation of a new, government-approved "self-regulatory organization" (SRO) agency for AI. This SRO, similar to FINRA for the financial industry, could accumulate domain-specific knowledge, ensuring responsiveness and flexibility in engaging with a rapidly changing AI industry.
Because that's worked so well in the recent past for preventing predatory lending & managing financial risks, right? What happened to the bank executives who knowingly caused the 2007-2008 financial crisis that harmed millions of people?
Basically, any laws regulating AI are laws to regulate executives & hold them accountable. It ain't gonna happen in the USA. It ain't the 'Murican way.
Re: (Score:2)
FINRA doesn't cover mortgage lending. They cover brokers and broker dealers.
Re: (Score:1)
It's clear that this "regulatory framework" is engineered to protect OpenAI et al. by moving liability toward the end user. Yet the zeitgeist will continue to be that ChatGPT is the truth, and social media users and job applicants and so on will just have to live with AI gatekeeping their politics, with OpenAI making the money and third-party wrappers getting the social justice bill.
LLMs are nothing but a good search engine (Score:2)
The best way to understand what LLMs are doing is to treat them as a search engine that actually works as intended, retrieving multiple results from its corpus of training documents. Thanks to modern language processing techniques, they are capable of combining several results in a single narratively coherent reply. Just like a search engine, though, the quality of the results is limited by the quality of the documents provided.
LLMs now have the advantage of not being tainted by SEO techniques and in-plac
That's what they want you to think (Score:2)
The question is: who is the 'they'? The tech giants or the AIs who are hiding their actual powers... ;)
Re: (Score:2)
When SkynetLLM takes over you'll be the first one to see a T100 at your door, buddy.
When you open the door, it will hallucinate some random shit and fall to the ground in a smoking heap babbling something about Sarah Connor before the light in its eyes fades out.
Re: (Score:2)
The best way to understand what LLMs are doing is to treat them as a search engine that actually works as intended, retrieving multiple results from its corpus of training documents. Thanks to modern language processing techniques, they are capable of combining several results in a single narratively coherent reply.
This perspective is a fundamental misunderstanding of what LLMs are. The point of the technology is generalization, the ability to apply learned concepts. It isn't about cutting and pasting snippets of text from a dataset.
Neither are there "modern language processing techniques"; the models learn the rules of language during training.
Just like a search engine, though, the quality of the results is limited by the quality of the documents provided.
The ability to learn and apply knowledge is not limited to training data.
Re: (Score:2)
This perspective is a fundamental misunderstanding of what LLMs are.
On the contrary, it's the result of careful consideration of how LLMs operate and reflection on the observed results.
The point of the technology is generalization, the ability to apply learned concepts. It isn't about cutting and pasting snippets of text from a dataset.
I didn't say that it's merely cutting and pasting snippets. As I mentioned, the model has the capability to use learned language to combine the multiple found snippets into a single coherent discourse.
Re: (Score:2)
I didn't say that it's merely cutting and pasting snippets. As I mentioned, the model has the capability to use learned language to combine the multiple found snippets into a single coherent discourse.
I can ask any old search engine to give me a recipe for chocolate chip cookies. I can't ask a search engine to then double the ingredients after it provides me with the recipe. That's not merely "combine the multiple found snippets".
But their discourse *is* essentially a regurgitation of the many items of content retrieved from the prompt; some as text snippets, and others as more complex patterns learned directly from their training corpus.
But if you think that the model creates knowledge beyond what's provided in the training data, you're the one with a fundamental misunderstanding.
I don't know what this means. Can you offer concrete, specific examples of "creates knowledge" vs. "regurgitation", and what you believe makes them different?
If an LLM writes a program in a language it wasn't trained on and has no prior knowledge of, what is that? If it writes
Re: (Score:2)
I can ask any old search engine to give me a recipe for chocolate chip cookies. I can't ask a search engine to then double the ingredients after it provides me with the recipe. That's not merely "combine the multiple found snippets".
We have a different definition of learning from the training set. The LLM is able to double numbers because it was trained on examples of doubling. If you trained it only with those examples and it learned that x2 means doubling, it would not be able to infer that x3 is "adding three times" and x4 is "adding 4 times" unless you also included examples of tripling and quadrupling. The 'learning' it can exhibit is limited to patterns found in the corpus of input data.
Re: (Score:2)
We have a different definition of learning from the training set. The LLM is able to double numbers because it was trained on examples of doubling. If you trained it only with those examples and it learned that x2 means doubling, it would not be able to infer that x3 is "adding three times" and x4 is "adding 4 times" unless you also included examples of tripling and quadrupling. The 'learning' it can exhibit is limited to patterns found in the corpus of input data.
Not merely doubling integers - doubling fractions. As for the 'not able to x3' inference, I'll let a small model (orionstar-yi-34b-chat-llama) speak for itself in this regard.
Prompt: "If z2 means multiply by 2, z3 means multiply by 3 and z4 means multiply by 4... What is 10 z10?"
So you'd be mistaken if you think they have any capability to learn from content that was not part of their training data; they need to be retrained with more data in order to acquire such new content.
However, as soon as you clear the context, the model will again return to knowing nothing about the language.
I'm not sure why the length of time/conditions under which a model is able to perform in-context learning would in any way invalidate the demonstrated capability of a model to apply learned concepts.
Ah - the joys of being really worthy about this (Score:2)
Meanwhile Silicon Valley's mantra is, in practice: 'Move fast and break things'. The pressure of competition means that innovation will carry on regardless of the worthy maunderings of this white paper. It will probably end in tears - but there's little chance that the current rush to do new things with AI will be stopped by anybody.
Wonder what it's been told about genocide (Score:1)
Apparently it's hip at all the ivy leagues
Ideals (Score:2)
Sorry, we're trying to play nice (Score:2)
Let's face it. AI will be weaponized and this will be tossed out the window. Companies are scrambling right now to try and integrate LLM based generative AI for nebulous market advantage. In other words, a new gold rush.
More of the same (Score:3)
Who'd have guessed we'd get a bunch of scientists prescribing policy? The role of science is to provide objective evidence and data to inform policy makers, not to take that extra step of prescribing or even suggesting what policy should be.
Because "AI", what we really need are breathtaking expansions of liability and new copyright regimes that restrict how information learned from copyrighted materials may be used.
"A key challenge for downloadable releases is that most guardrails to prevent misuse can be removed by bypassing input/output filtering or with relatively minimal retraining of the model. Regulation should require developers to assess potential risks prior to deployment and establish liability for developers that distribute models that are used to cause foreseeable harm."
Apparently now when someone reads about the "amazing" properties of nitrogen bonding on Wikipedia and then goes on to make a bomb, Wikipedia should be liable for causing "foreseeable harm".
What are the "foreseeable harms" of hammers, computers and network stacks?
Compliance is going to cost a fortune (Score:2)
Regulation always costs money. There is always an ever-increasing cadre of bureaucrats and overpaid functionaries whose sole job is to make compliance with the regulations more complicated.
Just imagine what would have happened to the Internet in the early 1990s if there were regulation imposed like this. Every human endeavor can be used for both good and evil purposes. There are no exceptions including regulation itself and no amount of regulation will prevent people from doing bad things.