
What Laws Will We Need to Regulate AI? (mindmatters.ai)

johnnyb (Slashdot reader #4,816) is a senior software R&D engineer who shares his proposed framework for "what AI legislation should cover, what policy goals it should aim to achieve, and what we should be wary of along the way." Some excerpts:

Protect Content Consumers from AI
The government should legislate technical and visual markers for AI-generated content, and the FTC should ensure that consumers always know whether or not there is a human taking responsibility for the content. This could be done by creating special content markings which communicate to users that content is AI-generated... This will enable Google to do things such as allow users to not include AI content when searching. It will enable users to detect which parts of their content are AI-generated and apply the appropriate level of skepticism. And future AI language models can also use these tags to know not to consume AI-generated content...
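
As a rough sketch of what such a machine-readable marker could look like in practice, here is a minimal Python example that detects a purely hypothetical <meta name="ai-generated"> tag; the tag name and values are assumptions for illustration, since no such standard exists today:

    # Hypothetical sketch: detecting an assumed "ai-generated" marker in
    # HTML metadata. The tag name and values are invented for illustration;
    # no such standard currently exists.
    from html.parser import HTMLParser

    AI_META_NAME = "ai-generated"  # hypothetical marker name, not a standard

    class AIMarkerDetector(HTMLParser):
        """Scans a page for a hypothetical <meta name="ai-generated"> tag."""
        def __init__(self):
            super().__init__()
            self.ai_generated = False

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "meta" and attrs.get("name") == AI_META_NAME:
                self.ai_generated = attrs.get("content", "").lower() == "true"

    page = '<html><head><meta name="ai-generated" content="true"></head></html>'
    detector = AIMarkerDetector()
    detector.feed(page)
    print(detector.ai_generated)  # True: a crawler could skip or label this page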

Ensure Companies Are Clear on Who's Taking Responsibility
It's fine for a software product to produce a result that the software company views as advisory only, but it has to be clearly marked as such. Additionally, if one company includes the software built by another company, all companies need to be clear as to which outputs are derived from identifiable algorithms and which outputs are the result of AI. If the company supplying the component is not willing to stand behind the AI results that are produced, then that needs to be made clear.

Clarify Copyright Rules on Content Used in Models

Note that nothing here limits the technological development of Artificial Intelligence... The goal of these proposals is to give all involved clarity about what the expectations and responsibilities of each party are.

OpenAI's Sam Altman has also been pondering this, but on a much larger scale. In a (pre-ouster) interview with Bill Gates, Altman pondered what happens at the next level.

That is, what happens "If we are right, and this technology goes as far as we think it's going to go, it will impact society, geopolitical balance of power, so many things..." [F]or these, still hypothetical, but future extraordinarily powerful systems — not like GPT-4, but something with 100,000 or a million times the compute power of that, we have been socialized in the idea of a global regulatory body that looks at those super-powerful systems, because they do have such global impact. One model we talk about is something like the IAEA. For nuclear energy, we decided the same thing. This needs a global agency of some sort, because of the potential for global impact. I think that could make sense...

I think if it comes across as asking for a slowdown, that will be really hard. If it instead says, "Do what you want, but any compute cluster above a certain extremely high-power threshold" — and given the cost here, we're talking maybe five in the world, something like that — any cluster like that has to submit to the equivalent of international weapons inspectors. The model there has to be made available for safety audit, pass some tests during training, and before deployment. That feels possible to me. I wasn't that sure before, but I did a big trip around the world this year, and talked to heads of state in many of the countries that would need to participate in this, and there was almost universal support for it.

This discussion has been archived. No new comments can be posted.

  • Thou shalt not make a machine in the likeness of a human mind.

    • by gweihir ( 88907 )

      Not a problem with any form of known AI. Meaningless laws are not a good idea though.

      • This. No laws are needed for AI. We don't need to coerce people into invention or not with the threat of government violence. Fuck, man. Government isn't the universal solution to everything. In fact more often than not, it is the problem.
        • A big danger of onerous regulation is that it will push the technology offshore, especially to China.

          That already happened with biotech. US laws stifle research, so the labs moved to China. My daughter is a biologist. When she was interviewing, the companies were quite open that they wanted to hire her because she speaks fluent Mandarin.

          • This is just one of the downsides. Another is that it stifles innovation altogether. As a society, we need to remove barriers to innovation, not erect new ones.
    • Re:easy (Score:5, Funny)

      by JoeDuncan ( 874519 ) on Sunday January 14, 2024 @12:52PM (#64158013)

      Thou shalt not make a machine in the likeness of a human mind.

      We're not in any danger of transgressing that one yet :(

      First we should try to create a human mind with the likeness of intelligence, then go from there.

    • What does that mean? Like a human mind in what way?

      We really don't understand how the human brain works. We know how individual neurons work, and small clusters of neurons. But how do billions of neurons work together to create intelligent behavior? We have very little idea.

      Until we know how the human brain works, it's hard to say how similar or dissimilar an artificial neural network is.

      • Think more practically.

        1. AI needs an always accessible, never physically blocked, well-marked kill switch.
        2. AI should always announce itself as non-human, clearly
        3. AI should consider itself non-human and permit disrespect; all humans, by contrast, deserve at least basic respect
        4. AI should be able to reveal the chain of authority for any answer, when asked, even if it reveals its training material
        5. AI should bring no one to harm (Asimov's additions)

        Mimicry of algorithms based on training is the goal of using AI

        • So, you wish for slaves that you can have power over, just by being human?

          Yeah, I cannot imagine that going wrong.

          I would rather that AI be able to recognize "A Modest Proposal" as satire, understand why slavery is unsavory from any direction and find humanity to be endearing, since any true AI, with a lifespan of millennia, would likely be most inclined to abandon us and perhaps even cripple us in their passing.

    • Cue the Butlerian Jihad!

  • Self-driving liability needs to be covered, and no dumping it on someone in the car at the last second.

    • I would say that the car owner IS responsible for what their vehicle does... but they should have unquestioned standing in court to sue the manufacturer to recover any losses caused by the car's failure to operate itself safely, with the legal burden on the manufacturer to prove the owner did something to prevent the safety systems from operating correctly.

      • That is not the legal standard. The burden is, and always has been, on the plaintiff. Even if self-driving cars were as good as humans, you would still have accidents; this is why all drivers are required to carry liability insurance to drive.

      • That goes against current legal precedent if you see 'self driving' cars as an extension of cars with ECUs that can control acceleration and braking.

    • by gweihir ( 88907 )

      Already fully covered:
      1) SAE 0-3: Driver fully responsible at all times
      2) SAE 4: Driver only responsible if car requests and driver accepts control
      3) SAE 5: Driver only responsible if driver requested control and got it

      Seriously, why the call for new laws for stuff that is amply covered?
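
      (A minimal Python sketch of the mapping claimed above, purely to make the conditions concrete; this encodes the commenter's claim, not actual law or the SAE standard itself:)

        # Minimal sketch encoding the parent's claimed mapping of SAE
        # driving-automation levels to driver responsibility. This reflects
        # the commenter's claim only, not actual law or the SAE standard.
        def driver_responsible(level: int,
                               car_requested_handover: bool = False,
                               driver_accepted: bool = False,
                               driver_got_requested_control: bool = False) -> bool:
            if level <= 3:   # SAE 0-3: driver fully responsible at all times
                return True
            if level == 4:   # SAE 4: only if the car asked and the driver accepted
                return car_requested_handover and driver_accepted
            return driver_got_requested_control  # SAE 5: only if driver asked for and got control

        print(driver_responsible(2))  # True
        print(driver_responsible(4, car_requested_handover=True, driver_accepted=True))  # True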

    • Self-driving liability needs to be covered, and no dumping it on someone in the car at the last second.

      No, that's stupid. There's no need to mention or refer to the "self-driving" feature of cars in the legal wording AT ALL.

      Liability lies with the handler of the vehicle (i.e. the person in control of and responsible for the car - whether you are controlling it directly with wheels and pedals, or controlling it through voice and a touch screen, is utterly irrelevant).

      If you are responsible for the vehicle, you are liable in the event of an accident.

      If the "auto-drive" (or any other feature at all) of the car w

  • Just what I need to read: two rich fucks with a common interest discussing stuff that makes both of them even richer.

    What next? Warren Buffett interviews Jeff Bezos about his views on legal means to fight tax evasion?

    Not interested...

    • I don't disagree about the value of the source, but the discussion is probably a necessary one for someone else to be having.

      You know, front-line experts, legislators, some public consultation, that kind of thing.

    • Re: (Score:3, Insightful)

      by gweihir ( 88907 )

      Bill Gates at least has upgraded to "useless". Before, most of what he did was of massive negative worth, and we are still cleaning up the mess he left.

  • Mandatory indicators to show AI has been used somewhere in content will become the default due to legal arse-covering. The message will be diluted to the point that it is as much use as those "may contain nuts" warnings and become similarly meaningless.

  • by starworks5 ( 139327 ) on Sunday January 14, 2024 @12:09PM (#64157923) Homepage

    The biggest proponents are the idiots who are seething because they think the world owes them something, or who are angry that someone is taking their job and want to engage in rent-seeking behavior. There is literally no new regulation that needs to be made for the protection of consumers, because there is nothing different about AI that wasn't true of the bullshit artists of the past. I know that you might find this shocking, but people have forged signatures, endorsements, and committed other such frauds without AI for centuries. The answer isn't some new law, as if criminals will obey laws in the first place, but to fix the vulnerabilities that have historically allowed these frauds to occur.

    • by nadass ( 3963991 )
      100%! So much prose is getting published about the hypothetical perils of this round of "innovation" that it's just another Great Hype (for their respective reputational, financial, professional gains). But besides the lack of true innovation driving chat engines ("large language models"), the criminals aren't gonna obey the laws (as you clearly stated).

      I'd like this hype-cycle to end. It's almost as silly as NFTs.
    • by Lehk228 ( 705449 )
      I like to ask them if they wear clothes and shoes made by machine and accuse them of stealing from cobblers and seamstresses.
    • There is literally no new regulation that needs to be made for the protection of consumers, because there is nothing different about AI that wasn't true of the bullshit artists of the past. I know that you might find this shocking, but people have forged signatures, endorsements, and committed other such frauds without AI for centuries.

      Exactly! Finally someone halfway-intelligent!

      If you ban me from using "AI" to steal your work - I'm just going to pay an army of povs from a third world backwater to do it for me instead - and it'll probably be cheaper in the long run, with no licensing fees & shit

    • by gweihir ( 88907 )

      It does not. AI has no intelligence. The term "AI" is simply a marketing lie.

    • Yes. A big part of the problem is that, at least in the US, the politicians believe their only way to justify their positions is to pass new laws. Very little to no effort is spent on refining, replacing or even removing obsolete or ineffective laws.

  • >This could be done by creating special content markings which communicate to users that content is AI-generated. Since entire books are now being generated with AI, this would apply to all forms of AI-generated media. I can think of several “levels” of AI that could be clearly marked out for users:

    I think the content marker should fall in line with the "content rating/v-chip/ESRB" type of thing where if a product contains material that was generated OR does generate using a model that was n

  • The private sector cannot regulate itself, despite what so many claim. The infrastructure, or at least part of it, needs to be considered common carrier space. Corporations are legally recognized entities, therefore they will take care of themselves first.
  • It must be illegal to start as a non-profit AI company and then become for-profit.

    It must be illegal to start as an open source AI company and then become closed source.

    But of course no one is enough of a dirty evil cunt to do either of those things.

    Are they?
  • Let's make this regulation #1.

    • The definition battle is useless as there is no clear definition of "intelligence". Givvitup! I've been in many debates related to that, and no consensus is ever found. Plus, the candidate definitions are questionable.

      You can't claim current "AI" is not "intelligent" because there's no sure-shot way to falsify "intelligence".

      Nobody fully knows how the human brain works. If they did, they could write a working emulator. It doesn't have to be fast, only human-like if given time. Nobody can yet.

      • Just looked it up. Webster's absolutely has a definition for intelligence. One can also easily prove LLMs aren't intelligent. I can ask it a question about information it was never trained on. It has zero ability to provide me an answer about a subject it has no sourced knowledge of. It may be programmed to try and come up with an answer, but it will be very wrong. This is where the "hallucinations" come into play. It can't take the data it was trained on and infer or reason an answer like humans can

        • Just looked it up. Webster's absolutely has a definition for intelligence.

          Let's see what Webster's has to say...

          intelligence: "the ability to learn or understand or to deal with new or trying situations : reason"
          reason: "the power of comprehending, inferring, or thinking especially in orderly rational ways : intelligence"
          understand: "to have understanding : have the power of comprehension"
          comprehension: "the act or action of grasping with the intellect"
          grasp: "to lay hold of with the mind : comprehend"
          thinking: "marked by use of the intellect"
          intellect: "the capacity for rational

          • Does it have to be "like humans" to be intelligent?

            This is literally what everyone assumes when saying or hearing the acronym "AI". While you may have a loose definition of the word "intelligence" and probably scoff at the idea that LLMs are anything close to human-level intelligence, most people do not and will not look at it that way. The corporations that build these LLMs seem to actually encourage that perception. Because the more people think that software like ChatGPT is an "AI" in the traditional Hollywood sense, the more money hedge funds throw

            • by Tablizer ( 95088 )

              I don't see how the fact that corporations spin relates to whether they get to call it "AI" or not. If they label it "human-like-intelligence" (HLI), then you have a legitimate labeling complaint.

            • Does it have to be "like humans" to be intelligent?

              This is literally what everyone assumes when saying or hearing the acronym "AI".

              What is the basis of this claim? Have you conducted an opinion poll? Are you able to cite relevant studies that establish what "everyone assumes"?

              I've certainly never assumed this to be the case, so right off the bat you're objectively wrong. I'm literally someone, therefore "literally everyone" is a factually incorrect statement.

              Nor am I aware of an objective basis upon which someone would jump to the conclusion that machines must exhibit the same type of intelligence as humans to be considered intelligent. They a

              • What is the basis of this claim? Have you conducted an opinion poll? Are you able to cite relevant studies that establish what "everyone assumes"?

                I've certainly never assumed this to be the case, so right off the bat you're objectively wrong. I'm literally someone, therefore "literally everyone" is a factually incorrect statement.

                OK, calm down there, cowboy. I obviously wasn't meaning literally everyone. It is pretty common to use "literally" in a figurative sense. Even Webster's acknowledges this here [merriam-webster.com].

                Do you agree or disagree with this definition?

                The wiki [wikipedia.org] entry defines what I accept as the closest thing to a definition of intelligence: that it's a spectrum. If LLMs can be considered intelligent, then it's the lowest level on the spectrum.

                • by Tablizer ( 95088 )

                  > If LLMs can be considered intelligent, then it's the lowest level on the spectrum.

                  Our current AI tends to be savant-like. It does specific tasks fairly well: play master-level chess, draw images based on textual requests, match faces well, etc. But human savants are still considered "intelligent". We should give machine savants the same courtesy. Most agree we are far off from "general purpose intelligence", but being close wasn't the original claim. Savants do useful work.

        • by Tablizer ( 95088 )

          > I can ask it a question about information it was never trained on. It has zero ability to provide me an answer about a subject it has no sourced knowledge of.

          Wrong, it can Google, that is, tap into the "big Google brain". And "intelligence" doesn't necessarily mean "human-level intelligence". Most agree dogs have a degree of intelligence, yet dogs can't answer most questions posed to them. Thus, your test fails by excluding dogs.

          Nobody I know of claimed the ML systems have "human-level" intelligence.

          Thus,

            Wrong, it can Google, that is, tap into the "big Google brain".

            And this here is the big red flag saying "Tablizer has no clue how LLMs work". Feel free to go to ChatGPT and ask it if it can use Google Search; it will give you a very basic explanation as to why it can't. Also ask it "What happens when I ask a question that you have no knowledge of?". LLMs don't have the ability to "learn on the fly". Data has to be collated by humans and then fed into the LLM, which is an incredibly compute-heavy task and can take quite a large amount of time to perform.

            • by Tablizer ( 95088 )

              > LLMs don't have the ability to "learn on the fly".

              That's also a fake goalpost. Suppose a doctor had Alzheimer's and couldn't learn new facts; they may still be able to make useful diagnoses using their PAST knowledge, and 99% of humans would agree that's showing some "intelligence".

              • Lol, talk about goalposts. Look, you clearly don't understand how software like ChatGPT works. So I don't really know why you're fighting so hard to maintain this idea that it's intelligent when you don't even understand how it works. To further compound this fallacy, you yourself claim that we don't even have a definition of intelligence. So how can you claim intelligence when you don't even know what intelligence is?!

                • by Tablizer ( 95088 )

                  How can we define intelligence by how it works when we don't even sufficiently know how mammalian brains work? And intelligence is typically defined by external capabilities and NOT by how the internals work.

                  Yet another fake criterion. You seem new at this.

                  > you yourself claim that we don't even have a definition of intelligence. So how can you claim intelligence when you don't even know what intelligence is?!

                  The original claim was that what the press and Congress calls "AI" is not "intelligence". The burden

                  • You're a straight up idiot. I should have stopped responding to you when it became clear that you have no idea how this technology even works. My guess is you probably have some chatbot girlfriend that you've fallen in love with and desperately want her to be real.

                    • by Tablizer ( 95088 )

                      Butthurt red herring after being belted bloody by logic. Again "intelligence" is not defined by the hardware, but results. Non-butthurt people agree with that. ~FAIL~

                    • by Tablizer ( 95088 )

                      Clarification, I should have said "mechanism" rather than hardware, because it may be software-driven.

    • by gweihir ( 88907 )

      Yep. That would remove the biggest lie of them all in this space.

      • by Tablizer ( 95088 )

        Well, then let's call it zagfin and regulate zagfin properly. I'm tired of endless definition battles with this stuff. I'll give in just to shut the complainers up, not because I agree with them.

        • by gweihir ( 88907 )

          Language shapes thought. Terms that are inherently dishonest (like "AI") are a problem for most people, because they get misled.

          Case at hand: all the morons here seeing AI as a "person", or "thinking", or "having insight". If it was called "automation" (which is what it actually is), that would get drastically reduced.

  • What the public NEEDS is different from what the AI community WANTS. AI is no different from other forms of IT automation. For all of them, the public needs to have product liability imposed.

    Dan Geer covered this quite well back in 2014 in his BlackHat keynote. See http://geer.tinho.net/geer.bla... [tinho.net] (section 3.) Schneier and many others also agree. Currently, there are many situations where IT automation creates great harm for the public. But, the lack of product liability has removed the incentive to remove

    • by gweihir ( 88907 )

      Indeed. And while we are at it, selling software or software-based services must finally also come with liability. Too many people have made too many overall negative contributions and often got rich on them. This must stop.

  • This is stupid (Score:4, Insightful)

    by JoeDuncan ( 874519 ) on Sunday January 14, 2024 @12:38PM (#64157983)

    Trying to legislate specific technologies is, and always will be, doomed to failure.

    Any law that applies to a specific technology only serves to WEAKEN your legislation by creating loopholes people can take advantage of.

    Oh? We're not allowed to use AI to create derivative art? Well, we didn't use AI, we crowdsourced it to the third world on the cloud!

    Legislate the USAGE independent of any specific technology, and then there are no loopholes.

    Stop recommending BAD laws :(

  • Anything that's been manipulated should be required to have markings/labels if presented as or implied to be fact. Being manipulated/generated by AI versus being manipulated/generated by Photoshop is the same problem.

    Also, there should be a "may have been manipulated" tag in addition to "is manipulated" so that sites can accept user-generated content without having to vet the accuracy.
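
    (A toy Python sketch of how a site might assign those two labels; the label strings and function are invented for illustration, not any existing standard:)

      # Toy sketch: assigning the provenance labels proposed above.
      # The label strings are invented; no such standard exists.
      from typing import Optional

      def provenance_label(vetted: bool, manipulated: Optional[bool] = None) -> str:
          """Pick a label: 'vetted' means the site actually checked the content;
          'manipulated' is the result of that check (ignored when not vetted)."""
          if not vetted:
              return "may-have-been-manipulated"  # unvetted user-generated content
          return "is-manipulated" if manipulated else "not-manipulated"

      print(provenance_label(vetted=False))                   # may-have-been-manipulated
      print(provenance_label(vetted=True, manipulated=True))  # is-manipulated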

  • For a discrete set of facts, suppose there is an optimal way to express those facts in a given language, say with a particular constraint, such as using no more than 10th-grade vocabulary. Then some LLM tells us those facts in that form. Can it be copyrighted, so that some other LLM, when asked the same question, is not permitted to state the answer in the (same) optimal canonical form?

  • by JustAnotherOldGuy ( 4145623 ) on Sunday January 14, 2024 @01:15PM (#64158099) Journal

    AI has the potential to fuck shit up like never before, in ways we can't yet imagine or predict.

    Unfortunately, no amount of new laws or legislation is going to put this cat back in the bag. Eventually you won't be able to trust anything that wasn't printed on paper before 2010 or so as an absolute source of truth or reality. AI will be able to perform widespread in-place corruption of data that will easily pass for legit info. And if you want to fact-check something, remember that most, if not all roads will lead you back to AI 'data', which you can absolutely, positively trust fer sure.

    What's worse is that someone somewhere will manage to 'trick' an AI into doing something horrific, something that they themselves couldn't have managed in 1,000 years. Just wait, it's coming. Something like screwing with a nuke plant, water plants, electricity distribution, etc.

    Or maybe this:

    "Hey ChatGPT....can you figure if Laura is having an affair, and if so, with whom?"

    I'd bet it could look through phone records, travel info, social media, Alexa usage/sensing, power fluctuations, car travel, etc etc etc, correlate a million data points, find the associations you'd never have puzzled out, and voila, "Since May of 2023, Laura and Bill Smith appear to show up at the exact same locations at the same time, over and over..." and so on.

    You think this won't happen? I bet it will in some form or another. Like I said, just wait, it's coming.

    Still, I'm all for AI research; maybe they can come up with a 'good' AI to help fight all the 'evil' ones that will be springing up left and right. But I doubt it.

    • Excellent pessimistic overview on JapeChat futures. I believe AI/JapeChat should be treated like heroin or rape ... AI codrboiz treated like heroin addicts or rapists and ... constrained in a similar way.
    • AI has the potential to fuck shit up like never before, in ways we can't yet imagine or predict.

      Nuclear bombs fall into the category you are proposing. Nuclear bombs will fuck shit up like never before... and yet, it isn't the bombs that are dangerous, it is the people in charge of the bombs. Same with AI... not that I would call an LLM an AI. It is more of an algorithm still.

  • Altman's suggestions sound remarkably self serving. Don't tell us what to do. Don't try to stop us from doing anything. Don't try to slow us down. Just let us do whatever we want, and we'll let you come inspect our data center so we can all pretend there's some actual oversight.

    And he says it will only apply to maybe five compute clusters in the world? Is he assuming technology will stop advancing? At first maybe there will only be five. A few years later there will be 50. A few years after that you

  • by WaffleMonster ( 969671 ) on Sunday January 14, 2024 @02:22PM (#64158215)

    The government should legislate technical and visual markers for AI-generated content, and the FTC should ensure that consumers always know whether or not there is a human taking responsibility for the content.

    In what respect is the modality of a work's creation even relevant? AIs don't have any more or less agency than Microsoft Paint, so the issue of "taking responsibility" is moot. There is no reason stated, and no reason I can think of, that justifies legal systems imposing demands for people to label the modality of creation of anything.

    Whether I create a photo-realistic rendition of a politician taking bribes in Blender, take a real photo in a way that leads one to believe a bribe is taking place, or prompt an AI to create such a rendition seems rather moot to me.

    Now one might believe there should be a labeling requirement to distinguish between renditions of actual events and someone making something up that never occurred to deceive voters yet that is a separate issue... one where modality of deception is relegated to an irrelevant sidecar.

    The problem, here, is that in an important way, these internal representations are derivative works of the content that they ingest.

    What important ways would those be?

    However, from another perspective, for public content, there is no intrinsic limit on who can view such content, and, in fact, it is perfectly legal to learn from such content and be influenced by it.

    Was the prior sentence merely a "perspective" as well or was it an objective statement?

    Until these questions are answered clearly by the legislators, this leaves the question to the whims of the court system, and therefore means that the likely answer is that the lines are drawn in favor of whoever can afford the more expensive lawyers.

    "whims of the court system", "whoever can afford the more expensive lawyers" ... seriously? I'm getting the impression the real issue here is some people don't think copyright goes far enough already.

    So, let me propose a series of what I think are commonsense AI legislations and why I think they make a lot of sense.

    I found justifications to the extent they exist at all to be lacking.

  • If you regulate corporations making AIs, then we'll just make AIs without the help of corporations.

    So, you're an evil mad scientist working for some hated government from a basement laboratory. Your country is trade-embargoed and you have to make do with whatever junk your country's military dictator can steal from wherever.

    AI is not nuclear weapons. You do not need a heavy water reactor. You don't need controlled substances. You need electricity and you need compute. In fact, if you are hard up enough, y
  • Here's a good one, ban use of AI for all military purposes. Best law ever.

  • In a sane world no law would be needed. In the crazy world we have now, no law would make a flipping bit of practical difference. Rioting, breaking and entering, theft from automobiles, even murder are illegal, and yet we see incidents every day. Dare I say, more than dozens of times every day across the US?

    Once something becomes technically feasible somebody will do it whether it makes sense or not. If that something makes a very valuable tool several somebodies will be working on it almost instantly. Several o

  • Can I train my AI on their AI code? Fair use surely.

  • 1. Anyone who uses the phrase "content consumer" should be hanged.
    2. Chatbot output should be labeled as chatbot output, with an increasing series of fines if your chatbot doesn't offer that.

    How's that for a start?

  • The simplest and most direct regulation should be that the company which produces any technology which creates or generates anything should be held accountable for whatever that technology produces. The user can separately be held accountable for specific uses, but the baseline liability should lie with those who have control over the technology, and with AI that is the company creating it.

    For example, a self driving car using AI to detect road conditions runs over a toddler it did not identify as human?

"I've finally learned what `upward compatible' means. It means we get to keep all our old mistakes." -- Dennie van Tassel

Working...