Electronic Frontier Foundation AI Software The Military United States Technology

EFF: Google Should Not Help the US Military Build Unaccountable AI Systems (eff.org) 110

The Electronic Frontier Foundation's Peter Eckersley writes: Yesterday, The New York Times reported that there is widespread unrest amongst Google's employees about the company's work on a U.S. military project called "Project Maven." Google has claimed that its work on Maven is for "non-offensive uses only," but it seems that the company is building computer vision systems to flag objects and people seen by military drones for human review. This may in some cases lead to subsequent targeting by missile strikes. EFF has been mulling the ethical implications of such contracts, and we have some advice for Google and other tech companies that are considering building military AI systems.
The EFF lists several "starting point" questions that any company, or any worker, considering whether to work with the military on a project with potentially dangerous or risky AI applications should be asking:

1. Is it possible to create strong and binding international institutions or agreements that define acceptable military uses and limitations in the use of AI? While this is not an easy task, the current lack of such structures is troubling. There are serious and potentially destabilizing impacts from deploying AI in any military setting not clearly governed by settled rules of war. The use of AI in potential target identification processes is one clear category of uses that must be governed by law.
2. Is there a robust process for studying and mitigating the safety and geopolitical stability problems that could result from the deployment of military AI? Does this process apply before work commences, along the development pathway, and after deployment? Could it incorporate sufficient expertise to address subtle and complex technical problems? And would those leading the process have sufficient independence and authority to ensure that it can check companies' and military agencies' decisions?
3. Are the contracting agencies willing to commit to not using AI for autonomous offensive weapons? Or to ensuring that any defensive autonomous systems are carefully engineered to avoid risks of accidental harm or conflict escalation? Are present testing and formal verification methods adequate for that task?
4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate? For example, while Alphabet's AI-focused subsidiary DeepMind has committed to independent ethics review, we are not aware of similar commitments from Google itself. Given this letter, we are concerned that the internal transparency, review, and discussion of Project Maven inside Google was inadequate. Any project review process must be transparent, informed, and independent. While it remains difficult to ensure that that is the case, without such independent oversight, a project runs real risk of harm.
This discussion has been archived. No new comments can be posted.

EFF: Google Should Not Help the US Military Build Unaccountable AI Systems

Comments Filter:
  • Screw EFF (Score:5, Funny)

    by Marlin Schwanke ( 3574769 ) on Thursday April 05, 2018 @11:38PM (#56390815)
    We’re all working really hard to burn the world down. How dare that pack of libtard snowflakes get in the way of our fun. Bring on the SkyNet!
    • Re: (Score:2, Insightful)

      by Anonymous Coward

      Well, if you can get China and Russia to stop improving their own militaries, then by all means don't let the US do it. There is a big fallacy today that says if the US didn't build advanced weapons then there would be no reason for others to create their own. The world is heading towards a meltdown and I want the US to have the weapons needed to come out on top. The US already has to put up with people clamoring for total transparency in its intelligence and counter-intelligence agencies. These are the same m

      • Re: (Score:2, Insightful)

        by AmiMoJo ( 196126 )

        It's not so much that the US develops those weapons as that the US gets involved in a lot of other countries. Reducing that would be a good start, but unfortunately it looks unlikely under the current administration.

        Advanced weapons don't make a huge difference really. The US still has enough nukes to maintain MAD. No missile shield is reliable enough to defend against that arsenal, and the same goes for current Russian ICBMs. All this stuff about hypersonic nuclear cruise missiles and torpedo drones is largel

        • by Whibla ( 210729 )

          Advanced weapons don't make a huge difference really.

          Yet! At some point they will, but that day lies somewhere in our unforeseeable 'sci-fi future'.

          The US still has enough nukes to maintain MAD. No missile shield is reliable enough to defend against that arsenal, and the same goes for current Russian ICBMs. All this stuff about hypersonic nuclear cruise missiles and torpedo drones is largely posturing, adding nuclear warheads to technologies developed for other kinds of warfare.

          The US has roughly 4000 nukes, Russia has maybe a few hundred more, so in a pure numbers sense I'd agree with you, it's definitely MAD.

          It's not just about absolute numbers though, as, roughly a decade ago, the US started upgrading the fuses in its W76s (a set of 8 independently targeted 100-kilotonne warheads launched on a Trident missile from a submarine) to increase their accuracy. This improvement doesn't violate the te

        • Advanced weapons can be very useful in a non-nuclear war. Some of the stories I've heard about what US forces were capable of in the 1991 Gulf War were quite impressive. They're better now.

      • by Rakarra ( 112805 )

        The world is heading towards a melt down and I want the US to have the weapons needed to come out on top.

        Those two statements are self-contradicting. I always liked Carl Sagan's quote about the US and Soviet Union when we were worried about who had the bigger weapons:

        Imagine, a room, awash in gasoline. And there are two implacable enemies in that room. One of them has 9,000 matches. The other has 7,000 matches. Each of them is concerned about who’s ahead, who’s stronger. Well, that's the kind of situation we are actually in. The amount of weapons that are available to the United States and the Sovi

    • by Anonymous Coward

      You can't effectively limit government with laws, policies, or agreements over the long term. All of western legal history, not to mention the total disfiguration of the U.S. Constitution over centuries, should make that clear. If you build destructive technology for the government, someone or some group in the government will be working overtime to make sure it gets used in the worst possible way.

      Don't help the government do anything, ever. Engage your local communities. Help your neighbors. Start a busine

  • The entire list boils down to "if we can't do it perfectly, we should just let other countries do it".

    What an insanely naive position.

    • by mentil ( 1748130 )

      Honestly I'm waiting for Tencent killbots. Bonus points if they're actually remotely piloted by Chinese gamers a la 'Toys'.

    • It may be naive, but do you think "we should spearhead research, even if it's prone to corruption" is a better idea? They both suck, really, and the latter is morally dubious ("I can shoot first") or equally naive ("It's going to be used for good/defense only against Evil"). And being part of the research-spearheading country makes one reap both the pros and cons of the tech.
      • They both suck, really, and the latter is morally dubious

        By that estimation pretty much all military (and much non-military) R&D is morally dubious. All kinds of technology and knowledge has the potential to be abused.

        Morality is all well and good, but pragmatism matters too. I'd rather be "morally dubious" and alive than morally virtuous and dead.

  • 1. no
    2. no
    3. no
    4. no

    Just look at current international agreements and how often they are ignored.

    • by Dutch Gun ( 899105 ) on Friday April 06, 2018 @12:52AM (#56390987)

      Can these questions be answered in the affirmative for any advanced weapons system? Seems sort of an impossibly high bar they've set.

      • Correct, unilateral disarmament is the goal, because, if we stopped being such a threat, everyone would love us.

      • Can these questions be answered in the affirmative for any advanced weapons system?

        Yes. Mutual restrictions on weapons work when the weapons are big, or require lots of infrastructure, and are easy to monitor.

        There have been two reasonably successful examples:

        1. Nukes.
        2. Battleships

        Battleships were restricted in the Washington Naval Treaty [wikipedia.org]. There was some cheating, but it mostly worked pretty well. But not well enough to prevent WW2.

        • by mentil ( 1748130 )

          AI projects can be physically small, have a computing cluster no more conspicuous than a typical cryptocoin mining or HPC operation, and can be carried out inside a mountain that's off-grid and air-gapped. Multiple legal authorities haven't managed to keep the Pirate Bay down for more than a year or so, what chance is there for a completely offline project?

          Once the software is written it can be copied to thousands of different secure locations. Sure, eventually they'd need to test it on actual robots, but i

        • by mwvdlee ( 775178 )

          Nukes still exist, and recently Trump has been making it very clear that they are still easily within reach should some crazy nutjob get (re-)elected.
          He may not have acted on his threats yet, but it's clear that he could, and he is stupid enough that he might actually do so.
          So... no... mutual restrictions even on these kinds of weapons have not worked.

        • Battleships were restricted in the Washington Naval Treaty. There was some cheating, but it mostly worked pretty well. But not well enough to prevent WW2.

          It should also be noted that the Treaty limits on BB's resulted in the rise of the CV as the capital ship....

          IOW, all that really happened as a result of the Washington Naval Treaties is that WW2 was fought with different weapons/tactics than WW1....

      • by TimSSG ( 1068536 )

        Can these questions be answered in the affirmative for any advanced weapons system? Seems sort of an impossibly high bar they've set.

        I think using real and accurate AI to sound a missile warning system would be a good thing in most people's view. Tim S.

  • Answers to the questions:

    1. No. Nothing international is binding, nor can it ever be.
    2. Maybe, but it wouldn't change anything, so it's not worth the expense. Let some universities throw donor dollars at the question, argue it philosophically, and encourage their snowflakes to skip class and protest it while it gets implemented anyway.
    3. Not in a free market.
    4. That's not how our government works when it comes to secret programs. And if you want ethics, the folks who leaked this info should go to jail
  • by Anonymous Coward

    Why is the EFF involved here?

    Which of my online freedoms is being infringed???

  • by Anonymous Coward on Friday April 06, 2018 @01:04AM (#56391007)

    When is it acceptable to help the military? There are a lot of applications that could be used for surveillance and non-offensive purposes, but could also be used to attack or kill people. As a civilian researcher developing technology with military funding, it's not clear how the work will eventually be used.

    I was involved with a project that was funded by a US military office. To remain anonymous, I won't say exactly where my funding came from or what project I was working on, but I've seen calls to fund this research from the Air Force Office of Scientific Research and also from the Army Research Laboratory. Atmospheric wind shear can be exploited by aircraft to conserve power through dynamic soaring. During the day, when the surface is being heated by solar radiation, the aircraft can fly in thermals and other areas of ascent in the planetary boundary layer, usually in the lowest 1-2 km, and exploit static soaring. Autonomous systems such as drones can use this information in planning their flight paths and conserve power, which allows them to stay in flight longer and extend the missions they can carry out.
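    The idea in the paragraph above (loiter in rising air so the net sink rate drops and the battery draw falls) can be sketched as a toy energy model. This is purely a hypothetical illustration: the function names and the numbers below are assumptions made up for the example, not anything from the actual funded project.

```python
# Toy model of static soaring for endurance. The updraft strength and
# sink rate figures below are illustrative assumptions only.

def net_climb_rate(updraft_ms: float, sink_rate_ms: float) -> float:
    """Net vertical speed (m/s) while circling in rising air:
    the thermal's updraft minus the airframe's own sink rate."""
    return updraft_ms - sink_rate_ms

def altitude_gained(updraft_ms: float, sink_rate_ms: float,
                    loiter_s: float) -> float:
    """Altitude change (m) after loitering in a thermal for loiter_s
    seconds. A positive value is 'free' altitude the drone can later
    trade for glide range instead of spending battery on a powered
    climb."""
    return net_climb_rate(updraft_ms, sink_rate_ms) * loiter_s

# Example: a 2.0 m/s thermal against a 0.7 m/s sink rate, loitering for
# 5 minutes, banks roughly 390 m of altitude with no propulsion power.
gain = altitude_gained(2.0, 0.7, 300.0)
```

    The same comparison (expected altitude or energy gained per candidate leg) is what a path planner would score when deciding whether a detour through rising air pays for itself.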

    Although there are civilian uses for this technology, my work was funded by a grant from the US military. I had no role in designing the project or soliciting funding, but I was employed with funds from the grant. There are non-violent uses for this technology, even in military applications. But they can also be used to attack people.

    Drones could be used to deliver supplies, including food or medical supplies. Drones could be used to locate people in search-and-rescue missions. Drones could be used by the Coast Guard to patrol for smugglers bringing contraband and drugs into the US. Drones could also be used to patrol the southern border of the US and would probably be quite a bit more useful than a wall. They could be used to gather surveillance of enemy combatants who may pose a risk to both US troops and civilians, and to allow people time to evacuate or find shelter. None of these are violent, and many of these applications are not controversial at all. However, drones can also carry weapons and be used to attack and kill people.

    As a researcher, I have no control over how my research is used by the military. I can use the results in other projects for civilian use to benefit people. A meteorologist might use drones to collect data around severe thunderstorms to improve weather forecasting and provide better warnings to people. This technology could be used to extend the flight of those drones and help gather data that can save lives. However, the research is funded by the military, and the military could use it to kill people.

    Is it wrong to accept the funding and conduct research that can benefit civilians but can also be used for harm? Most technology can be used for non-violent purposes that are overwhelmingly beneficial to people. Even nuclear weapons could be used to benefit humanity if, for example, they were used to destroy a large near-Earth asteroid that might collide with Earth. As a researcher, I have no control over how the military would use the results of my work. But that work could be used for both beneficial and harmful purposes. Is it wrong to accept that funding and do research for the military? When is it acceptable to do research with military funding and when is it not? Where do you draw the line?

    • Re: (Score:3, Funny)

      by Anonymous Coward

      >Where do you draw the line?

      Obviously the line should have been drawn in the 1960s, when the precursor network to the Internet, ARPANET, was funded by the military's ARPA by diverting a million dollars from a ballistic missile defense program. If only back then the military hadn't paid universities and companies to develop the technologies commonly used for the internet today, cyberwar, cyberterrorism, and cybercrime wouldn't be things now.

    • by raymorris ( 2726007 ) on Friday April 06, 2018 @02:24AM (#56391143) Journal

      You mentioned a lot of non-violent uses of technology that has been funded by the military, and military resources being used to deliver food, medical supplies, and other relief. That's all true and good. Versus violent uses, you say, which are bad.

      ALSO there are countries who want to wipe us out. There are countries with the ability to kill millions of Americans. What has happened before will happen again - there will be a country who *wants* to attack us and *can*. The US response to the Japanese surprise attack at Pearl Harbor was very much violent - as it needed to be. They were bombing us - by surprise, pretending to negotiate trade agreements with us while their ships were underway to attack us. Swift and violent action to protect ourselves was the right action, and the only option.

      I most certainly don't agree with every use of the US military. I AM very glad for its primary use - being a massive deterrent to anyone who might think about attacking us. You may think "no military would ever attack the United States". That's true, at the moment. But why? Why wouldn't North Korea, or Iran, Russia, or China*, send bombers to the US? Because we would crush them, that's why. The REASON we don't have to fight off an attack today is precisely because of our military capability.

      That's the main use of a superpower military - making an attack on us inconceivable by simply having the *capability* to win decisively and quickly if we were attacked. That's a good thing. I don't want our country to be defenseless, a tempting target. Our capacity for overwhelming violence is a large part of why other countries don't initiate violence against us or our friends.

      * The situation with China specifically is a bit more complex at the moment. Trade is important to them, and they have some significant military power. They have also noticed that they can attack us via cyber warfare and we don't treat it as an attack, we let them get away with that.

      • by Anonymous Coward

        You mentioned a lot of non-violent uses of technology that has been funded by the military, and military resources being used to deliver food, medical supplies, and other relief. That's all true and good. Versus violent uses, you say, which are bad.

        ALSO there are countries who want to wipe us out. There are countries with the ability to kill millions of Americans. What has happened before will happen again - there will be a country who *wants* to attack us and *can*. The US response to Japanese surprise attack at Pearl Harbor was very much violent - as it needed to be. They were bombing us - by surprise, pretending to negotiate trade agreements with us while their ships were underway to attack us. Swift and violent action to protect ourselves was the right action, and the only option.

        I most certainly don't agree with every use of the US military. I AM very glad for its primary use - being a massive deterrent to anyone who might think about attacking us. You may think "no military would ever attack the United States". That's true, at the moment. But why? Why wouldn't North Korea, or Iran, Russia, or China*, send bombers to the US? Because we would crush them, that's why. The REASON we don't have to fight off an attack today is precisely because of our military capability.

        That's the main use of a superpower military - making an attack on us inconceivable by simply having the *capability* to win decisively and quickly if we were attacked. That's a good thing. I don't want our country to be defenseless, a tempting target. Our capacity for overwhelming violence is a large part of why other countries don't initiate violence against us or our friends.

        * The situation with China specifically is a bit more complex at the moment. Trade is important to them, and they have some significant military power. They have also noticed that they can attack us via cyber warfare and we don't treat it as an attack, we let them get away with that.

        The US ability to wage war is a contradiction in and of itself. The US _IS_ engaged in conflicts all over the world. You're confusing things a bit though. It's called the arms race, and there absolutely are rules, research or not. One core component you're overlooking is the separation of military and civilian. There are a _lot_ of companies that exist purely to service the military (Boston Dynamics, for example). And yes, export controls are a thing. Fact that Google, Facebook or any other company would

        • by Rakarra ( 112805 )

          Disclaimer - I absolutely do not agree with cyberwar being in any way equivalent to traditional war. No one dies in a DDoS. I do however agree with controls on technology, as I do with the use of military force, domestically.

          Well, there's cyberwar, and there's espionage. Most of what has been done has been espionage, usually for economic gain, not military gain. But a cyberwar would be, say, electronically attacking infrastructure, such that it would be more difficult to coordinate and attack in a real war, and that WOULD result in casualties that can be linked to the cyberwar.

    • Most technology can be used for non-violent purposes that are overwhelmingly beneficial to people.

      In addition, violence itself is merely another tool, one which can be put to good purpose. Military forces are important tools of public policy. They can be used to end horrific suffering and they can be used to maintain peace, by explicitly threatening anyone who would break the peace with violent consequences.

      The underlying assumption of your post seems to be that military capability is an unalloyed evil. I'll grant that in an ideal world it would be completely unnecessary, but that is not the world in which we live. If we're concerned about misuse of military power, it seems to me that the armed forces already have more than enough capability to have us shaking in our boots, and it's not clear to me that adding AI to the mix (assuming the AI doesn't get out of control) significantly changes anything.

      To make military forces "safe", we need to (a) ensure that they remain subject to civilian control and (b) ensure that civilian control acts responsibly. I'll grant that we seriously undermined (b) in the 2016 election, but that's a repairable problem.

  • We call them soldiers and policemen.

  • by Anonymous Coward

    And exactly WHO thinks China isn't working on this crap every minute of the day to undermine our Republic?

    • by Anonymous Coward

      China doesn't need to do anything to undermine the US. They are watching a nation where one half hates the other half more than they hate any external threat. They look at the self-declared exemplar of democracy and a free press, and think, "not for us." With 1.5 billion people, they literally just need to keep manufacturing, trading, and not getting involved in multi-trillion dollar foreign adventures. The inevitable outcome is just an exercise in mathematics.

  • by Darkling-MHCN ( 222524 ) on Friday April 06, 2018 @02:26AM (#56391145)

    The internet itself is based on intellectual property paid for and developed by the US military. GPS, which is at the core of many computer applications we all love and use, runs on a system originally developed for the US military... there are hundreds more examples.

    Technology is just that. It can be used for multiple purposes, and very often it ends up being used in ways completely different from the original intent, meaning that technology intended for military use can end up becoming something like the world wide web, and technology not intended for military use can end up being used to take lives, e.g. chlorine gas is used widely within industry for thousands of purposes... other than gassing people in Idlib.

  • To completely render all electronics on the planet useless... one big solar flare from the sun will do that... set us back 100 years. Take this route and that's where it ends...
  • At least UBER is not doing the research on target acquisition.

  • 4. Can there be transparent, accountable oversight from an independently constituted ethics board or similar entity with both the power to veto aspects of the program and the power to bring public transparency to issues where necessary or appropriate?

    What in the world are they smoking (and can I get some, it must be goooood shiiiit)? In what reality do they believe that the design of military systems is subject to veto from a non-democratically-accountable entity? From where does this board derive any mandate to be making public policy?

    I'm not against the goal here of having some ethics review. But there's a large gap between 'there should be an ethics board' and 'some dudes in Silicon Valley self-appointed themselves to veto the decisions of our elect

  • by Slicker ( 102588 ) on Friday April 06, 2018 @09:13AM (#56392017)

    AI-driven defensive and offensive weapons systems are crucial to the survival of any power in the future world. We need to redouble efforts to make them more efficient. It's as simple as this: if we don't, they will, and we will be lost in what little time is left for mankind to be written into any history books.

    That said, we could focus hard on solving the problems of differentiating between legitimate and illegitimate targets. We could focus on systems that save lives and win the hearts and minds of local populations. The only way an enemy is truly defeated is if you either kill them all (which is possible) or win over their hearts and minds (which is harder).

    Above all, it would be extremely beneficial to focus on non-lethal weapons systems. For example, small drones with tranquilizer darts, or slime bombs that make an area so slick that enemy troops cannot traverse it, enabling a battle won by maneuver. Catch enemy soldiers with nets... Whatever it is, war technologies require extreme innovation and creativity, be they lethal or not. The non-lethal approaches add the advantages of:

    1. Capturing provides people to interrogate, leading to information that is key to more wins.
    2. Non-harm is far more effective at winning the hearts and minds of an enemy.
    3. Non-harm is far better for Public Relations.
    4. Non-harm is morally superior, when and where it is reasonably possible.

  • The opinion of the EFF on this topic is not a surprise. But presumably other interest groups have also stated their opinions? It would be interesting to get a wider range of views on this.
  • The US government is going to find someone to help them with AI object recognition and target assistance. If it's not a new tech titan, it will be an established defense contractor. A better implementation is safer for everyone. Both accuracy and speed of response are important in weapons systems.

    As the AI becomes less effective, the risk of bad outcomes increases: collateral damage, misidentified innocents, and missed opportunities on real targets.

    While I firmly believe that automating kill authority is ve

  • The military still hasn't solved the problem of handing weapons to people like William Calley Jr. Robots are a ways down on my list of concerns.

  • EFF used to be about protecting technological freedom. Now they're worried that users of technology have too much freedom. This means that at some point in the past, EFF won! (Slashdot, why didn't you report on this earlier?)

  • Look, this is the future of warfare. Drag your heels on that one as much as you like and find yourself in the same position as the old fleet admirals who felt big battleships were the way to go.

    The airplane and the carrier killed the battleship. It's done. It's an inferior weapons platform. If you had a choice going into war between having a bunch of battleships or a bunch of carriers with planes, trained pilots, etc., you're going for the carriers or you're going to lose horribly.

    Same deal with the AI systems. If
