
Congress Bans Staff Use of Microsoft's AI Copilot (axios.com) 32

The U.S. House has set a strict ban on congressional staffers' use of Microsoft Copilot, the company's AI-based chatbot, Axios reported Friday. From the report: The House last June restricted staffers' use of ChatGPT, allowing limited use of the paid subscription version while banning the free version. The House's Chief Administrative Officer Catherine Szpindor, in guidance to congressional offices obtained by Axios, said Microsoft Copilot is "unauthorized for House use."

"The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," it said. The guidance added that Copilot "will be removed from and blocked on all House Windows devices."


Comments Filter:
  • by Deimos24601 ( 904979 ) on Friday March 29, 2024 @09:07PM (#64355194)
    The real problem is that it only lies SOME of the time.
  • That didn't work for Samsung's Android value-add.

    Microsoft are now mandating that everyone go out and buy a new keyboard for an intrusive feature their government has already outlawed.

    • They'll pry my IBM Model M from my cold, dead hands...

    • Don't be so sure these exclusion rules will last forever. The leaking is an issue for all corporations, not just governments, and it's an issue for all the large AI tool vendors. Sooner or later there will be an industry-wide security standard that keeps all AI interactions on approved private servers. That will allow Microsoft to offer a specially configured version of Windows, which governments and corporations will want to use and pay for.
    • As long as that key can be remapped, I'm actually in favor. I could use a new key for some games I play.

      If it's just yet another key that I have to buy a more expensive keyboard for so I can at least switch it off in the keyboard firmware, I pass.

  • The ones that need it the most.

    • by caseih ( 160668 )

      Pretty sure politicians of a certain party have been using it for a while, based on the ridiculous things their legislators tend to say these days. They even react like ChatGPT when the absurdity of some of their statements is pointed out to them, doubling down on their ridiculousness.

      • Well, of course. Do as I say, not as I do. You know, the Christian way. Or the Jewish way. Or the (insert religion here) way.

      • Pretty sure politicians of a certain party have been using it for a while, based on the ridiculous things their legislators tend to say these days. They even react like ChatGPT when the absurdity of some of their statements is pointed out to them, doubling down on their ridiculousness.

        You mean the politicians who think that putting on a dress magically turns you into a girl? I know, right??

  • Windows is a threat (Score:5, Interesting)

    by RitchCraft ( 6454710 ) on Friday March 29, 2024 @11:42PM (#64355410)

    "The Microsoft Copilot application has been deemed by the Office of Cybersecurity to be a risk to users due to the threat of leaking House data to non-House approved cloud services," - Hell, that describes Windows perfectly as well. Just replace "Microsoft Copilot" with "Microsoft Windows".

  • The free version leaks from cloud to cloud, but limited use of the "paid" version is fine? We all know what a bang-up job Microsoft does with security, especially with cloud computing...

    https://arstechnica.com/securi... [arstechnica.com]

    Great idea for Congressional policy, trusting the paid version... paying for the same shitty security. The enshittification of Congress is now following Microsoft's example.

    JoshK.

    • The free version leaks from cloud to cloud, but limited use of the "paid" version is fine? We all know what a bang-up job Microsoft does with security, especially with cloud computing...

      The problem with leaking information has nothing to do with traditional security issues. The problem is that any query can become data that modifies or trains the model.

      All sane companies already have policies about what can be included in an external gen AI query. For example, generic code questions are fine, but copying any portion of any proprietary code is forbidden.
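      Such a policy can be enforced mechanically before anything is sent. Here is a minimal sketch of a client-side pre-flight check; the marker patterns, function name, and the idea of scanning prompts locally are illustrative assumptions, not any vendor's actual API:

```python
import re

# Hypothetical markers a company might treat as "do not send externally".
# Real deployments would use their own classifiers or watermark strings.
BLOCKED_PATTERNS = [
    re.compile(r"(?i)\bconfidential\b"),
    re.compile(r"(?i)\bproprietary\b"),
]

def prompt_allowed(prompt: str) -> bool:
    """Return True only if the prompt contains none of the blocked markers."""
    return not any(p.search(prompt) for p in BLOCKED_PATTERNS)

# A generic coding question passes; pasted code carrying an internal
# notice does not.
assert prompt_allowed("How do I reverse a linked list in C?")
assert not prompt_allowed("Fix this: /* CONFIDENTIAL - internal */ int f(){}")
```

      The point is that the filter runs before the query leaves the machine; once the text reaches the vendor, the policy can no longer help.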

      • And you think AI can tell the difference? This is a landmine waiting to explode as soon as an AI uses OSS code to answer a question for someone who then wants to use that code in his closed-source project.

      • The problem with leaking information has nothing to do with traditional security issues. The problem is that any query can become data that modifies or trains the model.

        All sane companies already have policies about what can be included in an external gen AI query. For example, generic code questions are fine, but copying any portion of any proprietary code is forbidden.

        Right. And to be clear, the danger scenario is that someone enters sensitive data in a prompt and uses the resulting output, the sensitive data becomes part of the model fine-tuning, and a different user then gets that same sensitive data in another result. "Sensitive" doesn't have to mean classified data. It could be something embarrassing, like a strategy email dissing an opponent.

    • by hjf ( 703092 )

      The paid version of ChatGPT leaks like crazy. "Fine tuning" jobs straight up steal your training data.

      I was recently trying it out as a chatbot for a call-center company's client. I was training a model, but only trained it with the prompt:

      "You're a support operator. The client will ask you general customer support questions and you'll respond according to your training". I gave it the minimum number of examples it requires (10, I think) with questions such as "what's my account balance" and answers like "I'm u

  • I was trying to come up with an objection, but in the end I couldn't find one. At last, something that makes sense, I guess. /s

  • by Tom ( 822 )

    For similar reasons (I work on projects with serious security demands) I've gone down the rabbit hole to get local LLMs working and I'm pretty happy now, but it was quite a journey.

    We now have stuff like Ollama and LM Studio that can run models locally, open models that have sufficiently large rolling windows, and things like privateGPT as a glue to feed in your own documents. Or Anything LLM if you want an all-in-one solution (though in my tests it didn't quite work as well).
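    As a minimal sketch of what "keeping it local" looks like in practice: Ollama exposes an HTTP API on localhost (port 11434 by default), so the prompt and the answer never leave the machine. The model name "llama3" is just an example here; any model you have already pulled works:

```python
import json
import urllib.request

def build_request(model: str, prompt: str) -> dict:
    # Shape of the payload Ollama's /api/generate endpoint expects;
    # stream=False asks for one complete JSON response.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server and return its reply."""
    payload = json.dumps(build_request(model, prompt)).encode()
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama instance with the model pulled):
# print(ask_local("llama3", "Summarize this internal memo: ..."))
```

    The same pattern is what tools like privateGPT build on: the glue code differs, but the traffic stays on your own hardware.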

    We're getting there. In a few y

    • Local LLMs can cover only some use cases. Many use cases, such as those that might be used in the crafting of laws, or doing research on current events or data, wouldn't work well with a local LLM. The problem is that in these cases, the source data is not local, even if the LLM is.

      • by Tom ( 822 )

        Yes, there are some use cases where you want the LLM to essentially be a search engine on steroids. In that case, you need one that's online and vacuums up the Internet every so often. Essentially Google 2.

        But for a lot of use cases a model that is occasionally updated will do just fine.

  • At first glance this appears obvious: you don't want any amateur stuff in coding for cybersecurity. Most of the net consists of this, and LLMs are for the most part trained on the net. But a good programmer can do quite a bit by refreshing their memory via LLMs, then actually working on whatever problem they have to solve.

    The major hurdle, at least as far as I see, is that institutions have to make blanket statements to prevent people (those few? hope so) from getting lazy, especially under time co
