Slack AI Can Be Tricked Into Leaking Data From Private Channels (theregister.com)

Slack AI, an add-on assistive service available to users of Salesforce's team messaging service, is vulnerable to prompt injection, according to security firm PromptArmor. From a report: The AI service provides generative tools within Slack for tasks like summarizing long conversations, finding answers to questions, and summarizing rarely visited channels.

"Slack AI uses the conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization," the messaging app provider explains in its documentation. Except it's not that secure, as PromptArmor tells it. A prompt injection vulnerability in Slack AI makes it possible to fetch data from private Slack channels.

Comments Filter:
  • I expect it will take something like 10 years, maybe more, for these services to become reasonably secure.

  • by Ed Tice ( 3732157 ) on Wednesday August 21, 2024 @11:36AM (#64723936)
    Within any company, there are always one or two experts in certain subject areas who are the only ones who can field certain questions. If you train an LLM on this data and then ask it a similar question, the answer is always a near-verbatim reiteration of the training data, simply because there aren't many answers to draw upon. And that is either an actual data leak or something that feels like one.
  • ...what's the point of having the AI able to read comments if it can't use said comments? We're beginning to see that the problem is with humans, and that the issues we face are inherent to us, not to the machines.

  • ... "Nothing on social media is private."

    Please repeat this every time you want to share sensitive information with someone. On social media, "private" just means temporarily hidden from some users.
  • by mukundajohnson ( 10427278 ) on Wednesday August 21, 2024 @12:50PM (#64724222)

    It's a pain point for LLMs. For ideal results, you want the model itself fine-tuned on all of your corporate data, which is expensive. The major complication is that you can't "turn off" certain segments of data with permissions.

    So, the next best thing is RAG via an embedding library, which is IMO pretty ugly and doesn't get you great responses. With RAG you can apply user context to the lookup process and block content based on user permissions (sketched after this thread). It gets tedious, since the permission checks usually live in a completely separate system from the content you're integrating.

    Overall, it's not very hard to guarantee content security during application design. It's a simple rule: if the process has access to the data, then anyone using the process has access to the data.

    • Why can't you associate a provenance with each piece of data and carry it along through all intermediate results to "poison" them and eliminate poisoned results?
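A minimal sketch of the permission-filtered lookup described in the thread above, which also carries provenance through intermediate results along the lines the reply suggests. All names, data structures, and the toy ranking are hypothetical, not Slack's or any vendor's API: each chunk records its source channel and allowed readers, retrieval filters on the requesting user, and any derived result whose provenance includes a channel the user cannot read is dropped.

```python
from __future__ import annotations
from dataclasses import dataclass, field

# Hypothetical data model: every chunk remembers where it came from and who may read it.
@dataclass(frozen=True)
class Chunk:
    text: str
    channel: str                      # provenance: originating channel
    allowed_users: frozenset[str]     # who may read that channel

@dataclass
class Derived:
    text: str
    provenance: set[str] = field(default_factory=set)  # channels the result was built from

def retrieve(corpus: list[Chunk], user: str, query: str) -> list[Chunk]:
    """Permission-aware lookup: filter on the requesting user *before* ranking."""
    visible = [c for c in corpus if user in c.allowed_users]
    terms = set(query.lower().split())  # toy ranking: keep chunks sharing a word with the query
    return [c for c in visible if terms & set(c.text.lower().split())]

def summarize(chunks: list[Chunk]) -> Derived:
    """Any intermediate result carries the union of its inputs' provenance."""
    return Derived(
        text=" / ".join(c.text for c in chunks),
        provenance={c.channel for c in chunks},
    )

def release(result: Derived, user: str, acl: dict[str, frozenset[str]]) -> str | None:
    """Final check: drop ('poison') results derived from any channel the user can't read."""
    if all(user in acl[ch] for ch in result.provenance):
        return result.text
    return None

acl = {"#general": frozenset({"alice", "bob"}), "#private": frozenset({"alice"})}
corpus = [
    Chunk("launch date moved to Friday", "#general", acl["#general"]),
    Chunk("prod API key is in the vault", "#private", acl["#private"]),
]

hits = retrieve(corpus, user="bob", query="API key launch date")
print(release(summarize(hits), "bob", acl))   # only #general content, never #private
```

The release() check is the "poisoning" idea from the reply: instead of trusting the lookup filter alone, every result keeps the set of channels it was derived from, and anything touching a channel the requesting user can't read is discarded rather than returned.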
