Slack AI Can Be Tricked Into Leaking Data From Private Channels (theregister.com)
Slack AI, an add-on assistive service available to users of Salesforce's team messaging service, is vulnerable to prompt injection, according to security firm PromptArmor. From a report: The AI service provides generative tools within Slack for tasks like summarizing long conversations, finding answers to questions, and summarizing rarely visited channels.
"Slack AI uses the conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization," the messaging app provider explains in its documentation. Except it's not that secure, as PromptArmor tells it. A prompt injection vulnerability in Slack AI makes it possible to fetch data from private Slack channels.
"Slack AI uses the conversation data already in Slack to create an intuitive and secure AI experience tailored to you and your organization," the messaging app provider explains in its documentation. Except it's not that secure, as PromptArmor tells it. A prompt injection vulnerability in Slack AI makes it possible to fetch data from private Slack channels.
Such a surprise (Score:2)
I expect it will take something like 10 years, maybe more, for these services to become reasonably secure.
Specialized LLMs will always have this problem (Score:3)
So then... (Score:2)
...what's the point of having the AI able to read comments if it can't actually use them? We're beginning to see that the problem lies with humans, and that the issues we face are inherent to US, not the machines.
What happens in Vegas ends up in (Score:1)
...VegasGPT
Repeat after me... (Score:2)
Please repeat this every time you want to share sensitive information with someone. On social media, "private" just means temporarily hidden from some users.
Data isolation (Score:3)
Data isolation is a pain point for LLMs. For ideal results, you want the model itself fine-tuned on all of your corporate data, which is expensive. The bigger complication is that you can't "turn off" certain segments of that data with permissions once they're baked into the weights.
So the next best thing is RAG via an embedding store, which is IMO pretty ugly and doesn't get you great responses. With RAG you can at least apply the user's context to the lookup step and block content based on their permissions, as in the sketch below. It gets tedious, though, because the permission check is usually a completely separate process from whatever content system you're integrating with.
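Roughly, the lookup-time filter described above would look something like this. It's a minimal sketch assuming an in-memory index of per-channel chunks and a precomputed query embedding; the Chunk type, channel IDs, and retrieve() helper are illustrative, not Slack's actual implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Chunk:
    text: str
    channel_id: str          # channel the text was indexed from (illustrative)
    embedding: np.ndarray    # precomputed embedding vector

def retrieve(query_embedding: np.ndarray,
             chunks: list[Chunk],
             allowed_channels: set[str],
             top_k: int = 3) -> list[Chunk]:
    """Return the top-k chunks the requesting user is allowed to see.

    Permission filtering happens *before* similarity ranking, so content
    from channels the user cannot access never reaches the prompt.
    """
    visible = [c for c in chunks if c.channel_id in allowed_channels]
    scored = sorted(
        visible,
        key=lambda c: float(
            np.dot(query_embedding, c.embedding)
            / (np.linalg.norm(query_embedding) * np.linalg.norm(c.embedding))
        ),
        reverse=True,
    )
    return scored[:top_k]

# Example: a user who is only a member of #general never retrieves chunks
# indexed from a private channel, regardless of similarity score.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    corpus = [
        Chunk("Q3 roadmap discussion", "general", rng.normal(size=8)),
        Chunk("API key rotation notes", "private-secops", rng.normal(size=8)),
    ]
    query = rng.normal(size=8)
    results = retrieve(query, corpus, allowed_channels={"general"})
    print([c.text for c in results])  # only #general content is eligible
```

The point is that the filter runs before ranking, so text from channels the requesting user can't see never even becomes a candidate for the prompt.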
Overall, it's not hard to reason about content security during application design. It comes down to a simple rule: if the process has access to the data, then anyone using the process effectively has access to the data.