Slack trains machine learning models on user messages, files, and other content without explicit permission. Training is opt-out, meaning your private data is included by default. Worse, you can't opt out yourself: you have to ask your organization's Slack administrator (HR, IT, etc.) to email the company and request it. Welcome to the dark side of the new gold rush for artificial intelligence training data.
Corey Quinn, a senior director at DuckBill Group, discovered the policy in a blurb from Slack's Privacy Principles and posted it on X (via PCMag). The section reads as follows (emphasis ours): "To develop AI/ML models, our systems analyze Customer Data (e.g. messages, content, and files) submitted to Slack as well as Other Information (including usage information) as defined in our Privacy Policy and in your customer agreement."
The opt-out process puts all the work on you to protect your data. According to the privacy notice, "To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line 'Slack Global model opt-out request.' We will process your request and respond once the opt out has been completed."
Sorry, Slack, what the hell are you doing with user DMs, messages, files, etc.? I'm positive I'm not reading this correctly. pic.twitter.com/6ORZNS2RxC
— Corey Quinn (@QuinnyPig) May 16, 2024
The company responded to Quinn's post, clarifying that customers can exclude their data from helping train its (non-generative) machine learning models.
It's unclear how long the Salesforce-owned company has had this tidbit in its terms. Saying customers can opt out is misleading at best when "customers" doesn't include the employees working within an organization. They have to ask whoever handles Slack access at their business to do it for them, and here's hoping they oblige.
Inconsistencies in Slack's privacy policies add to the confusion. One section states, "Slack does not have access to the underlying content when developing AI/ML models or otherwise analyzing Customer Data. We have various technical measures preventing this from occurring." However, the machine learning model training policy appears to contradict this statement, leaving plenty of room for confusion.
Furthermore, Slack's web page marketing its premium generative AI tools reads, "Work without worry. Your data is your data. We don't use it to train Slack AI. Everything runs on Slack's secure infrastructure, meeting the same compliance standards as Slack itself."
In this case, the company is referring to its premium generative AI tools, which are separate from the machine learning models it trains without explicit permission. However, as PCMag points out, implying that none of your data is used for AI training is, at best, highly misleading, since the company apparently gets to pick and choose which AI models that statement covers.
Engadget tried contacting Slack through multiple channels, but had not received a response as of press time. We will update this story if we hear back.