On the heels of ongoing issues around how big tech is appropriating data from individuals and businesses in the training of AI services, a storm is brewing among Slack users upset over how the Salesforce-owned chat platform is charging ahead with its AI vision.
The company, like many others, is tapping its own user data to train some of its new AI services. But, it turns out that if you don’t want Slack to use your data, you have to email the company to opt out.
And the terms of that engagement are tucked away in what appears to be an out-of-date, confusing privacy policy that no one was paying attention to. That is, until a miffed user posted about the terms on a community site hugely popular with developers, and the post went viral.
It all kicked off last night, when a note on Hacker News raised the issue of how Slack trains its AI services, by way of a straight link to its privacy principles; no additional comment was needed. That post kicked off a longer conversation, and what seemed like news to current Slack users: that Slack opts users in to its AI training by default, and that you need to email a specific address to opt out.
That Hacker News thread then spurred multiple conversations and questions on other platforms: There is a newish, generically named product called “Slack AI” that lets users search for answers and summarize conversation threads, among other things. So why is it not mentioned by name even once on that privacy principles page, even to make clear whether the privacy policy applies to it? And why does Slack reference both “global models” and “AI models”?
Between people being confused about where Slack is applying its AI privacy principles, and people being surprised and annoyed at the prospect of having to email to opt out (at a company that makes a big deal of touting that “you control your data”), Slack does not come off well.
The shock might be new, but the terms are not. According to pages on the Internet Archive, the terms have been applicable since at least September 2023. (We have asked the company to confirm.)
Per the privacy policy, Slack is using customer data specifically to train “global models,” which Slack uses to power channel and emoji recommendations and search results. Slack tells us that its usage of the data has specific limits.
“Slack has platform-level machine learning models for things like channel and emoji recommendations and search results. We do not build or train these models in such a way that they could learn, memorize, or be able to reproduce some part of customer data,” a company spokesperson told TechCrunch. However, the policy does not appear to address the overall scope and the company’s wider plans for training AI models.
In its terms, Slack says that customers who opt out of data training will still benefit from the company’s “globally trained AI/ML models.” But in that case, it’s not clear why the company is using customer data in the first place to power features like emoji recommendations.
The company also said it doesn’t use customer data to train Slack AI.
“Slack AI is a separately purchased add-on that uses Large Language Models (LLMs) but does not train those LLMs on customer data. Slack AI uses LLMs hosted directly within Slack’s AWS infrastructure, so that customer data remains in-house and is not shared with any LLM provider. This ensures that customer data stays in that organization’s control and exclusively for that organization’s use,” a spokesperson said.
Some of the confusion is likely to be addressed sooner rather than later. In a reply to one critical take on Threads from engineer and writer Gergely Orosz, Slack engineer Aaron Maurer conceded that the company needs to update the page to reflect “how these privacy principles play with Slack AI.”
Maurer added that these terms were written at the time when the company didn’t have Slack AI, and these rules reflect the company’s work around search and recommendations. It will be worth examining the terms for future updates, given the confusion around what Slack is currently doing with its AI.
The issues at Slack are a stark reminder that, in the fast-moving world of AI development, user privacy should not be an afterthought, and a company’s terms of service should clearly spell out how and when data is used, and when it is not.