AI monitoring employee comms for ‘thought crimes’ in Slack & more

Several large US companies are using AI monitoring systems to analyse employee communications in popular business apps such as Slack, Teams, and Zoom.

One vendor claims its AI models can analyse the content and sentiment of both text and images posted by employees, reports CNBC.

Some of these tools are being used in relatively innocuous ways – such as gauging aggregate employee reactions to new corporate policies.

“It won’t have names of people, to protect the privacy,” said Aware CEO Jeff Schumann. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”
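To make that concrete, here is a minimal sketch (in Python) of how cohort-level sentiment aggregation of the kind Schumann describes might work. Everything below – the message shape, the cohort fields, and the toy scoring function – is an illustrative assumption, not a description of Aware’s actual pipeline.

```python
# Minimal sketch of anonymised, cohort-level sentiment aggregation.
# The data shape and score_sentiment() are illustrative assumptions,
# not Aware's actual system (which uses trained ML models).
from collections import defaultdict
from statistics import mean

def score_sentiment(text: str) -> float:
    """Toy stand-in for an ML sentiment model; returns a score in [-1, 1]."""
    negative = {"cost", "unfair", "worse"}
    positive = {"great", "helpful", "better"}
    words = text.lower().split()
    raw = sum(w in positive for w in words) - sum(w in negative for w in words)
    return max(-1.0, min(1.0, raw / max(len(words), 1) * 5))

def cohort_sentiment(messages):
    """Average sentiment by (age_band, region) cohort. Employee names
    are never stored, so only group-level results can be reported."""
    buckets = defaultdict(list)
    for msg in messages:
        key = (msg["age_band"], msg["region"])  # no individual identifiers
        buckets[key].append(score_sentiment(msg["text"]))
    return {key: mean(scores) for key, scores in buckets.items()}

print(cohort_sentiment([
    {"age_band": "40+", "region": "US-West", "text": "the policy cost is unfair"},
    {"age_band": "under-40", "region": "US-East", "text": "the new policy looks great"},
]))
# {('40+', 'US-West'): -1.0, ('under-40', 'US-East'): 1.0}
```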

But other tools – including another offered by the same company – can flag the posts of specific individuals.

Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors.
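In contrast with the anonymised aggregates above, individual-level flagging attributes a hit to a named author. Here is a hedged sketch of what that might look like – the trigger-phrase “classifier” is a crude stand-in for Aware’s dozens of trained models, and the message format is assumed:

```python
# Hypothetical sketch of per-message policy flagging. Unlike the
# aggregation above, each alert names a specific author, which is
# what drives the surveillance concerns quoted below.

def classify(text: str) -> list[str]:
    """Toy stand-in: flags a category when a trigger phrase appears.
    A real system would run trained text and image models instead."""
    triggers = {"bullying": {"loser"}, "harassment": {"shut up"}}
    lowered = text.lower()
    return [cat for cat, phrases in triggers.items()
            if any(p in lowered for p in phrases)]

def flag_messages(messages):
    for msg in messages:
        hits = classify(msg["text"])
        if hits:
            yield {"author": msg["author"], "categories": hits}

print(list(flag_messages([
    {"author": "j.doe", "text": "You're such a loser, shut up"},
])))
# [{'author': 'j.doe', 'categories': ['bullying', 'harassment']}]
```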

Chevron, Delta, Starbucks, T-Mobile, and Walmart are just some of the companies said to be using these systems. Aware says it has analysed more than 20 billion interactions across more than three million employees.

While these services build on the non-AI monitoring tools companies have used for years, some worry the technology has now moved into Orwellian territory.

Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.

Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen” […]

Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.

“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.” 

An additional concern is that even aggregated data may be easily de-anonymized when reported at a granular level, “such as employee age, location, division, tenure or job function.”
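The arithmetic behind that concern is easy to demonstrate: once aggregates are sliced by several attributes at once, some groups shrink to a single person. A standard mitigation is a k-anonymity floor that suppresses any reporting cell below a minimum size. The field names and threshold in this sketch are illustrative assumptions:

```python
# Illustrative k-anonymity check: suppress any reporting cell whose
# group is smaller than K, since publishing a sentiment score for a
# group of one effectively names the employee. K and the attribute
# set are assumptions for illustration.
from collections import Counter

K = 5  # minimum group size before a cell may be reported

employees = [("40+", "US-West", "Finance", "10y+")] * 6 + [
    ("under-40", "US-East", "Engineering", "<2y"),  # a unique combination
]

for group, size in Counter(employees).items():
    if size < K:
        print(f"SUPPRESS {group}: only {size} employee(s) match")
    else:
        print(f"OK to report {group} (n={size})")
# OK to report ('40+', 'US-West', 'Finance', '10y+') (n=6)
# SUPPRESS ('under-40', 'US-East', 'Engineering', '<2y'): only 1 employee(s) match
```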
