X users are still complaining about arbitrary shadowbanning

Users of Elon Musk-owned X (formerly Twitter) continue to complain that the platform is engaging in shadowbanning — restricting the visibility of posts by applying a “temporary” label to accounts, which can limit the reach of their content — without explaining why it has imposed the sanctions.

A search on X for the phrase “temporary label” surfaces multiple instances of users complaining they’ve been flagged by the platform and told, via an automated notification, that the reach of their content “may” be affected. Many of these users express confusion as to why they’re being penalized, having apparently been given no meaningful explanation for the restrictions imposed on their content.

The same complaints show users appear to have received only generic notifications about the reasons for the restrictions — including a vague notice in which X states their accounts “may contain spam or be engaging in other types of platform manipulation”.

The notices X provides contain no more specific reasons, no information on when (or whether) the limit will be lifted, and no route for affected users to appeal against having the visibility of their account and its content degraded.

“Yikes. I just received a ‘temporary label’ on my account. Does anyone know what this means? I have no idea what I did wrong besides my tweets blowing up lately,” wrote X user Jesabel (@JesabelRaay), who appears to mostly post about movies, in a complaint Monday voicing confusion over the sanction. “Apparently, people are saying they’ve been receiving this too & it’s a glitch. This place needs to get fixed, man.”

“There’s a temporary label restriction on my account for weeks now,” wrote another X user, Oma (@YouCanCallMeOma), in a public post on March 17. “I have tried appealing it but haven’t been successful. What else do I have to do?”

“So, it seems X has placed a temporary label on my account which may impact my reach. ( I’m not sure how. I don’t have much reach.),” wrote X user Tidi Grey (@bgarmani) — whose account suggests they’ve been on the platform since 2010 — last week, on March 14. “Not sure why. I post everything I post by hand. I don’t sell anything spam anyone or post questionable content. Wonder what I did.”

The fact these complaints can be surfaced in search results means the accounts’ content still has some visibility. But shadowbanning can encompass a spectrum of actions — with different levels of post downranking and/or hiding potentially being applied. So the term itself is something of a fuzzy label — reflecting the operational opacity it references.

Musk, meanwhile, likes to claim de facto ownership of the free speech baton. But since he took over Twitter/X, the shadowbanning issue has remained a thorn in the billionaire’s side, taking the sheen off claims that he’s laser-focused on championing free expression. Public posts expressing confusion about account flagging suggest he has failed to resolve long-standing gripes about arbitrary reach sanctions. And without transparency on these content decisions there can be no accountability.

Bottom line: You can’t credibly claim to be a free speech champion while presiding over a platform where arbitrary censorship continues to be baked in.

Last August, Musk claimed he would “soon” address the lack of transparency around shadowbanning on X. He blamed the difficulty of tackling the problem on the existence of “so many layers of ‘trust & safety’ software that it often takes the company hours to figure out who, how and why an account was suspended or shadowbanned” — and said a ground-up code rewrite was underway to simplify this codebase.

But more than half a year later, complaints about opaque and arbitrary shadowbanning on X continue to roll in.

Lilian Edwards, an internet law academic at Newcastle University, is another X user who has recently been hit by unexplained restrictions on her account. In her case the shadowbanning appears particularly draconian, with the platform hiding her replies to threads even from users who directly follow her (in place of her content they see a “this post is unavailable” notice). She, too, can’t understand why she would be targeted for shadowbanning.

On Friday, when we were discussing the visibility issues she’s experiencing on X, her DM history appeared to have been briefly ‘memoryholed’ by the platform, too — with our full history of private message exchanges not visible for at least several hours. The platform also did not appear to be sending the standard notification when she sent DMs, meaning the recipient of her private messages would need to manually check the conversation for new content, rather than being proactively notified she had sent a new DM.

She also told us her ability to RT (i.e., repost) others’ content seems to be affected by the flag on her account, which she said was applied last month.

Edwards, who has been on X/Twitter since 2007, posts a lot of original content on the platform — including plenty of legal analysis of tech policy issues — and is very obviously not a spammer. She’s also baffled by X’s notice about potential platform manipulation. Indeed, she said she was actually posting less than usual when she got the notification about the flag on her account, as she was on holiday at the time.

“I’m really appalled at this because those are my private communications. Do they have a right to down-rank my private communications?!” she told us, saying she’s “furious” about the restrictions.

Another X user — a self-professed “EU policy nerd”, per his platform bio, who goes by the handle @gateklons — has also recently been notified of a temporary flag and doesn’t understand why.

Discussing the impact of this, @gateklons told us: “The consequences of this deranking are: Replies hidden under ‘more replies’ (and often don’t show up even after pressing that button), replies hidden altogether (but still sometimes showing up in the reply count) unless you have a direct link to the tweet (e.g. from the profile or somewhere else), mentions/replies hidden from the notification tab and push notifications for such mentions/replies not being delivered (sometimes even if the quality filter is turned off and sometimes even if the two people follow each other), tweets appearing as if they are unavailable even when they are, randomly logging you out on desktop.”

@gateklons posits that the recent wave of X users complaining about being shadowbanned could be related to X applying some new “very erroneous” spammer detection rules. (And, in Edwards’ case, she told us she had logged into her X account from her vacation in Morocco when the flag was applied — so it’s possible the platform is using IP address location as a (crude) signal in its detection assessments, although @gateklons said they had not been travelling when their account got flagged.)
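To illustrate why IP location would be such a crude signal — and this is purely a hypothetical sketch, not anything we know about X’s actual systems — consider a naive detection rule that flags any login from outside an account’s usual country:

```python
# Hypothetical illustration only: a naive location-based spam heuristic of the
# kind speculated about above. Nothing is known about X's actual detection logic.

from collections import Counter

def usual_country(login_countries: list[str]) -> str | None:
    """Return the country this account has logged in from most often."""
    if not login_countries:
        return None
    return Counter(login_countries).most_common(1)[0][0]

def looks_suspicious(login_countries: list[str], latest_country: str) -> bool:
    """Naive rule: flag any login from outside the account's usual country."""
    home = usual_country(login_countries)
    return home is not None and latest_country != home

# Why the signal is crude: a long-standing UK-based account logging in once
# from a holiday in Morocco trips the flag -- a false positive on a real user.
history = ["GB"] * 200
print(looks_suspicious(history, "MA"))  # True
```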

We reached out to X with questions about how it applies these sorts of content restrictions but at the time of writing we’d only received the standard automated response from its press email, which reads: “Busy now, please check back later.”

Judging by search results for “temporary label”, complaints about X’s shadowbanning look to be coming from users all over the world, and from various points on the political spectrum. But for X users located in the European Union there’s now a decent chance Musk will be forced to unpick this Gordian Knot — as the platform’s content moderation policies are under scrutiny by Commission enforcers overseeing compliance with the bloc’s Digital Services Act (DSA).

X was designated as a very large online platform (VLOP) under the DSA, the EU’s content moderation and online governance rulebook, last April. Compliance for VLOPs, which the Commission oversees, was required by late August. The EU went on to open a formal investigation of X in December — citing content moderation issues and transparency as among a longer list of suspected shortcomings.

That investigation remains ongoing, but a spokesperson for the Commission confirmed “content moderation per se is part of the proceedings”, while declining to comment on the specifics of an ongoing investigation.

“As you know, we have sent Requests for Information [to X] and, on December 18, 2023, opened formal proceedings into X concerning, among other things, the platform’s content moderation and platform manipulation policies,” the Commission spokesperson also told us, adding: “The current investigation covers Articles 34(1), 34(2) and 35(1), 16(5) and 16(6), 25(1), 39 and 40(12) of the DSA.”

Article 16 sets out “notice and action mechanism” rules for platforms — although this particular section is geared towards making sure platforms provide users with adequate means to report illegal content. The content moderation issue users are complaining about in respect of shadowbanning, by contrast, relates to arbitrary account restrictions being imposed without explanation or a route to seek redress.

Edwards points out that Article 17 of the pan-EU law requires X to provide a “clear and specific statement of reasons to any affected recipients for any restriction of the visibility of specific items of information” — with the law broadly drafted to cover “any restrictions” on the visibility of a user’s content, any removal of their content, and any disabling of access to, or demotion of, content.

The DSA also stipulates that a statement of reasons must — at a minimum — include specifics about the type of restriction applied; the “facts and circumstances” behind the decision; whether any automated decision-making was involved in flagging the account; details of the alleged T&Cs breach or contractual grounds for taking the action, with an explanation of it; and “clear and user-friendly information” about how the user can seek to appeal.
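To make those requirements concrete, here is a minimal sketch of the fields a DSA-style statement of reasons would need to carry, loosely modelled on Article 17(3). The structure and field names are our own illustration — not X’s format, nor an official Commission template:

```python
# Illustrative only: a minimal model of what Article 17(3) of the DSA requires
# a statement of reasons to contain. Field names are our own invention.

from dataclasses import dataclass

@dataclass
class StatementOfReasons:
    restriction_type: str         # e.g. visibility restriction, removal, demotion
    facts_and_circumstances: str  # what the decision was actually based on
    automated_means_used: bool    # whether automated detection/decision-making was involved
    grounds: str                  # the alleged T&Cs breach or legal ground, explained
    redress_information: str      # clear, user-friendly instructions for appealing

# X's current boilerplate, by contrast, reduces to a single vague line:
x_notice = "Your account may contain spam or be engaging in other types of platform manipulation."
```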

In the public complaints we’ve reviewed it’s clear X is not providing affected users with that level of detail. Yet, for users in the EU where the DSA applies, it is required to be that specific. (NB: Confirmed breaches of the pan-EU law can lead to fines of up to 6% of global annual turnover.)

The regulation does include one exception to Article 17 — exempting a platform from providing the statement of reasons if the information triggering the sanction is “deceptive high-volume commercial content”. But, as Edwards points out, that boils down to pure spam — literally posting the same spammy content over and over. (“I think any interpretation would say high volume doesn’t just mean lots of stuff, it means lots of more or less the same stuff — deluging people to try to get them to buy spammy stuff,” she argues.) Which doesn’t appear to apply here.

(Or, well, unless all these accounts making public complaints manually deleted loads of spammy posts before posting about the account restrictions — which seems unlikely for a number of reasons, including the volume of complaints, the variety of accounts reporting themselves affected, and how similarly confused the users’ complaints sound.)

It’s also notable that even X’s own boilerplate notification doesn’t explicitly accuse restricted users of being spammers; it just says there “may” be spam on their accounts or some (unspecified) form of platform manipulation going on. The latter claim walks further away from the Article 17 exemption — unless the platform manipulation in question also involves “deceptive high-volume commercial content”, in which case it would surely fall under the spam reason, so why mention platform manipulation at all?

X’s use of a generic claim of spam and/or platform manipulation, slapped atop what seem to be automated flags, could be a crude attempt to circumvent the EU law’s requirement to provide users with both a comprehensive statement of reasons for why their account has been restricted and a way for them to appeal the decision.

Or it could just be that X still hasn’t figured out how to untangle legacy issues in its trust and safety reporting systems — which, per an explainer last year by Twitter’s former head of trust and safety, Yoel Roth, rely on “free-text notes” that aren’t easily machine readable, and which now look like a growing DSA compliance headache — and replace that confusing mess of manual reports with a new codebase able to programmatically parse enforcement attribution data and generate comprehensive reports.

As has previously been suggested, the headcount cuts Musk enacted when he took over Twitter may be taking a toll on what the company is able to achieve, and on how quickly it can untangle knotty problems.

X is also under pressure from DSA enforcers to purge illegal content from its platform — an area of specific focus for the Commission probe — so perhaps (and we’re speculating here) it’s doing the equivalent of flicking a bunch of content visibility levers in a bid to shrink other types of content risk, but leaving itself open to charges of failing its DSA transparency obligations in the process.

Either way, the DSA and its enforcers are tasked with ensuring this kind of arbitrary and opaque content moderation doesn’t happen. So Musk & co are absolutely on watch in the region. Assuming the EU follows through with vigorous and effective DSA enforcement X could be forced to clean house sooner rather than later, even if only for a subset of users located in European countries where the law applies.

Asked during a press briefing last Thursday for an update on its DSA investigation into X, a Commission official pointed back to a meeting last month between the bloc’s internal market commissioner, Thierry Breton, and X CEO Linda Yaccarino, saying Yaccarino had reiterated during that video call the company’s claim that it wants to comply with the regulation. In a post on X offering a brief digest of what the meeting had focused on, Breton wrote that he “emphasised that arbitrarily suspending accounts — voluntarily or not — is not acceptable”, adding: “The EU stands for freedom of expression and online safety.”

Balancing freedom and safety may prove to be the real Gordian Knot. For Musk. And for the EU.




