The ‘algospeak’ code words TikTokers use to post about sex, self-harm
Earlier this year, influencer Julia Fox found herself in hot water for commenting on a cryptic TikTok video.
“Gave a girl mascara, and it must have been so good that she decided that her and her friend should both try it without my consent,” TikToker Conor Whipple said in the clip.
A confused Fox responded, “Idk why but I don’t feel bad for you lol.”
Whipple shot back, “You don’t feel bad that I was sexually assaulted?”
Unbeknownst to Fox, the term “mascara” is used as a stand-in for “sex” on social media. TikTokers use it as a code word to discuss sexual assault while circumventing the app’s censorship of sensitive and explicit content.
“Mascara” is just one of many coded terms in an emerging internet language — “algospeak” — that attempts to outsmart the algorithm.
It’s a linguistic curiosity of the digital age. Sometimes unrelated words like “mascara” stand in for another word entirely. Other times, words that sound like other words are subbed in, like “seggs” for “sex.”
While algospeak gives TikTokers a way to speak their minds without fear of censorship, it’s also allowing potentially dangerous content to spread under the radar.

Algospeak code words are allowing kids to access content about self-harm, suicide, and eating disorders without social media safeguards — or their parents — knowing about it.
The phenomenon is nothing new, according to AI expert Vince Lynch.
“It was more for illegal activities where people were selling things on the internet they weren’t supposed to be selling, and they were looking for their market,” Lynch, who is the CEO of IV.AI, told The Post. “It has always been a thing, but it’s growing much more in mainstream audiences.”
He said the illegal exotic animal market was an early hotbed of proto-algospeak, while “snow” and “snowboarding” were popular Craigslist code words for cocaine in the aughts.

Algospeak is particularly taking root on TikTok, where content moderation is pervasive. According to the company’s own data, TikTok removed 110 million videos from just July to September of last year — around 1% of all content posted to the app.
Roughly half of those videos were automatically removed by artificial intelligence that detects prohibited content. That’s why clever content creators have been replacing words, like swapping “kill” out for “unalive.”
TikTokers also report rampant “shadow banning” on the platform. Although the app is notoriously opaque about how its algorithm chooses which content to amplify, many content creators complain that their videos are artificially suppressed.
While shadow-banned content isn’t removed, users report far less engagement on videos that discuss certain topics. TikToker Lindsay Makes Videos posts videos about autism and neurodivergence. She recently took to the platform to complain about her content being suppressed.

“If I’m talking about these concepts but I’m not using the correct censored word, then the videos aren’t going to get to the people they were intended for,” Lindsay said. “If there were a resource that would tell us what the problem words are and what the replacement words are, it would be a huge help.”
In fact, lists do exist — and, like Lindsay suspected, “autistic” is supposedly among the words TikTok suppresses.
SeanVV, a self-described “educator of Big Tech terms of service and privacy policies,” created a six-page list of hashtags that he claims are “banned or throttled” by TikTok’s algorithm.
It includes everything from #selfharm, #rape, and #QAnon to #single, #teen, and #instagram. Because so many words are supposedly suppressed, TikTokers find themselves in a guessing game.

Many are erring on the side of using algospeak, even in contexts where it might not be necessary.
“Yt people” has been widely used in videos about the supposedly annoying habits of white people. “Le$bean” has sometimes taken the place of “lesbian” in videos. And, when TikTok tried to crack down on Covid-19-related content, “panini” and “panoramic” became stand-ins for the word “pandemic.”
According to a survey conducted by TELUS International, 51% of internet users say they’ve noticed so-called “algospeak” online. Among Gen Z, that number soars to 72%. Roughly one in three Gen Zers said they’ve used the newfangled internet language themselves.

But algospeak can be a truly dangerous thing when it’s used to bypass some of TikTok’s more serious content moderation rules.
According to TikTok’s community guidelines, the app does “not allow content depicting, promoting, normalizing, or glorifying activities that could lead to suicide, self-harm, or disordered eating.”
Accordingly, users who search “suicide” on the platform are redirected to the suicide crisis hotline. And those who search for “anorexia” are given the National Eating Disorders Association’s phone number.
But these measures mean nothing when algospeak code words circumvent them entirely. Toxic sub-communities have unified around cryptic, coded language.
“[Algospeak] is allowing these groups to find each other more quickly, which I think can be really dangerous, especially when these people are looking for something that’s harmful to themselves,” Lynch warned.

For instance, TikTok is flooded with eating disorder content masked by algospeak. “Anorexic” has been shortened to “ana” in disturbing videos of emaciated women. Self-starving TikTokers trade extreme dieting tips with the tag “WIEIAD,” short for “what I eat in a day.”
Meanwhile, self-harm-related content is often tagged with “cutt1ng.” And suicidal TikTokers talk about “unaliving” themselves rather than killing themselves.

Algospeak is a particularly frightening prospect considering TikTok has millions of impressionable young users. The minimum age requirement of 13 is easily circumvented by preteens who lie about their date of birth.
Though he says the technology behind content moderation is innovating rapidly, Lynch worries algospeak makes content moderation a constant game of catch-up.
“The more regulation that happens on a platform, the more ways people will find of getting around it,” he said.