June 19, 2020

TikTok rife with racist, antisemitic content aimed at children – Research

An example of antisemitic content spread through TikTok (photo credit: screenshot)
Far-right extremists have found a new social media platform for spreading hateful content online, taking to the relatively new TikTok platform, which is used primarily by young children, a new study has revealed.

The study was written by Gabriel Weimann, professor of communication and senior researcher at the Institute for Counter-Terrorism (ICT), professor emeritus at the University of Haifa and guest professor at the University of Maryland, together with Natalie Masri, a research assistant and graduate student at the ICT. It is set to be published in the academic journal Studies in Conflict & Terrorism.

Weimann is an expert in the online activities of terrorists and has been studying them since 1998. While he previously focused on other groups, such as Islamist terrorists, he has recently shifted to studying the far Right. This, he told The Jerusalem Post, is because of how widespread far-right extremism has become in recent years.

“The far Right is now the most dominant source of terrorism in the world,” Weimann explained. “More people are killed by the far Right today than by Islamist terrorists.”

In addition, the far Right has also gone unchecked for too long, he added, as it has only recently begun to be considered terrorism.

“The far Right has traditionally been protected from being defined as terrorism because they were protected by free speech and politics, among other things,” Weimann said. “But recently they started going beyond hate speech and have started committing actual, public attacks.”

The idea of the far Right using online platforms to spread hate is nothing new; it has been going on for at least three decades. However, most people think only of their activities on mainstream social media platforms such as Facebook, Twitter and Instagram, or on fringe sites notorious for such content, such as 4chan.

TikTok, however, has been largely overlooked.

Weimann discovered the far Right’s presence on TikTok by accident, after research assistant and co-author Masri came across it while doing research online.

“I was shocked when she told me, because I thought it was only for kids. Even my granddaughter uses TikTok,” Weimann told the Post. “But after I saw what she found and we looked into it, I decided to spend several months surveying the app.”

Developed by the Chinese company ByteDance, TikTok is a relatively new social media app, having launched in China in 2016 and abroad in 2017. The app allows users to upload and view short lip-synced videos through an easy-to-use interface.

While many users seek only to upload humorous and entertaining memes, others use the app to spread hateful messages, with many users sharing neo-Nazi and white supremacist propaganda or calling for violence against Jews and people of color.

TikTok is also the fastest-growing app and social media platform in the world. It is the seventh most downloaded app of the decade, with over two billion downloads, and it boasts a user base in the hundreds of millions.

This, Weimann explained, is one of the reasons why the far Right’s presence on TikTok is especially dangerous when compared to Facebook and Twitter.

Another reason is the age of its users.

“The app is marketed towards young children, from the age of 4-5 to 15-16. A very innocent and gullible audience,” he told the Post.

Though the app’s Terms of Service specify that all users must be aged 13 or older, many are still clearly younger. In addition, 41% of its users are aged 16-24.

This exposure to hateful content at such a young age is especially dangerous, because it has the potential to glorify hate crimes and seduce impressionable youths into developing extremist views, or even committing violent acts themselves.

“There is a direct correlation between the rise of far-right extremism and the far Right’s presence online,” Weimann explained. “All of the far-right individuals who committed terrorist attacks against synagogues – like in Halle, Pittsburgh and Poway – and mosques – like in Christchurch – were active online, uploading extremist content and being exposed to the extremist content of others. Some of these terrorists became heroes, too. Brenton Tarrant, who was behind the 2019 Christchurch mosque shooting in New Zealand, videoed and livestreamed his attack, and he’s now celebrated.”

While adults are unlikely to stream their attacks on TikTok the way Tarrant did on Facebook Live, Weimann explained that this is not the far Right’s goal on the platform in the first place.

“It’s a different audience. That means they [the far Right] are trying to seduce those that are the future fighters,” he told the Post. “This is not about adults who will commit attacks within a day or two of seeing such content, but more about recruiting and seducing the next generation of far-right extremists. The seduction is a gradual and sophisticated process, and all those extremists we find today as adults were exposed to extremist content online.”

The study found that a majority of far-right hate speech on the app was related to antisemitism and Holocaust denial. This included videos of Nazi rallies carrying a variety of antisemitic messages.

Other videos centered on racism and white supremacy, including one of a young boy saying “white people need to start owning some black people, so take your black friends and sell them.” Another video presents itself as the intro screen of a fictional video game called Brenton Tarrant’s Extreme Mosque Shooter, with “start” and “load game” options alongside a picture of Tarrant.

Another trend the study found was that many accounts were named after far-right attackers and slogans. These include accounts named after individuals such as Brenton Tarrant or organizations such as the Ku Klux Klan, as well as accounts using codes such as the number 88, a white supremacist numerical code for “Heil Hitler.”

One way to trace the spread of hate on the app is by checking the number of results and views associated with a particular hashtag. For example, while searching for “AdolfHitler” or “KKK” brings up the message “no results found – this phrase may be associated with hateful behaviour,” users can still find content under hashtags of the same phrases.

“Unlike most other online platforms that are, in a way, Western and owned by Western companies, TikTok is a Chinese company, so it’s harder for them to be responsible with regulating their content,” Weimann explained. “They aren’t pressured the same way Americans can pressure Facebook to moderate content.”

Though TikTok’s Terms of Service state that users may not upload any content that is inflammatory, offensive, hateful, discriminatory, racist, sexist, antagonistic or criminal, the app has yet to enforce these rules.

And yet, it isn’t as if TikTok is incapable of regulating the content on the platform.

“The company is very careful about any content concerning China,” he said. “If you upload any content whatsoever that is against the Chinese government, it will be taken down. They are very careful about this. But they aren’t careful about anything else.”

In order to pressure the platform into regulating itself, Weimann suggests a combination of political and economic pressure. Though political pressure against China is difficult, economic pressure has precedent: TikTok was already forced to pay a $5.7 million fine to the US government in 2019 after it illegally collected personal information from children under the age of 13.

This combination of political and economic pressure has worked before. One example is from 2018, when the social media platform Tumblr banned adult content under a combination of factors: pressure from advertisers, a US federal law making websites liable for knowingly assisting or facilitating illegal sex trafficking, and content restrictions imposed by Apple.

Weimann agreed that something like this could happen again, but said the first and most important step in getting TikTok to self-regulate is to raise awareness.

“We need to create awareness about the dangers of TikTok, but that isn’t easy because most people think it’s just a clean app for kids,” he told the Post. “So the first step is to create global awareness. TikTok is not that clean.

“After that, solutions will come by themselves. But one measure will not be enough, multiple measures will be needed. And if we combine these measures, we can make TikTok improve its online content.”

Source: The Jerusalem Post