October 29, 2019

Understanding online antisemitism: towards a new qualitative approach

By Matthias J. Becker

When the antisemitic Pittsburgh shooting happened roughly a year ago, on 27 October 2018, mainstream media drew attention to the assailant’s activities online before he committed his crimes. The Guardian noted that the ‘suspect railed against Jews and Muslims on site used by “alt-right”. Robert Bowers appears to have used the platform Gab to accuse Jews of bringing “evil Muslims” into US’. And only recently, on 9 October 2019, in Halle (Germany), the gunman referred to an antisemitic conspiracy theory on the Amazon-owned platform Twitch, insinuating that Jews aim to destroy German culture, before he tried to kill roughly 80 people celebrating Yom Kippur at a synagogue.

From the perspective of interdisciplinary internet studies, these are troubling but not astonishing observations. For several years now, antisemitic, racist and misogynist attitudes have been disseminated especially on right-wing supremacist platforms, but also on subchannels of mainstream social media such as Reddit, Twitter, Facebook or YouTube. It should not come as a surprise that there are correlations, past and present, between hate speech and hate crime. The Nazi crimes would not have been possible without an omnipresent antisemitic discourse in all parts of German society throughout its history. Today, the way users in a web milieu continuously frame and evaluate certain individuals and/or groups (who in their eyes represent a rejected out-group) has an impact on their treatment of the latter – in digital contexts, but potentially also in analogue ones.

Pittsburgh and Halle are not the only examples in which individuals have expressed hate speech before committing related crimes. With regard to the terror attacks on the two mosques in Christchurch, New Zealand, the synagogue in Poway, close to San Diego, and recently on the Walmart in El Paso, the assailants all exhibited a proximity to the right-wing extremist platform 8chan. It is highly probable that the assailants were radicalised in the first place through sustained engagement with antisemitic (and other hateful and exclusionary) outlooks in such a virtual environment.

The transition of hate speech into hate crime is apparent in all these events. We also see an imitation effect, a mutual influence among lone wolves taking up arms. Of course, such correlations are not monocausal – they always depend on the predispositions and needs of the individual – but it is striking that all the assailants belong to the same ideological spectrum and web milieu.

Triggered by the terrorist attacks of the last twelve months, the media, politicians and the public have started to pay more attention to antisemitism and other hate ideologies online. For far too long, these internet trends were ignored. In the political sphere, decision-makers have not seen the danger of radicalising language use there, despite the huge impact of the web, especially on younger generations, who tend to experience and engage with new ideas primarily online. Today, language use on the internet shapes the way young people think and feel, but neither the research community nor decision-makers have caught up with this fact.

Questions about the nature, consequences and challenges of the digitalisation of society have risen too slowly up the research agenda. There are few major studies on the nature of virtual forms of antisemitism, and antisemitism research in combination with internet studies is still a fringe phenomenon.

In applied linguistics, discourse analysis and communication science, the web has been (mis)understood as a new version of conventional media (like print media or television). Consequently, research methods and tools have not been adapted to the new challenges presented by the virtual world. Web 2.0 follows its own rules. It allows a rapid, uncontrollable and decentralised dissemination of hatred, the dynamics and consequences of which are almost completely unknown at this stage. Linguistic studies that try to capture these current trends are regularly based on randomly selected – and thus hardly representative – data sets.

Besides the failure to reach data saturation, the applied methods tend to be outdated, especially because they do not take into account the three-dimensionality of the virtual sphere: content online is determined by links to other websites, establishing various forms of intertextuality. This is accompanied by multiple semiotic codes and a symbolic language use (through abbreviations, word play, allusions, etc.) that people in mainstream societies routinely draw on when expressing hate speech. None of these aspects can be addressed by conventional research designs.

In light of the centrality of the internet and the dynamics of radicalisation, the lack of adequate studies is highly worrying. We are being confronted with fundamental changes in society, triggered by digitalisation, but we lack reliable studies into the presence of antisemitism (and other hate ideologies) on the internet, whether in the US or in Europe. Internet researchers tend to deal with the topics of antisemitism and hate speech by measuring the amount of racist and antisemitic hate speech online. This quantitative focus gives us an overview of the presence of slurs, e.g. on social media platforms, and of trends in their use. The problem, however, is that such studies rest on a rather limited understanding of what actually constitutes hate speech. Hate speech is not realised through slurs and insults only. Antisemitic hate speech is realised as soon as the speaker reproduces a certain stereotype, such as Jewish greed, power or child murder. It goes without saying that stereotypes – like any other conceptual entities – can be conveyed through various verbal patterns.

Another way researchers have tried to make the vast amount of digital text more transparent is the use of so-called ‘automated sentiment analyses’. Here, not only key words (such as the aforementioned slurs) but also the emotional dimensions of words are taken into account. What kind of feelings does user X express with regard to Jews? Are there negative emotional words like hate, fear or despise? Although much more sophisticated than keyword searches alone, such an approach rests on the presumption that people who hold an antisemitic viewpoint express it directly. Yet, in contrast to political debate in general, hate ideologies in Western societies are largely conveyed implicitly and without direct mention of emotions, since such utterances would harm the self-image of the speaker. Instead, speakers use irony, or they simply forgo emotional expression altogether.
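To make the limitation concrete, here is a minimal sketch in Python. The keyword list is a hypothetical placeholder, and the coded utterances are examples of the kind discussed in this essay; a purely lexical filter flags none of them:

```python
# Hypothetical keyword list standing in for the slur/phrase lists used in
# quantitative studies; only explicit formulations appear here.
EXPLICIT_TERMS = {"jews control the world", "jewish greed"}

def flags(text: str) -> bool:
    """Return True if any listed keyword occurs in the text (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in EXPLICIT_TERMS)

# Coded utterances of the kind described in this essay: none of them
# contains a listed keyword, although each reproduces a stereotype.
coded_examples = [
    "Who is holding out his hand once again?",  # alludes to 'Jewish greed'
    "USrael dictates policy again",             # US/Israel conspiracy code
    "The East coast lobby pulls the strings",   # 'Jewish elite' allusion
]

print([flags(t) for t in coded_examples])  # → [False, False, False]
```

The explicit statement would be caught (`flags("Jews control the world")` is `True`), while every coded variant slips through – which is exactly why counting keyword hits measures only the tip of the iceberg.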

The observation that this language use reaches us in coded forms, which cannot be anticipated before current web discourses are analysed in qualitative detail, leads to the conclusion that, so far, results from internet studies (even those using sophisticated tools for understanding dynamics online) are flawed, showing us only the tip of the iceberg of hate speech. Despite these respectable efforts to understand and measure hate speech online, the internet is still a terra incognita.

What is meant by ‘coded speech’ or ‘implicitness’? While conducting detailed analyses of antisemitic hate speech, one comes across countless forms of more or less coded words and phrases. In contemporary Germany, for example, only the smallest share of antisemitism is conveyed explicitly. One example: a speaker who believes Jews to be greedy can say so explicitly – or he/she can imply that idea via allusions or indirect speech acts. When, in the context of the German culture of memory, someone asks: ‘Who is holding out his hand once again?’, readers can easily infer that the speaker is referring to Jews and to the construct that Jews greedily exploit the Holocaust. Yet the speaker has mentioned neither Jews nor the stereotype itself.

Other examples of the implicit or coded reproduction of antisemitic stereotypes are forms of word play – such as USrael (implying the well-known construct of an alleged conspiracy between the US and Israel) or Zionazis and iSSrael (indicating that Israel is the new Nazi state) – or allusions such as East coast lobby (which suggests the existence of a powerful Jewish elite controlling US and European policies and economies). The slogan Jews will not replace us, chanted at Charlottesville, implies that – by triggering mass immigration – Jews conspire to destroy white, Christian culture. This myth has also been reproduced in Hungary with regard to the philanthropist George Soros. His very name has become a code, standing for an alleged Jewish conspiracy against mainstream Hungarian society.

Another example of contemporary indirect antisemitism online is the far-right conspiracy theory QAnon, which implies a secret plot against Trump and American democracy. According to that myth, a global banking elite is pulling the strings of a deep state. When QAnon introduces the idea of this elite, it is communicating the idea of disguised Jewish omnipotence (and immorality).

All these circumlocutions make up a large part of antisemitic web discourse on mainstream and fringe websites. Consequently, explicit references to an antisemitic stereotype do not have to occur for an antisemitic idea to be expressed. The verbal package might seem unproblematic, but the underlying content is the same as in the more explicit forms of antisemitic discourse. And that makes indirect or coded antisemitism particularly dangerous, as it escapes the scrutiny that might fall on a statement like Jews control the world. However, as stated above, much research so far has been based exclusively on slurs or explicit statements, and so has not taken these indirect, coded or allusive forms of antisemitic communication into account.

There are many reasons people use coded speech. It is a way of creating intimacy between speaker and readers, a sharing of knowledge that other participants in the discourse might not have. This makes coded speech especially popular among young people who wish to distance themselves from older generations. Furthermore, in Germany as well as in many other Western countries, indirect utterances prevent the messenger from being pinned to the message: he/she can always deny the content if an unpleasant situation arises. This is crucial because the accusation of being antisemitic or racist might lead to the speaker’s exclusion from his/her peer group. Hence, implicit patterns of language use function as a ‘communicative protective measure’, shielding the speaker’s self-image.

Another reason language is so often coded is that language is always changing. Words and meanings are the product of a permanent metamorphosis. Shifts in antisemitic discourse, especially, are routinely described by experts through the metaphor of a chameleon. For example, the aforementioned rhetorical question insinuating that Jews exploit the Holocaust reproduces the age-old stereotype of Jewish greed. However, speakers no longer talk about an alleged Jewish demand for money but about an instrumentalisation of the Holocaust – the updated version of the old stereotype, adapted to current or recent historical events. With regard to Israel-related antisemitism, there are a host of communicative detours. Stereotypes that, in the 19th century for example, were directly projected onto the Jewish out-group in one’s own country are now transferred onto the Jewish state. In many cases, framing Israelis via antisemitic stereotypes has reportedly led to the exclusion and murder of Jews in Western societies, e.g. recently in France.

Web 2.0 has further catalysed the metamorphosis of antisemitic hate speech. The reason for this is that the active participation of web users – an interactionality based on bi-directional exchange – has revolutionised society, inducing bottom-up processes in knowledge transfer (formerly directed by conventional media). The impact of the web can be compared to the invention of the printed book. The production and development of language accelerates every minute, and the few years in which the internet has existed have been only a blink in the history of languages as well as of political discourse. It is still impossible to predict how future language and communication patterns will develop – the changes are far too swift and too decentralised.

The infancy of the internet means that its norms are in flux. In analogue contexts, the expression of ideas and the interaction between members of society – in TV talk shows, in parliamentary debates, on university campuses and in school classes – are based on written and unwritten rules and norms. These norms, culturally and socially developed (and embedded in applicable law), barely apply to the internet. Web users often do not abide by a website’s netiquette (i.e. its guidelines for interaction), and providers or moderators do not regularly check users’ compliance with these guidelines. Hate speech online is only sporadically prosecuted, since many cases go unreported; moreover, a country’s laws may not be clear or strict enough in such matters. Besides prosecution by law enforcement agencies, the closure of user profiles or even whole platforms has become more and more common in recent years (as recently, after the El Paso shooting, with regard to the platform 8chan in the US). However, such closures merely prompt individuals and groups to find new providers and continue their activities in other parts of the web. The internet therefore represents a vacuum in which the institutions that usually regulate interaction and guarantee the well-being of individuals barely have any authority. Under such conditions, individuals and groups can cultivate destructive drives and behave in ways that would be illegitimate and even illegal in analogue contexts.

The uncontrollability of the most important platform for information acquisition and political debate should concern policy-makers more than it does. Data from the UK Labour Force Survey (LFS) show that ‘virtually all [UK] adults aged 16 to 44 years in the UK were recent internet users (99 per cent) in 2019’. There have been positive as well as negative implications. In contrast to past social movements, in the digital era separated people and movements can reach out to one another – beyond social, cultural, socio-economic and other borders. The digital era, however, also allows destructive attitudes to become more visible, and individuals with such outlooks to interact, consolidate their respective outlooks and forge alliances. In short, the infrastructure of the web facilitates radicalising interaction between different societal and political milieus.

The internet has communicative conditions – i.e. conditions that result from the characteristics of the medium and determine the behaviour patterns of web users – and these also shape how antisemitic hate speech is constructed online. Among these conditions, anonymity and the velocity/spontaneity of communication are perhaps the best known. In other words, web users are practically invisible and, amid the rapid exchange of ideas, tend not to reflect on what they write or on how many people they potentially reach with their statements. These are perfect conditions for open articulation, for emotionally charged and polemical exchanges, and for explicit discrimination against certain groups that, in the offline world, would lead to sanctions. Such a climate guarantees the potentially permanent accessibility of hate speech, and a reciprocal confirmation among users, e.g. with regard to antisemitic discrimination.

The vital importance of the internet as a catalyst and trendsetter for societal processes, and the unique communicative conditions that enable radicalisation, underline the urgent need to critically examine online discourses, to anticipate future trends and to combat hate speech. I have suggested that a new qualitative approach is needed to generate knowledge of coded and allusive language use in various contexts if we are to map and combat current expressions of Jew-hatred. This approach involves researchers systematically examining web comments or postings on relevant websites, drawing on their linguistic and cultural knowledge. The aim of qualitative content analysis is to structure discourse into manifest and latent content, or explicit and implicit units. Which antisemitic stereotypes are reproduced through which verbal and visual patterns? Do users of a certain website still refer to Rothschild, or do they use abbreviations or other allusions to invoke the stereotype of Jewish financial power? Only when researchers are able to define the repertoire of current antisemitic hate speech can they move to a second stage of quantitative analysis, now armed with categories based on the findings of the qualitative study.
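The two-stage design can be illustrated with a minimal, hypothetical sketch in Python – the categories and patterns below are illustrative stand-ins for what a real qualitative study would yield, not actual research categories:

```python
import re
from collections import Counter

# Stage 1 (qualitative): researchers derive a repertoire of verbal patterns
# per stereotype from close reading of web discourse. These few entries are
# hypothetical examples based on terms mentioned in this essay.
REPERTOIRE = {
    "financial power": [r"\brothschild\b", r"\beast coast lobby\b"],
    "conspiracy":      [r"\busrael\b", r"\bsoros\b"],
}

def count_categories(corpus):
    """Stage 2 (quantitative): count, per stereotype category, how many
    posts in the corpus contain at least one pattern from that category."""
    hits = Counter()
    for post in corpus:
        lowered = post.lower()
        for category, patterns in REPERTOIRE.items():
            if any(re.search(p, lowered) for p in patterns):
                hits[category] += 1
    return hits

posts = [
    "The East Coast lobby decides everything.",
    "Soros is behind it all.",
    "Nothing to see here.",
]
print(count_categories(posts))  # both coded posts are counted, the neutral one is not
```

The point of the ordering is visible in the code itself: the counting step is trivial, but it is only meaningful once the qualitatively derived repertoire exists – with an empty or naive pattern list, the same counter would return nothing.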

The ongoing confrontation with hatred online is key to developing strategies to combat antisemitism. Research and education must now integrate internet studies, antisemitism studies and mixed-methods research, marrying qualitative and quantitative approaches. It is only a question of time until the second, quantitative step can be covered by machine-learning tools. It will, however, always remain our task to understand and name the varieties of implicitness, which will be far too complex and ambivalent for machine learning and AI tools to process.

Matthias Becker is a research fellow at the Center for Research on Antisemitism at TU Berlin and the Center for German and European Studies (HCGES) at University of Haifa. After conducting several research projects on the use of antisemitic language in political and media campaigns he is now examining antisemitism in the British mainstream. In this essay he argues for the development of a new qualitative approach to understanding what he claims is still largely terra incognita: the ugly world of online antisemitic hate speech.