This chapter explores what happens to communities or members of ethnic or religious groups when they are targeted by online racist abuse. Two key dimensions are reported. Firstly, the dynamics of response are outlined among six highly targeted Australian communities regarding their first-hand experiences of managing racism and hate online. Secondly, two empirical case studies are discussed in which members of faith communities decided that the level of threat posed by the online harassment to which they were exposed required them to act. This chapter is about the everyday Internet users who suddenly discover they have wandered into the Dark Side.
What happens to communities or members of ethnic or religious groups when they are targeted by online racist abuse? In this chapter, we explore two dimensions of this question in the Australian context. Firstly, we explore the dynamics of response among six highly targeted Australian communities—Indigenous, African, Middle Eastern, Chinese, Jewish and Muslim—as individuals from these communities share their experiences of managing racism and hate online. Secondly, we report on two empirical case studies from Sydney, NSW, where members of faith communities, Muslim and Jewish, decided that the level of threat posed by the online harassment to which they were exposed required them to act. This chapter is about the everyday Internet users who suddenly discover they have wandered into the Dark Side.
Targets of Hate: Six Groups of Australian Online Users
As part of the research strategy for the CRaCR Project, six focus groups were convened in 2016 with Australian nationals of Indigenous, African, Middle Eastern, Chinese, Jewish and Muslim backgrounds. These groups were identified as the most highly targeted communities in the CRaCR online survey undertaken in 2013. We report here the outcomes of these focus groups with members of target communities, and then reflect on their implications.
Participants in focus groups from “target” communities
Muslim—mix of ethnicity including Pakistan, Eritrea, Egypt, Bangladesh and Somalia
Jewish—mix of ethnicity including South African, German, Russian and Israeli
Arab/Middle East—Syrian and Lebanese
African—Congolese/West African/Sierra Leonean
Chinese—mix of Mandarin and Cantonese speakers
The Indigenous Australian Community
Indigenous Australians today are the surviving descendants of the many nations that populated the Australian continent prior to the seizure of the country by the British Crown in the period after 1770. It is estimated that the Indigenous population at the time of the declaration of British Crown settlement in 1788 stood at about 750,000, speaking over 700 languages.
The 2011 Census identified about 550,000 people as being of Indigenous descent (with estimates of up to 770,000 to include those who had not answered the Census question); the community had a median age of 21 years (Australian Bureau of Statistics 2012). The 2016 Census found that the “typical Aboriginal and/or Torres Strait Islander” was 23 years old, compared with the “typical Aussie” in the general population who was 38 years old.
In general, the Indigenous population has poorer health outcomes, poorer education outcomes and far greater incarceration than the general population. While many have adopted European lifestyles and become more middle class, significant numbers remain impoverished. Discrimination against Indigenous people remains widespread, with very high rates of reported racism.
Indigenous Australians are the most harassed group online, and yet social media is very popular among them. All our respondents had a Facebook page, while two also had Instagram. Facebook is their main source of news. It provides them with access to their social networks and links them to local buying and selling information on sites like eBay. Most have links to Aboriginal cultural pages, and they use Facebook to alert their networks to issues of domestic violence and assault. They also use the Internet to research issues affecting Aboriginal people and have high levels of self-confidence about their Internet skills. The Internet serves them as a means to reinforce Black community, share information and, in some cases, discuss Aboriginal art. One respondent, a practicing Christian, would report anti-Aboriginal memes and also Muslim slandering of Christians.
Although the Internet brings myriad benefits, it does expose Indigenous people to a flow of racist memes and abuse based on stereotypical hatreds. This includes false claims about Indigenous mendacity, alcoholism and their access to Commonwealth benefits. Race, poverty and life on the edge of society regularly confront Indigenous people via social media and mainstream media comment threads. One participant referred to refugees getting financial support while Indigenous people lived in overcrowded and ramshackle accommodation on controlled debit cards (the BasicsCard). Another used Facebook to highlight the impact of methamphetamine (ice) on young Aboriginal people, sending it to his networks as a warning. Yet, many also used Facebook to simply “hang out with family” and friends. Many said they will not mix with people who refuse to put their faces on Facebook. “If you’ve got nothing to hide, you’d have your face there,” commented one participant.
One of the most controversial issues discussed by the group concerned the online game Survival Island 3, created by a Russian programmer, Kristina Fedenkova, in 2015, and available on Apple’s App Store, Google Play and YouTube Gaming. This game, it was claimed, requires gamers to hunt down and kill Aboriginal people (PV Reviews 2015).
When alerted to the existence of this game, a number of group members were so upset they contributed to a Change.org petition, “Killing Indigenous Australians is not a game,” to have it removed. It was only after 90,000 people had signed that Apple and Google, then Amazon, removed the game (Mantle 2015).
So, what happens when hate speech arrives in their space? The group immediately picked up on the psychological gains of hate speakers: “They’ve got that little knife going in and they’re twisting that knife,” said one participant. Members of this focus group believe that Facebook should be the entity responsible for removing the hate.
Facebook should be accountable for removing that person so they can’t have an account. They should be named and shamed, although when you report, they [the haters] make up a fake [identity] … and come back.
When they are harassed, tricked or intimidated, members of this group did use Facebook’s tools to “report, delete, block” other people and their posts. They also mentioned being able to brush it off:
Overall there’s not that much [abuse and racism] … somebody might put a smartarse comment on Aboriginals of Australia or something, and you sort of think, well you’re an idiot, and then you move on. The first five people [to respond] might really give it to them … [then] they back right off.
Reflecting on how racism has worsened, one member said:
Since computers have started the racists have become bolder. When I was young they would still have a go at me, but at least I could stand up for myself. Now they’re invisible attackers.
Another man commented:
For me, it seems to hurt more because you can’t have your say back. I’ve got very upset and very depressed and cried. It’s made me feel terrible.
Yet, despite the dangers, the Internet provides some safety. A number of groups agreed that sites like Cultural Survival Australia provide an affirmative environment to explore their culture and participate in conversations about their traditional world and its contemporary meanings. (Cultural Survival is a global website where First Nation people can find support for their struggles to survive. Its cultural affirmation marks it as a crucial player, and one that is well protected from cyber racism.) Group members also mentioned that they belong to a variety of local, private, online Aboriginal groups that discuss familial, political and cultural issues. Importantly, posting a comment on these sites often results in supportive feedback that acknowledges their contribution and helps refine their perspectives.
Even though the participants were quite active in their communities, they were not able to identify resources that might defend them on the Internet, nor Indigenous organisations that could act for them online. Their informal networks are strong, but their involvement in formal networks far weaker, suggesting that supportive intervention might need to come from elsewhere.
Australia’s African Community
African immigration to Australia has grown substantially in the past 20 years, and is one of the more significant changes to occur since the end of the White Australia Policy in the 1970s. Before 1976, the intake was primarily White South African (42% of all African-born residents in 2006), Mauritian (7.3% in 2006) or Egyptian (13.5% in 2006), and Christian or Jewish (Australian Bureau of Statistics 2008). By 2006, there were almost 250,000 African-born people living in Australia, accounting for 5.4% of the total overseas-born population. Communities whose members mainly (90% or more) arrived after 1996 include those from Liberia, Sierra Leone and Sudan. Participants in our focus group were from Sierra Leone and the Democratic Republic of the Congo.
The group was highly technology-savvy, citing their use of Facebook (nearly all), Instagram, WhatsApp, Viber, Imo, Snapchat and LinkedIn. They accessed news through proprietary sites (like the Sydney Morning Herald and CNN), news compilation services on Apple or news on Facebook from the main commercial channels, the Guardian, SBS and the BBC. Using Facebook to engage with issues of racism varied from scarcely ever, to regularly. One of the most attractive personalities to gather support has been Waleed Aly, an Egyptian Australian academic who also now features in a prime-time commercial talk show. He was controversially selected as the Golden Logie winner in 2016, a prize voted on by audiences for the most popular TV personality. He is African, dark-skinned and Muslim, and therefore acts as something of a role model for Africans who see no other Muslims of colour as leading prime-time TV personalities.
During 2016, the issue of Africans in Australia continued to grow in the public mind, especially where conflict or crime appeared to be rising. The issues became highly contentious, with traditional mainstream Australian commentators increasing their rhetorical clashes, and young Africans looking for ways to push back against the racism and xenophobia.
One example given by the group was of public “fake news” in which a non-African woman claimed that her baby had been seized by an African man, when it turned out she had actually killed the infant herself. The group swapped stories about the alleged murder, and the slanders of Africans that had erupted on Facebook in its wake. One response to the defamations was quoted in the group, describing the reaction of an African woman discussing the man who had been accused. She said that the woman should be charged with defamation, as her comments were so different from the reality of African men, and were a slander on all African men.
Another example of successful pushback referred to a 2005 case of a university professor who campaigned against African refugees; he was eventually forced to resign (Dick 2005). Every member of the group was able to share experiences of public online racism against Africans, including the 2007 false criticism of Somalis and Sudanese by the then conservative Australian Immigration Minister Kevin Andrews (Jakubowicz 2010).
Pushback, unfortunately, has another side to it—the issue of the power relationships between racists and targets. The angrier the targets become, the more successful perpetrators feel they have been (Anglin 2016). Discussing whether he would respond to racism online, one group member said:
It depends on how angry I get. Sometimes I just don’t want to give them that power over me, over my emotions. I’m stubborn that way. I don’t want them to be able to see the impact they’ve had on me. But sometimes I have, I admit, made a statement to say “this is what I think”, but on my terms rather than in responding to something.
The most angering aspects are the anonymity possible on the Internet, the rapidity of comments and their short lifespan on screen as they are soon overwhelmed by later posts. The participant suggested it is better to use social media to bring about change “to create something real and approach people with respect.”
Exposure to online racism is not a new phenomenon solely associated with contemporary social media. One group member recalled a time when he was a schoolboy participating in an ICQ forum (an instant messaging program). When the group discovered he was African, they turned on him, calling him a monkey and hurling derogatory remarks. Some tried to defend him, but he became so upset he logged out of the group. “To be attacked in that way was horrible. That was probably the first time.” Yet, advances in technology have only intensified the dangers—with Facebook Live allowing real-time abuse and hostilities between people to be aired widely.
One group member, a community worker, said that she took action when hostilities and fights between Tongan and African girls were videoed and aired on Facebook. She reported it to Facebook, which took it down, as the gratuitous violence clearly breached the platform’s rules, and bullying is banned.
Alternatively, one Congolese member spoke of the value to be found in a closed Facebook group designed for young Congolese only. Importantly, these forums keep people in touch transnationally, wherever the diasporic flow of Congolese or other Africans has taken them. Moreover, they also serve to build resilience by affirming African qualities that are undermined in the wider social media.
Young people enjoy that space to express themselves because they are very active on social media. They can tweet when they play soccer, when they play basketball, they can share their videos and make comments.
Another participant commented on a particular social media channel they followed for its positive messaging.
There’s an Instagram account that I deliberately follow. It’s a magazine that deliberately includes people with darker skin. All their models are dark skinned and it talks about celebrating melanin and loving all the different skin shades. It is also against the skin bleaching and shares positive messages about the beauty of dark skin.
And yet, the awareness of racism being “everywhere on the Internet” persists. “And sometimes it’s the subtle ones that get me as well…. Because white people are unconscious of their racism,” one woman concluded.
Australia’s Middle Eastern Community
Under the categories used by the Australian Bureau of Statistics, the Middle East is included in a region that also encompasses the Arabic-speaking countries of North Africa. The Middle East stands as a proxy for overlapping but importantly different ways of identifying populations—it includes Muslims from the region who are not Arabs (e.g., Turks and Iranians), Arabs from the region who are not Muslims (e.g., Eastern Catholic, Orthodox and Coptic Christians) and also Jews who were born in Israel.
At the 2011 Census there were some 220,000 people born in the countries of the Middle East, up from 184,000 in 2006 (Australian Government 2014). The major sources were Lebanon (76,000), Iraq (48,000), Iran (34,000) and Turkey (33,000). A majority of the Lebanese are Christian, while minorities from the other countries include Baha’i, Assyrian Christians, Yazidis and Alawites.
One of the important distinctions made about race hate speech refers to whether religion per se can be included in an analysis of racism, given that faith is a choice and race is not. In the Australian context at the national level, religion is not a protected characteristic in racial vilification legislation. In order to explore what the relationship might be between religion and “race” or ethnic appearance and culture, we created a focus group drawn from people of Middle Eastern background. In a number of Australian police jurisdictions, “Middle Eastern appearance” is used as an identifier for alleged criminal perpetrators. This label has caused widespread distress, given its highly stereotypical implications and its impact on anyone who might fit such an imagined phenotype. In Sydney, this has been a particular issue where the NSW Police Middle Eastern Organised Crime Squad (MEOCS) has been blamed for stigmatising whole communities who might fit the label (Australian Arabic Council 2003).
One particular and difficult issue has been the additional tension created for the Christian and Muslim Lebanese communities, where some Christians have argued the label really should be dropped as the criminals are more likely to be Muslim, yet the label also gets attached to their supposedly more law-abiding community. Our focus group contained three Syrians (all Muslim) and three Lebanese (all Christian); the majority of the group was female, as gender has been noted as a key dimension of online racism (Ellemers and Barreto 2015).
All members of the group had Facebook accounts, with one having Twitter and another YouTube. All used Facebook to access news and used satellite TV for international news. Most used their Facebook pages to follow the news feeds of friends and maybe connect with friends. A central focus for all participants was the politics of the Middle East, including the Palestine-Israel conflict, an issue of significant controversy and conflict online. They all noted that their engagement with politics online has become more muted as interactions have become angrier. One participant also noted that any undue interest in these issues, especially accompanied by a desire to go to the area, could be identified by security officials.
In this group, the emphasis, however, was on communicating ideas about peace and unity, rather than difference and conflict. One Lebanese woman noted:
Yes, I share things online, like a wise saying or something about religion and how it should, is meant, to unite people, not separate or incite them against each other.
Another Lebanese woman commented:
I avoid or ignore things that are racist or about discriminating against someone else. But if something is talking about harmony and peace, I will share it.
One of the Syrian women reflected on how she tried to avoid entering into online arguments about her points of view, but would “like” things in order to signal her position:
People might “like” something but not necessarily want to engage. I think I tend to do that now. If I agree with something, I’ll hit “like”, but I don’t comment because I expect someone to respond, and I don’t really want to have that.
One example discussed was the emergence of Syrian Christian sites, which the Syrian Muslim women saw as anti-Muslim. “I found it very degrading, very humiliating,” one commented. She also referred to her horror at another anti-Muslim site she had seen:
They put up a lot of pictures, like an Arab having sex with a camel, and they say, “This is really what happens in Islam.”
A Lebanese Christian woman said this was “Disgusting!” and the Lebanese male responded:
At the beginning of the war in Syria there was a site: “Fuck Assad”… and it was so disgusting. It encouraged people from other Muslim religions to abuse a particular sect [not identified—could have been Alawi, or Syrian orthodox or Catholic] … I was one of the many people involved in a petition to close it down but it actually stayed up for a long time.
The Lebanese man, who works for a Muslim Organisation, continued:
To be honest, I’ve seen some terrible stuff online, and I’ve complained a lot. But I notice on Facebook they just don’t take them off, even when you complain. I don’t understand the censorship [rules] that they have, and yet they took down a woman feeding her baby because she showed her boobs! So I don’t understand how that works. … Because I work for a Muslim organisation and I actually do workshops on Islam, people post me very anti-Islamic, very shocking stuff because they know I’m not Muslim. I unfriend such people but sometimes, I don’t know whether the people repost on their pages or share things, because I get people making comments and I don’t know who they are.
The Syrian Muslim women had all encountered anti-Muslim hate speech, and their strategy was mainly to ignore it because of bad past experiences of responding.
I’ve just commented, “Are you serious?” or something like that, but then it keeps going on and on. That’s why I don’t do it anymore because there’s no point.
When asked about their attitudes to the balance between freedom from hate and freedom of speech, the Syrian women were critical of Australian laws that allowed supporters of ISIS to rally freely and, as they saw it, appear on television without constraint. The group agreed that censorship of Facebook was needed, though they disagreed on how this should happen. The youngest Syrian woman, a student, noted:
You can’t permanently remove something, it’s always going to come back. So, something like this with Facebook even when you do remove it, it’s not stopping people from posting the same thing next time, and next time, and next time…
However, in response:
I know Facebook blocked anyone sending photos of Osama, but by the time it’s taken down, thousands of people have already seen it and can share it themselves. Facebook’s not going to block 100,000 people for sharing a post.
One woman, who had previously been active on Facebook posting information about the Syrian conflict, reflected:
I think I used to get up and talk about a lot of this stuff, and then I just backed away because anytime you mentioned something you’re straight away put into a group about it.
Their friendship groups online reflected the different life histories of the groups—the Syrian women had very mixed circles of friends, some drawn from school days, others through work. The Lebanese tended to have more of their close circle from their own community. Even so, they recognise the Internet as a dangerous place:
I get scared of saying the wrong thing to someone on Facebook, because I’m scared for myself and my family. I’m very, very cautious of what to say and try and play along with everybody so as not to upset anyone.
While community hostility is one factor, government surveillance can also lead to fear.
I have a relative who posts about Palestine, she is against the Israelis and against Islamic things. I’m always careful not to respond to her and she has messaged me so many times saying, “You never messaged me back or shared my posts.” I say, “Either I didn’t see them or I don’t like them,” to her because I’m really concerned and worried living in Australia. You don’t know whether the ASIO [Australian Security Intelligence Organisation] will come.
Anonymity has its “rewards”:
I don’t wear the hijab, but I am Muslim, so people don’t immediately know I’m Muslim. The amount of anti-Muslim things that have been said and shared with me, because people don’t know I’m Muslim, is incredible… It’s kind of like this silent racism.
But the opposite also occurs:
People think I’m Muslim because I work in a Muslim agency and run sessions on Islam. I get Muslims coming up to me saying, “Come on sister, let’s go. These people are no good, we’ll take you out for coffee”, so I get exactly the opposite. I think, actually, that people hide behind social media.
How then does the religion versus race perspective appear to the group members? One Lebanese woman contributed:
I actually think that yes, there is racism or discrimination against Islam, but I think it’s more a discrimination against Arabs, not all Muslims. I don’t hear comments about Afghani or Dari or even Turks in the same way as I do about Arabs. I think it’s discrimination against Arabs. It’s an anti-Arab campaign.
A Syrian woman continued:
So, once you say Muslim, it’ll become an Arab automatically…. They’re not educated enough. As soon as you say Muslim, “Oh, he’s a black Arab.” Or say an Arab, “Oh, he’s a Muslim.” But is it ignorance, or something else?
A Lebanese woman noted:
I don’t think that’s ignorance; I don’t think it’s uneducated. It’s more that they know what they are doing.
People do choose spaces in which they feel safe. On issues such as Israel/Palestine, group members chose pages where they expected people to have views similar to their own. When on pages they encountered “friends” who did not share their views, they would simply avoid engaging. However, the sense of the surveillance society still intrudes. Says one: “But where is the safe place if the government monitors everything?”
Personally, I get very scared of voicing my opinion. I’m very, very careful. I’m scared that ASIO will knock on my door. There are many, many people like myself. I would ignore things rather than voice my real opinion.
This sense of wariness seems particularly acute among communities associated with the Middle East conflict:
I’ve told people in my family that they probably shouldn’t put certain things on Facebook. I haven’t said anything online, but I’ve told them [face to face]. I’ve done it to my sister. I’ve said, “I really don’t think you should be posting that sort of stuff on your Facebook. You work in the public sector. If someone sees that screenshot it will affect your job.”
Even with as experienced a set of social media users as this group, knowledge about the availability of external advocacy and support on cyber racism was limited. There was some awareness of government anti-discrimination bodies. In one interchange, comments were made about the Australian Arabic Council, which one member believed “the Jewish shut down.” Following on this theme, the group discussed the difference between the Jewish Israelis versus the Arabs in terms of “our ability to organise and stay together.” One Syrian woman noted: “they tend to be very collective, very nationalistic; whereas the Arabs, for some reason, are in this divide and conquer [mode] all the time, right?”
During the Lindt café siege in Sydney in December 2014 (Semmler 2014), the Internet was alive with posts about Muslims, especially when it was revealed that the perpetrator had declared himself a follower of Daesh (ISIS). One post a year later proposed that in retribution Muslims should be locked into mosques and burnt to death (Mannix 2015). A number of the focus group members had complained to Facebook about this, demanding the video post be taken down. Facebook assessed the complaints but did not take down the post. It was only after an online petition was launched and attracted 20,000 signatures that Facebook acted.
Yet, such responses have a downside according to one participant: “Don’t feed the fire! With something like that it’s great that Facebook pulled it down, but if 20,000 people signed it, 20,000 people watched it.” The woman who had posted and “liked” the video from Sweden was in fact charged under section 25 of the Victorian Racial and Religious Tolerance Act 2001. Ultimately, the case did not proceed.
Talking of anti-Arab sentiment, one of the Syrian women said:
I feel like the only time Arabs are going to get a break, is when the next group of migrants come in from somewhere completely different and everyone gets over the Arabs like they got over the Wogs. For me it’s almost like it’s just sit and wait. Someone else will come and it’ll be their turn.
Australia’s Chinese Community
Australia’s ethnic Chinese communities are drawn from across the diaspora in Asia—the mainland People’s Republic, the Taiwanese Republic, Vietnam, Malaysia, Indonesia, Singapore, Hong Kong and elsewhere (Australian Bureau of Statistics 2013). While some 320,000 were born in mainland China, there may well be 100,000 more when all possible permutations of what it means to be Chinese are taken into account (Jakubowicz 2011).
Unlike people from Africa and the Middle East, most Chinese in Australia have not been refugees, apart from the 40,000 accepted in the wake of the Tiananmen Square violence and its aftermath in 1989 and 1990. The past generation of Chinese immigrants has been far more fashioned by nationalism and racial identity politics in China than by either Communist or anti-Communist ideologies and struggles. The focus group of five (three men, two women) spoke either Mandarin or Cantonese as their first language and were aged from 18 to 38 years. All had their own Facebook account. Most also had Instagram accounts and Twitter, which they rarely used, while two of the men had Reddit accounts. Facebook is used primarily for news, to keep up with friends or to find news that the Australian media do not broadcast. However, for at least one member, Facebook is used for staying in contact with Australian friends and clients, while Weibo and WeChat are primarily used in their interactions with Chinese friends, family and for culturally relevant information. Another uses Twitter to commend writers whose ideas he likes, while another uses MSN and also the Chinese international students’ site “6park.”
There was considerable reticence in the group about posting anything controversial. For example, during the Australian Federal election of mid-2016, an evangelical Christian Chinese group began a campaign on WeChat to bolster support for the conservative Liberal Party, using communal fears of gay marriage and sex education in schools to raise the salience of the election choices (Deare 2016).
I have a friend, not that close, who did share something that was sensitive about two weeks before the election. She posted an article saying the Liberals were closer to the Chinese, vote for the Liberals and that Labor was not good…. After a few days, she posted another article saying that the Labor was going to have a gay school or something like that. It was really weird and I was very uncomfortable.
Much of the racist material this group encountered online they identified as being about American Blacks and Muslims, far less about Chinese. However, bursts of anti-Chinese chatter do occur.
You get things like: “You people like eating dogs” which was a big thing a few months back. Then you see a lot of memes of Chinese people in America or Australia letting their two-year-olds go to the bathroom in the public part of a store. Or you see memes where Chinese people go to buffets and take a whole plate of lobsters. The racism towards Chinese people isn’t like “oh my god, they are getting killed or they are terrorists”. It’s “oh my god, what are they doing! Look how uncultured they are”. That’s the kind of racism I have seen and I don’t think it’s ever taken that seriously because it all gets turned into jokes… I think what Muslims or Black people go through is way more serious than what Chinese people go through.
Real-world experiences of racism may be fed back into the community through WeChat. One member told of her partner who had gone fishing, and how his friends were harassed by inspectors over their licenses, while the Australians nearby were not. He posted this story to WeChat, where it entered a long line of posts about unfair treatment of Chinese.
The group also discussed whether they felt the racism they encountered was intentional.
If I see someone make a racist comment, … It’s not something I take personally, it’s just something I chalk up to ignorance rather than direct hate. I think it comes down to ignorance and the misunderstanding of culture rather than someone actively trying to anger me by insulting my race or culture. I think there is a very small minority that is actively trying to get a reaction by being racist.
Overall, the group tended to disregard offensive material—they distinguish between what is “objectively racist” and their reaction, “Whether you take offense to it is a different issue.”
Is there some truth, they asked, in stereotypes? One group member identified a number of stereotypes prevalent on the Internet about Chinese in Australia. Their poor driving records, their purchase of apartments driving up prices, their purchase of baby formula to send back to China emptying stores of product and their queuing up to buy the first iPhones for dealers to send back to China were all commonplace. One member noted that he knew that aspects of these stereotypes had validity, and while they did not apply to the majority of Chinese people, they seemed valid for some. So he argued that they should just be ignored. On the other hand, he was adamant about racism:
Although I am not African American, I feel their pain more than anything else… [talking of Black Americans being shot by police]. For me, that racism is causing more problems than the Chinese stereotypes.
Discussing their common experiences of racism online, a number of group members noted some real-life experiences of being harassed, but little direct online harassment personally. They would encounter occasional general anti-Chinese comments or memes, but they had not been personally targeted. Perpetrators, they thought, were people who “had too much pressure in their lives. Maybe they want to release this anger by posting racist things online, thinking they can say whatever they want.”
Online they tend to stick to their own friendship or family groups, so they can be protected from exposure to material they find offensive. Sometimes though, on Facebook, there may be pages where strangers can enter and post offensive material, but even then it is rarely targeted personally, and they agreed it was usually the consequence of ignorance. They expressed a preference for WeChat where participants were all family, or friends or friends of friends, a much safer place than Facebook. On Weibo, because of Chinese government supervision, “no one posts negative things … anything bad will be deleted within one or two minutes.”
Among the Asian communities, different national groups use different apps—WeChat and Weibo for the Chinese, while the Koreans use KakaoTalk, and the Japanese and Thai use Line. One group member says: “You can ping people nearby on WeChat, so you can connect with them. It’s more inside the community… whereas for Facebook and Instagram, they are so public.”
The group members agreed that they felt racism online was not “as bad” as it had been in the past. It appears that their experience of online racism has declined because their behaviour online has adjusted—they set access to “private,” they avoid public pages or offering their opinion where it might trigger an undesired reaction from anonymous posters and they ignore casual slurs that do not touch them personally. Moreover, they are far less the targets of racism than in the past, as the focus of online racist predators has shifted to other groups, especially Muslims.
Australia’s Jewish Community
The Jewish community has come under increasing pressure on the Internet as conflicts in the Middle East have intensified and opponents of Israel have spread their targets to Jews in general, while, at the same time, neo-Nazi and White power groups have intensified their attacks on Jews as race traitors and destroyers of the White race (Anglin 2016). The Australian Jewish community numbers about 110,000, with a pre-war segment, a post-Holocaust refugee segment and more recent waves of immigrants from Russia, South Africa and Israel (Graham 2014).
The Jewish focus group comprised three men and three women, half aged under 27, half over 45 years, from many different national backgrounds. All used Facebook, and most had some other accounts—Instagram, LinkedIn and Hot Copper among those listed. They ranged from individuals who visited Facebook many times a day, to those who preferred to mainly communicate using email. Apart from friend and family links, they all follow a variety of news pages on Facebook. Some also follow Reddit and Buzzfeed for the news “without getting all too heavy.”
Unlike most of the other groups, some members were avid readers of the online version of news and the comments attached. The comments were described as “quite heated,” where people were often reluctant to comment because of fear of exposure. “In terms of open debate out there, I don’t think it properly exists… it just puts me off wanting to get involved with strangers.” They describe the comments lists as increasingly places for people to attack each other and “call each other idiots.”
However, the group was aware of the political nature of social media. One recalled at school being told in class that everyone should go to a particular anti-Zionist page and report it to Facebook, so it would be taken down. “I’ve seen racist pages that have a very small group of vocal people following,” suggesting that such conflicts occur among very committed activists from different political positions, rather than mass engagement from the wider society. By implication, people who engage there do so knowing they will be entering a space of torrid interactions. On the other hand, one member reported her daughter being in a University Facebook group where, during a Palestine–Israel conflict, the language became very heated and aggressive among Muslim and Jewish students until one of the students called “time out” to allow tempers to cool.
Another woman referred to the rising tensions in Europe being caused by Syrian refugee immigration, intensified by the Daesh conflict. She linked that into how the Internet had become increasingly intense on issues about race and religion, specifically about Muslims in Australia. She identified the controversy around local television personality Sonia Kruger who had publicised her views condemning Muslim immigration to Australia, and the associated rise of the One Nation party and the election of its anti-Muslim Senators led by Pauline Hanson (Jakubowicz 2016). She reflected that the extent of social media attention on the refugee crisis and its Australian implications had increased dramatically, with the language and the arguments getting more intense. “It’s getting a bit scary and has definitely increased over the last few years.”
People who did get involved in political debates about world events reported that once the comments began to flow, attention wavered from the issue at hand to become personal. Sometimes they were attacked because of being Jewish and their opinions criticised from that perspective; in one case concerning political events in Europe, one group member said “it went out of control.” She described the follow-up: “People who had been involved in the discussion wrote to the person who had made the [specific] comment and said it was really, really inappropriate. So, he actually took it off a couple of days later.”
One of the men responded that the world of online interaction can be very brutal:
If you comment on something, you frequently expect to be attacked. I certainly found, at times, you get comments where they’re digging. I once made a very neutral comment on a news site about the Paris fight and started getting questions like, “oh, you’re obviously a Jewish B” and stuff like that. Then of course other people start joining in. I have a really thick skin, I went to a tough school and I’m used to environments where you have to stand up for yourself … [but] probably not a month goes by without seeing something where you actually think, “oh, hang on, that’s not good! If something is not done this could become the norm. Should I be upset about it? Should I be thinking this is unacceptable?”. It’s quite profound.
His comment points to the way in which norms of online civility are generated through what is accepted and what is rejected. Given the millions of people involved and the reluctance of many people to be hammered by opposing opinions, especially if the responses are racist or antisemitic, the norms are likely to drift towards being increasingly discourteous and personal.
Discussion moved to the online attack on Aboriginal Senator Nova Peris in May 2016 when she had been racially abused by a White man on Facebook. Although his name was on the post, he clearly expected to remain unidentified and not called to account (Killalea 2016). Initially, he claimed he had been the victim of a vicious cyber hack, but this defense soon foundered and he was convicted of a criminal offence. Commenting on the case, one woman noted that the anonymity and disinhibition offered by the Internet were readily apparent, with racism a deeper issue.
There’s a lot of stuff [in comment threads] that I don’t believe most people would actually say in person. When they are hiding behind a keyboard and screen they think they can say whatever they want. Peoples’ racist undercurrent surfaces frequently though. I think if you look at anything that has a few hundred comments (and I don’t mean something completely innocuous like a photo of food), if it’s anything political, by comment 50 it has become racist and goes in every direction. The more comments there are, the more racist it becomes.
This case led directly into discussions around freedom of speech. The group members supported freedom of speech as a value, one saying:
There’s a point at which making sure people aren’t hurt by the things that are said, becomes more important. I know there’s the argument that you don’t want to silence people just because someone is going to get offended, but there are, I feel, quite clear lines about where people are going to get offended.
Once more the issue of anonymity came into play, with one member arguing:
There should be a difference between having that cloak of invisibility as an anonymous blogger and actually using your name. If you are prepared to have your name and identity out there then you should have the freedom to say exactly what you want.
But what happens to those opinions? One member expressed a concern with the impact of hate speech on social cohesion; however, another thought that the real issue was how anything said online could be taken by “millions of publishers… and cast differently,” so now people are much less willing to say anything “because there isn’t really a freedom of speech.” Another came back supporting the right to say whatever people wish if they are identified, urging civility as a norm: “I think you should own what you say, and when you speak try to think about other people and not hurt their feelings.” In order to create a safe place, one group member has a private page where only friends can engage, and where freedom to speak one’s mind in a civil way is much more the norm.
We have already noted the complexity of the relationship between anti-Zionism, criticisms of Israel’s policies and antisemitism. This is not the place to rehearse that debate or make a judgement call, but only to note that perspectives on these issues are heated, and political struggles over where the line is between legitimate critique of Israeli policies and antisemitism remain a focus of concern within the community and are an element of global political contestation. Asked about their personal experiences of online antisemitism, the group identified that distinction as critical. One member said:
Most of the antisemitism I’m seeing is actually people who are anti-Zionist and they are calling out the Jews through that, which is not actually the right thing to do…. The comment section [on major international stories], are always racially driven and that’s invariably where you see antisemitic activity.
With the advent of the alt-right in the USA (Amend and Morgan 2017), attention has been focused on its attempt to formulate undetectable online methods to label Jews, Blacks, Muslims and other minorities (also discussed in Chap. 4 in more detail). Jews, for instance, are either identified by triple brackets (e.g., (((Jakubowicz)))) or with the label “Skype,” as these cannot be easily distinguished by Google’s anti-racist strategies and blocked. When the group began to discuss this emerging issue, one member became very emotional. He referred to how Jews try to successfully merge into the societies in which they live, and keep their religious beliefs personal, familial and communal. Being called out as Jewish and having his religious beliefs made public in a negative way brought back to him the horrors of antisemitic societies, such as Nazi Germany. A number of members agreed there was a communal norm that Jewish people would try not to draw attention to themselves in the wider society, challenged, however, by those who would call out antisemitism by wearing religious symbols and identifying very publicly with their culture and faith.
Much antisemitism occurs in unexpected quarters online, for instance, among Premier League football clubs in the UK. One group member, who emigrated from England a decade before, noted that there was intense racism and antisemitism in the discussion threads of the Premier League clubs, where Tottenham Hotspur was known as the Jewish club. “Jewish” clubs and their players were labelled as Jews as a form of hate. Similar insights were offered when discussion turned to cricket in Australia, where a Jewish player was often condemned as unsuitable to represent Australia even though he captained Western Australia. There was general agreement this was similar to the negative assessments Aboriginal footballers had to survive.
In discussing what individuals can do if they find themselves a target of antisemitic attacks, the group was not optimistic. One, who had been abused in broadcast emails at her work, had been very disheartened by the lack of avenues for recovery. Another, a lawyer, said that her advice was always the default—“Don’t bother. It’s a massive waste of time.” Responding, another woman said:
Facebook is the ultimate arbiter… it sits above the law, because it can dictate what you see and how it gets distributed to you. It’s essentially running the law in so many countries… Facebook should be able to name and shame … embarrassment, I think, is extremely powerful.
As with the other groups, the Jewish group was modifying its use of social media to provide safer spaces. Its members recognised that antisemitism was rife, probably unbreakable, and thus best dealt with by avoidance. Interestingly, no one mentioned Jewish community organisations or online anti-racism groups like the Online Hate Prevention Institute as sources of support. One mentioned using the Australian Human Rights Commission, with her case taking five years to reach resolution. The remarkable aspect was the pervasiveness of antisemitism as a discourse, not dissimilar to the sense of danger expressed especially by the Indigenous group.
Australia’s Muslim Community
Islam was present in Australia before European settlement, through Indonesian fishermen who traded with Indigenous people along the northern Australian coast. Thereafter, there has been regular contact with Muslims during the period of the exploration of Australia (Humphrey 1987). In 2011, about 470,000 people self-identified as Muslim.
Our Muslim group, specifically chosen from Islamic communities outside the Middle East, comprised African and Asian Muslims. Islamophobia has become a major area of concern on the Internet, with a number of Australian civil society organisations having been established to monitor and counter it. The group comprised three men and three women: three from Africa (Egypt, Somalia and Eritrea) and three from Asia (Bangladesh and Pakistan). They were aged between 24 and 37 years and are very active on social media. All have Facebook accounts, though one closed hers recently. Amongst them, they also have Snapchat, Tumblr and LinkedIn, and two have their own YouTube channels and blogs for their professional activities. Their primary source of news is online, with one member following key sources on Twitter and others using a range of sources including Al Jazeera, Russia Today and other country of origin online and satellite news services.
The woman who had given up Facebook did so following a period of racial abuse of her and her friends. She felt too exposed:
Facebook is about your profile, whereas for Twitter, I felt like I could disguise myself as someone else, or didn’t have to connect to other people…. I use Twitter as a reprieve. Facebook is very intense and serious, and brings a lot of anxiety in comparison. On Twitter I follow musicals, and Broadway stars.
However, one of the other Twitter users says the opposite:
I am there for political issues, and feel comfortable being able to communicate in that way. There is a lot of racism, because people have no shame…. and I would get slandered on there…. but that’s where I got the truth, specifically from people on the ground who would share what’s happening on the minute, every minute.
Another member used Tumblr to connect with a group of artists from her home region, enabling her to write. This writing grew into a creative blog and resulted in a supportive online community whose members interacted with each other.
Another participant cited the use of YouTube to publish an interview with an elderly Muslim revert, and then closed it. “The comments were brutal…. People have no respect…. We ended up deleting and disconnecting the comments.”
The group has very different attitudes to posting controversial material. Some will post stories about gay Muslims or Shia and Sunni partying together. Others feel these issues are too hard. “I am very sensitive. I either get really into my little box [i.e. withdraw] or I explode.” The same woman described a major internal fight she had in her community on Facebook, where she had been attacked for entering a Sikh temple, as she was a Muslim. Since then, after “my whole community pushed me aside, even if I want to say something, I don’t.”
Controversial issues about religion included an online debate over whether Muslims should participate in a government Iftar dinner, given the continuing incarceration of hundreds of Muslim asylum seekers by the government. One member said she was accused on Facebook of trying to divide Muslims when they were trying to fit in.
Another woman mentioned an occasion where she had been attacked on Facebook for wearing hijab, on a page devoted to herbs and nutrition, when she contested claims about nutrition she thought were incorrect. A more political group member told of how she had gradually closed off her Facebook page, so that the only people on it now were people she could trust and who had values similar to her own. Yet another demonstrates great persistence on Facebook. “So as a friend, and between friends of friends, I will comment until I’m actually deleted. That’s happened on only two occasions.”
One woman said she would not reply directly to offensive comments, “I don’t want to get into it because I don’t have enough experience to deal with it,” but did describe how she dealt with some material she found very offensive (an ISIS beheading and unpleasant comments) on Facebook by reporting it. Facebook deleted the material and blocked the commentator from posting to the page.
In discussing the strategies of responses to racist material, one member stressed the importance of philosophical calmness. Another described the escalation in an interaction that made her very angry when a friend’s partner who had appeared pro-Palestinian then proved himself to be very anti-Black.
I was having a conversation with him on something the Israelis were doing in Africa, then he shared that this is what happens when you enter these countries [Black African]. I tried to have a conversation with him about that, and he obviously got very angry…. When I get angry, I feel like I get nowhere. It just caused more problems.
As with the African focus group, the awareness of surveillance affects how members of the Muslim group use social media.
ASIO contacted me. I was doing community work with certain groups and I was very vocal on Facebook about how I felt….They got all my information from Facebook… so I just felt trapped and thought it was better to get off.
One of the other women offered a different perspective.
I now curate a Facebook page where there are people who are supportive of me and my cause. Previously, I would share things about women’s issues or queer Muslims, but people started calling me in the middle of the night to threaten me. It can really get to the point where it is very dangerous.
Thinking about how to handle these situations she continued:
Everyone I know has had a phase in their life where they’re having online debates until 3am, and having lots of anger. Then you move towards a more balanced way of responding to people… in a way that will actually get you what you want. I think anger is really important to begin with, I don’t think it’s a negative thing, but there needs to be a time when you move on from that and start discussions that are beneficial for the community.
The group recognised the anger reaction as a major challenge.
I go through those articles [in online conservative newspapers] to read what comments come up, and it is amazing what you see. The level of racism, you can’t even put a level to it! You just sit there and think, “do such people really exist in this world?” Which is scary, absolutely scary. It’s in my nature to neutralise or try to bring some understanding to some of these people, but what I’ve realised, is that they’ll go onto the next person who’s attacked or insulted them in some way, and you’ll be completely ignored.
What then is the value of freedom of speech? Several in the group felt that freedom of speech was not equally available—that Muslim communities were far more restricted in their freedom, while majority and White communities were far freer, especially in relation to racism against Muslims. Commenting on former Australian Prime Minister Tony Abbott’s criticism of Muslims, the Somali participant argued:
Freedom of speech is a tool of white supremacy, to keep the system the way it is…. We’re all operating under a white supremacist system. It wouldn’t make a difference whether you’re in England or France or Australia, you still turn the TV on and hear people talking about your religion, talking about your race. I’m a black Australian. I identify with the issues that Indigenous Australians face. Freedom of speech is definitely reserved for one area of the community, and it’s a way to silence a whole other area of the community.
The woman who had left Facebook continued on her theme that she had been exposed to harassment on Facebook, and in the minds of the authorities she was associated with more dangerous people, which made her community work more difficult. “What I was doing for the community ended up being questioned, but all the people saying all this hatred stuff were never questioned.”
She discussed a friend, an Indigenous Muslim woman, who was blogging about her faith. Her friend also wanted to quit Facebook because of the hatred and abuse she experienced, but found it hard to leave because she also had many supporters:
Her whole intention was to get people to think, and to get people to understand what’s going on, and be a voice for people who weren’t going out and speaking. But yet it was tearing her apart.
Another woman interjected: “There are some, especially men, online that are incredibly violent and incredibly hateful in the way that they write things.”
While Facebook was the main arena for this discussion, one member pointed to the comments lists on online news sites (in this case MSN).
The kind of hate I’ve seen there is scary. You don’t want to step in, because once you do, you know for a fact it’s not going to end. … I might have someone inbox message me and say eff off, you Muslim, it’s happened before. … When I come off social media for a while, I find inner peace. I’ve got more time to myself…. When I go back on it’s overwhelming.
The discussion moved to the issue of religion or race, and its role in building community online. The Somali woman has a global friendship network, the majority of whom are not Muslim, but are Black.
I feel like I have more in common with the black community around the world than with Muslims. I don’t really like Muslims online, a lot of times.
However, for people who were less willing to get into conflict, Facebook serves as a way of creating communities of like-minded people and excluding people who might not agree or would be offended.
I have Muslim and Christian Facebook friends, but when two Jewish people approached me to be friends I said no. I am very anti-Israel and pro-Palestine … I didn’t want them because I post stuff that is anti-Israel, anti-Zionist. So I don’t want to offend friends in public.
The more activist member said:
Because I use my Facebook page primarily for political views, I don’t care if I offend you. If this is something that I feel is right … I don’t have a problem having a discussion.
She had been “boxed” (messaged to her inbox) by a Muslim Somali woman who said she had found a recent post incredibly offensive. This conversation happened via email, as they argued back and forth, rather than in the open on Facebook.
One example demonstrates the interaction between the online and offline worlds. One member, a Pakistani community worker, was leading a group project comprising Indian men. She was harassed online by activists from the Pakistani Muslim community who believed she should not interact with Indians or men. Her husband was also harassed. She completed the project but then withdrew from community work. “I was really worried. I have got young kids, and at this age they are vulnerable…How can I fight? What am I fighting for? Is it worth it?”
Another woman was abused because her Facebook profile image showed her with hijab, but she was seen without one in the real world. She was castigated, because she was not “wrapped” like other Muslims.
They were his exact words. You’re not wrapped like the other Muslims. He said incredibly demeaning and borderline violent things about who I am as a person, and my character.
Yet, she also said the real world is where the real danger lurks:
Real life Islamophobia and racism is probably the scariest thing I’ve ever experienced in my life…. I was going to my sister’s house with a bag, and three guys on the train said to me, “Oh my god, she has a bomb in the bag.” No one on the train said anything…. It’s so much scarier. Especially when it comes from men that look like they would act out the violence that they’re threatening.
Individuals in this group joined various online groups that provided safe spaces to discuss issues such as race in Australia, women of colour, activism and self-care. Close groups are formed when more open discussions generate feelings of threat or experiences of harassment and abuse. The Islamophobia Watch list was named as a community actor, but not as a site that could protect people. “Nothing protects us,” added one woman. “Once you step into the online world, there is nothing to guard you. You are there to guard yourself, simple.”
There was a desire, though, to see the Human Rights Commission be given more active powers, especially in relation to online bullying:
To set up guidelines for Facebook, so that Facebook is not setting the laws for itself. A moderator on Facebook telling me what I have reported is not racist enough for it to be taken down isn’t good enough.
What Do the Target Communities Share?
As we can see from these discussions, the Internet has become a dangerous and treacherous place, where proactive engagement in public spaces will almost always trigger an attack from some predator looking for the opportunity. It varies, group to group, as to what they perceive as the direct threats and how they develop defense strategies; nevertheless, there are commonalities. All groups think of the dangers of racism as having at least two sources. The first, and less common, comprises organised, sustained attacks using Facebook, Twitter, Instagram and myriad other platforms, by groups for whom landing racist punches on their targets is the primary aim. The second and more common experience is casual racism. This emanates from individuals who are angry about cultural, racial or religious difference and express this rage by finding random targets they can irritate, enrage and intimidate.
While for people of colour, much of the source lies in the White communities, there is also evidence of intergroup racism and harassment. Whichever source generates specific experiences, the responses appear to be consistent with each other. Most people do not want to be harassed or intimidated, so they self-censor online in order to avoid provoking an attack, even if this undermines their right to freedom of speech. On issues they believe require strong advocacy, some will put themselves “out there,” hardening their digital skins to the blows that may soon be landed. Sometimes this has proven valuable, where online conversations that initially proved difficult have gradually resolved into more useful explorations of issues from different perspectives.
Commencing with “race” as the primary concept, the Leximancer concept map demonstrated the links between this core concept and the other concepts tied to it through their shared presence in the field. The group members felt angry and abused by their exposure to racism. Moreover, they reiterated that these emotions—anger generating anger—consistently permeated their awareness of hate speech. Often, they would seek to withdraw from engagement, yet were engulfed by the associated emotions and labels.
The experience of targets demonstrates once more the complexity of the universe that has been created by the Internet, and the level of energy and activity that now fills it. It seems impossible to enter that universe and not encounter racism, so there are three implications, which we explore in later chapters. Firstly, individuals join or find communities as a protection, with each community evolving its own (somewhat similar) practices to enable resilience. Secondly, the platforms are coming under increasing pressure to ensure their spaces are safer to navigate and they take more proactive initiatives to help create these spaces, lifting, where possible, the responsibility from the hands of the targets themselves. Thirdly, the regulators are being forced to engage with the platforms to try to ensure civility, even though more libertarian pressures are trying to release constraints on online behaviour.
In the next section we explore how two such “communities of targets” addressed the threats they experienced.
Faith, Hate and Fear in Online Australia
There are similarities between the actions taken by the NSW Jewish Board of Deputies when antisemitic content was posted on a popular high school website in late 2009, and the response of the Muslim Women’s Association when a hate group posted a photo of a Muslim woman on public transport in Sydney in 2014 and then vilified her. Both organisations, two of several representing the Jewish and Muslim faith communities in Australia, played proactive roles in combatting the online vilification of their communities. By tracing the extent and nature of how these organisations countered vilification on the Internet, we can identify the processes through which a resilient practice can be formed by groups targeted by hate.
As a complex, global, electronically powered communication environment that supports access through a range of mechanisms, the Internet embodies the notion of a “networked information environment” (Savolainen 2002, p. 11). The vast majority of organisations and institutions now have an Internet presence, including many operated by faith communities. With the Internet a transborder phenomenon, information and communication moves at lightning speed across the globe. As global faiths, both Judaism and Islam are embedded in such transnational networks and open to whatever marauding harassments discover their local presence.
As already discussed, racism covers the use of terms reflecting claims about the superiority or inferiority of groups defined by their race (in the vernacular sense), ethnicity, skin colour, descent, national origin and immigrant (i.e., non-nativist) status. Racism, for some purposes, covers claims about groups defined by religion where it has similar connotations, such as the NSW discussion of ethno-religious discrimination or the Victorian specification of religion as a protected category within racial vilification laws. Racial discrimination, the practice of excluding, distinguishing, restricting or preferring individuals or groups based on race, colour, national or ethnic origin, can, to all intents and purposes in these two jurisdictions, be regulated by civil law. The International Convention on the Elimination of All Forms of Racial Discrimination refers to such practices as intended to impair recognition of, or access to, human rights and freedoms on an equal footing. The case studies we will soon describe point to the complexity and contradictory nature of Internet regulation. Under the Commonwealth civil law (the RDA), for example, religion is not protected. However, Islam is protected in Victoria, but not in NSW. Jewish people are protected as a "race" at the Commonwealth level, and as an ethno-religious group in NSW, as are Sikhs, but not Muslims. In neither case is the protection for religious believers a proxy for blasphemy, which is a protection for belief and no longer sits on the Australian statute books.
Racial vilification covers material that is likely to affect a member of a targeted community or category of people through (in increasing order of seriousness) offending, insulting, humiliating, intimidating and advocating the expulsion from the country of, or advocating violence against, that person. In different Australian jurisdictions, these may be civil or criminal offences. Conversely, we use the phrase "counter-racism" to refer to the development and application of strategies designed to resist, divert, diminish or reverse the processes of racist harassment and vilification.
The use of online media has made it possible for religious communities to create an active e-public sphere of discussion about issues of faith, practice and interfaith relationships. This discourse and the people who engage in it extend beyond conservative or traditional approaches and interpretations of religion. As it is based on electronic instead of face-to-face communication, contributors are able to overcome theological objections to their participation in public life because of their gender or cultural/political considerations. Jewish communities have, for example, found opportunities to address the Middle East conflict with Arab youth (Yablon 2007), though often these topics trigger "flame wars"—a lengthy to-and-fro of angry and abusive messages between users of an online forum. Communities of Orthodox and Haredi Jews have been found to use the Internet in ways that spanned religious, communal, personal and educational purposes, including the maintaining of websites for theological and social information, while the online presence of these communities was found to reflect the breadth of Jewish religious diversity (Lerner 2009). The negotiated use of the Internet has also been observed among female members of ultra-Orthodox Jewish communities, for whom the medium is officially frowned upon as a carrier of secular values (Livio and Weinblatt 2007).
The Australian media’s coverage of Islam and Muslims has been described as inherently biased and reliant on stereotypes and hysteria (Kabir 2006). The responses of Muslims to this coverage have been varied. Some Australian Muslims have called for long-term engagement with the mainstream media, while some have rejected the idea of any cooperation with an industry they view as anti-Islamic. Others support the notion of alternative or independent Muslim-run or Muslim-focused media in which alternative discourses about Islam and its role in modern Australia can be engaged in (Bahfen and Wake 2011). Aly (2007) has looked at the ways in which Australian Muslims both engage in and refute the dominant narratives related to Islam found in mainstream media. For British Muslims, the Internet quickly became an important communication tool for the expression of Islamic identity (Bunt 1999). Members of young Muslim minorities in Western, non-Muslim countries such as the USA use the Internet to engage in the formation of Islamic community and identity on the basis of visibility, individual choice, transnationalism and social ethics (Schmidt 2004). Young people in both Muslim majority countries and members of Muslim diasporas are adept at Internet communication and use and mediate their religion online in the process of obtaining both spiritual and material aims (Echchaibi 2009), such as using the Internet in overcoming geographical boundaries and cultural barriers when seeking a spouse (Lo and Aziz 2009). It is not yet clear what the intersection of Islamic faith communities and the Internet will produce in terms of new Muslim identities, relationships or qualitative experiences—or even if these things will be occurring at all, given that the migration of Muslims online might serve to merely reflect existing offline Islamic cultures and practices (Bunt 2004).
However, online media serve to illustrate the wide range of religious expression that can be found in Muslim and Jewish faith communities (Campbell 2004). Because the majority of racist activity by far-right organisations can be found online (Loewenstein 2014), cyber racism represents a key new front in a socio-cultural ideology war. Efforts to strengthen the communities who are victims of such activity need to be concentrated in the area of engagement online as well as offline. Faith communities can engage in discourse about cyber racism and online hate, particularly where these pertain to attacks on their communities, and be supported in the process of obtaining the required levels of cultural literacy that will permit such engagement (Islamaphobia online 2016). Pushback by communities against hate speech and vilification has an important role to play in resisting being pressed into a position of victimhood, and will avoid more dramatic and dangerous actions in the real world. Our research suggests that affirming social identity and strengthening cultural capital in the face of online hate is difficult or impossible without a community involving itself in the process of developing resilience to cyber racism.
Case Studies: Response and Resilience
The approach taken to analyse the two examples here is an inductive approach, using a socio-culturalist perspective. In this approach, the functions of the Internet in social life are emphasised, and social factors are acknowledged as an influence on the production and reception of content found online. It is, therefore, suited to the nature of the research, which explores how two religious community organisations reacted to, and attempted to combat, racism as experienced by members of their faith groups online. The theory this research attempts to generate concerns how resilience by religious communities in the context of online racism might be defined and implemented.
The qualitative methodology selected for this project was textual analysis, which is used to evaluate how the faith groups reacted or responded to the racism directed at their members. Textual analysis was chosen because of its potential to explore the ways in which these responses construct and represent their communities. It is a methodology that provides depth of analysis crucial for inductive qualitative research and allows for an understanding of the role played by online content in cultural construction.
Case Study 1: The NSW Jewish Board of Deputies’ Response to Online Antisemitism
In December 2009, postings on the popular high school students’ online forum, Bored of Studies, contained antisemitic remarks such as “Kill all Jews” and “I hope Iran nukes them big time,” in addition to posts that discussed the locations of Sydney synagogues and provided instructions on how to make Molotov cocktails (Edelson 2009). Following complaints about the violent and threatening nature of the posts to the publishers or owners of the site, the users who were responsible were banned from participating in the forum, and the offending content was removed.
The organisation that was instrumental in bringing the vilification to the attention of the forum’s administrators was the NSW Jewish Board of Deputies, the peak Jewish body in NSW. In an interview with the Jewish news and community site J-wire in 2009, CEO Vic Alhadeff explained that the motivation for attempting to get the content removed was an apprehension that such attitudes could be expressed on Web forums (which represented, at the time, a popular medium for discussing news and opinion), especially one that was primarily used by high school students. Such comments, he said, constituted racial hatred, which the Board of Deputies was concerned about whether it was directed at Jews or any other group (Ross 2009). Although he praised the actions of the website operators in quickly banning the offending users and removing the content, Alhadeff noted that preventative measures were preferable, such as tighter monitoring or controlling of the forum’s discussion space. “Taking action to moderate after the comments are posted is too little, too late, because the damage has been done” (Ross 2009: np).
The Muslim Women’s Association and an Anti-Islam Hate Group
In April 2014, several Australian media outlets reported on the intimidation and online vilification campaign run by the Australian Defence League (ADL), involving photographing and filming Muslim schools and Muslim women for the purpose of posting the content online in order to invite derogatory and abusive comments. The effect of the campaign extended beyond the women or schools targeted to encompass many members of the local Muslim community.
One Sydney-based Muslim woman described to a Guardian writer and prominent Jewish author the fears held by many within the community following the mainstream media reporting of the incidents. The ADL simply reacted by reposting the photo onto Facebook along with a status asking its members to take photos of random Muslim women to humiliate them online… “Hearing that definitely made me anxious. My commute home that day was very uncomfortable. I kept glancing around me, keeping an eye out for anyone who might be trying to snap a photo of me due to my hijab” (in Loewenstein 2014).
Students at a Muslim school that had been filmed said they felt threatened, while the principal alerted the Australian Security Intelligence Organisation (ASIO) and the police (Rubinsztein-Dunlop 2014). Following the Facebook posting of a photo, taken without her knowledge, of a mother of three children and the abuse and vilification of the victim online, the NSW police launched an investigation into a number of ADL members (Olding 2014), and the photo was taken down, although the group set up another of its myriad social media groups (Fleming 2014).
In interviews with Australian media outlets, one victim said she had to attend counselling following the cyberattacks, and expressed her fear of taking public transport as a consequence. She also indicated that she had reached out to the Muslim Women’s Association (MWA) for support.
According to Maha Abdo, the executive officer of the MWA, the organisation reacted on two fronts by “supporting the young woman and taking up this issue of cyber bullying and intimidation with the appropriate authorities.” Abdo explained that the issue of harm minimisation arising from the online vilification of the woman (and threats to blow up her child’s school) extended beyond the individual victims, adding that the welfare of the woman was not her organisation’s only focus. The MWA had to look at the impact of the attack on the women of the Muslim community in Australia as a whole.
The public humiliation and vilification on the basis of someone’s faith is not acceptable…It is defamatory and hurtful. We believe there is a real concern for the welfare of not just this young mother, and the fact that she is now too afraid to catch the train to get to work because it [the online stalking or harassment] might happen again. But we are worried about the other adverse effects this may have on other Muslim women in Australia. (Maha Abdo, September 2016, personal communication)
She described the actions of the hate group as an attempt to vilify “not only the woman herself, but also Australian Muslim women generally.” Abdo said that she believed the online harassment of the woman was not an issue of freedom of speech but of “security and safety…that is owed to us in a multicultural nation like Australia.”
The two case studies examined here illustrate that, given the rapid pace at which online communication evolves (from centralised spaces such as online forums to extremely decentralised individual content on social media), legislative and other approaches to combating hate online tend to play a game of catch-up, and that older frameworks for dealing with hate speech (such as the criminalisation and restriction of such speech or limits on its distribution) need adaptation. Here, we note the prime difference in the Australian contextual interpretation of free speech as a nation-state where no bill of rights exists and where free speech is only alluded to in the constitution (and even then, limited to political matters). Although Australia—like the USA—supports international treaties designed to broadly protect free speech, the extent to which free speech is privileged and accorded legal protection in Australia differs markedly from that in the USA (Gelber 2002).
Comparative studies of the US and European approaches to hate speech in the online sphere have concluded that some nation-states (such as France or Germany) (Akdeniz 2001) have more vigorously attempted to enforce laws and constraints, with varying degrees of success. These studies also found that legal discussion of such cases often rests on whether it is possible (given the globalised nature of online communication means) to determine where the hate speech took place or was disseminated, and therefore whether such actions were subject to the relevant laws of the nation-state (Nemes 2002; Timofeeva 2002; Douglas-Scott 1998).
The antisemitic hate in the first case study constituted an example of hate content on a website that was locally run, within a geographically and demographically defined target user base (students in their final year of secondary schooling in the Australian state of NSW). The group that brought the hate speech to the attention of the website operators could point to the specific state and national laws that such comments could have been demonstrated to have broken, had their initial approaches to the website operators been rebuffed.
By comparison, in the second case study, by the time the MWA attempted to support one of the victims of the ADL online hate campaign in 2014, the most popular methods of online communication had shifted from website forums to social media. Here, it is arguably much harder—though not impossible—to hold site operators to account over hate speech. Groups such as the ADL are constantly playing cat and mouse with social media networks in their attempt to establish a presence despite being banned, having pages deleted but constantly developing new ones. This indicates that working with social media operators to report users who engage in spreading hate works to an extent, despite sites such as Facebook operating on seemingly arbitrary definitions of what constitutes “hate speech” or incitement to violence. As Oboler (2008) points out, social network sites in the age of Web 2.0 are not prepared to deal with the deluge of cyber hate their platforms inadvertently paved the way for.
Three approaches emerge from these cases for communities targeted by online hate:
- Approach the platform responsible for publishing the hate online, such as website operators or social networks, irrespective of the chances of success. Most major social networks now have built-in reporting mechanisms as a result of public pressure and criticism;
- Document and report instances of cyber hate to the relevant authorities and to organisations that specialise in researching online hate groups, such as the Online Hate Prevention Institute or specific community-based groups; and
- Promote the accessibility of community organisations (such as the two involved in the case studies discussed here) that can lobby on behalf of victims and provide support to counter the negative emotional or psychological effects of hate and vilification online.
To properly constitute “resilience” by the victims of cyber hate, we argue that these approaches need to be combined and pursued in conjunction with community education about the threat of online hate. The antisemitic hate on the high school Web forum in the first case study took place against a backdrop of rising violence targeting Jews (Ross 2009), while the second case involving the online and offline harassment of Muslim women and schools occurred in the context of the well-documented spike in Islamophobia and attacks against Muslims in recent years (e.g., Bouma 2011; Ho 2007). One of the key skills required to use the Internet has now become how to identify, report and deflect hate speech in order to minimise its impact and build individual and group resilience.
The legislative framework controlling hate speech erects, as it were, a fence within which people rightfully feel they would be protected. In the case of cyber racism, however, this fence is neither well embedded nor well defended. In a situation where social media providers are reluctant to proactively police the posts on their platforms, and the state seeks to limit its intervention in the name of free speech, the onus shifts towards civil society. Civil society organisations therefore play a central role in combating this problem. They can collaborate with well-meaning providers to have the offending content removed, they can impel the state to use the law where it exists and they can engage with industry to act against hate speech. Also, critically, they can support victims to resist the negative impact that hate speech might have on their lives. They can help the community to identify race hate, understand its impact, deflect its trajectory, report its presence and recover from its pain.
The next chapters take up these themes as we explore questions of resilience, narrative and regulation.
- Akdeniz, Y. (2001). Case Analysis of League Against Racism and Antisemitism (LICRA), French Union of Jewish Students, v Yahoo! Inc. (USA), Yahoo France, Tribunale de Grande Instance de Paris, Interim Court Order, 20 November 2000. Electronic Business Law Reports, 1(3), 110–120.
- Amend, A., & Morgan, J. (2017). Breitbart Under Bannon: Breitbart’s Comment Section Reflects Alt-Right, Antisemitic Language. Hatewatch. https://www.splcenter.org/hatewatch/2017/02/21/breitbart-under-bannon-breitbarts-comment-section-reflects-alt-right-antisemitic-language. Accessed 23 Feb 2017.
- Anglin, A. (2016). A Normie’s Guide to the Alt-Right. The Daily Stormer. http://www.dailystormer.com/a-normies-guide-to-the-alt-right/. Accessed 15 Jan 2017.
- Australian Arabic Council. (2003). Ethnicity & Crime in NSW: Politics, Rhetoric & Ethnic Descriptors. http://www.aac.org.au/report1/. Accessed 30 Apr 2015.
- Australian Bureau of Statistics. (2008). ‘Census 2006 – People Born in Africa’, Commonwealth of Australia. http://www.abs.gov.au/AUSSTATS/abs@.nsf/Lookup/3416.0Main+Features32008. Accessed 26 Apr 2009.
- Australian Bureau of Statistics. (2012). 2011 Census Counts—Aboriginal and Torres Strait Islander Peoples. 2075.0—Census of Population and Housing—Counts of Aboriginal and Torres Strait Islander Australians, 2011. http://www.abs.gov.au/ausstats/abs@.nsf/Lookup/2075.0main+features32011
- Australian Bureau of Statistics. (2013). People in Australia Who Were Born in China. 2011 Quickstats Country of Birth, ABS. http://www.censusdata.abs.gov.au/census_services/getproduct/census/2011/quickstat/6101_0. Accessed 23 Sep 2016.
- Australian Government. (2014). The People of Australia: Statistics from the 2011 Census. Canberra: Department of Immigration and Border Protection. https://www.youtube.com/watch?v=OctRQoCodVw
- Bahfen, N., & Wake, A. (2011). Media Diversity Rules: Analysing the Talent Chosen by Student Radio Journalists Covering Islam. Pacific Journalism Review, 17(2), 92–108.
- Biroscak, B., Scott, J., Lindenberger, J., & Bryant, C. (2017). Leximancer Software as a Research Tool for Social Marketers: Application to a Content Analysis. Social Marketing Quarterly, 23(3), 223–231.
- Bunt, G. R. (2004). Religion Online: Finding Faith on the Internet. London/New York: Routledge.
- Deare, S. (2016, July 13). Federal Election: Same Sex Marriage and Gay Rights Targeted in How-to-Vote Pamphlets Written in Chinese. Northern District Times. Available at: http://www.dailytelegraph.com.au/newslocal/the-hills/federal-election-same-sex-marriage-and-gay-rights-targeted/news-story/4b930ea4141f2e310d95a333e52be62c
- Dick, T. (2005). Academic Stirs Fight Over Race. Sydney Morning Herald. Available at: http://www.smh.com.au/news/national/academic-stirs-fight-over-race/2005/07/15/1121429359329.html
- Douglas-Scott, S. (1998). Hatefulness of Protected Speech: A Comparison of the American and European Approaches. William and Mary Bill of Rights Journal, 7(2), 305–346.
- Edelson, D. (2009, December 31). Australian Youth Site: Kill All Jews. Y-net. http://www.ynetnews.com/articles/0,7340,L-3827855,00.html. Accessed 4 Jan 2017.
- Fleming, A. (2014, January 29). Who Are the Australian Defence League? New Matilda. https://newmatilda.com/2014/01/29/who-are-australian-defence-league. Accessed 4 Jan 2017.
- Graham, D. (2014). The Jewish Population of Australia: Key Findings from the 2011 Census. Sydney: JCA.
- Jakubowicz, A. (2010). Australia’s Migration Policies: African Dimensions. Background Paper for African Australians: A Review of Human Rights and Social Inclusion Issues. Australia: Australian Human Rights Commission. https://www.humanrights.gov.au/publications/african-australians-project-australia-s-migration-policies-africandimensions
- Jakubowicz, A. (2011). Empires of the Sun: Towards a Post-Multicultural Australian Politics. Cosmopolitan Civil Societies: An Interdisciplinary Journal, 3(1), 65–86.
- Jakubowicz, A. (2016). First the Word, Then the Deed: How an ‘Ethnocracy’ Like Australia Works. The Conversation. https://theconversation.com/first-the-word-then-the-deed-how-an-ethnocracy-like-australia-works-69972. Accessed 12 Mar 2017.
- Killalea, D. (2016). Nova Peris Owns Racist Troll Who Posted Shocking Facebook Tirade. news.com.au. http://www.news.com.au/technology/online/social/nova-paris-owns-racist-troll-who-posted-shocking-facebook-tirade/news-story/b234d9c204a77cb50c1ffbb8f495f33e
- Loewenstein, A. (2014, May 6). Australia’s Far-Right Fringe Groups Must Not Be Allowed to Become Mainstream. The Guardian. https://www.theguardian.com/commentisfree/2014/may/06/australias-far-right-fringe-groups-must-notbe-allowed-to-become-mainstream. Accessed 4 Jan 2017.
- Mannix, L. (2015). Woman Charged After Facebook Call to Burn Down Mosques Goes Viral. The Age, Fairfax. http://www.theage.com.au/victoria/woman-charged-after-facebook-call-to-burn-down-mosques-goes-viral-20151222-gltqz9.html. Accessed 13 Apr 2017.
- Mantle, G. (2015). Killing Indigenous Australians Is Not a Game! https://www.change.org/p/amazon-killing-indigenous-australians-is-not-a-game
- Oboler, A. (2008). The Rise and Fall of a Facebook Hate Group. First Monday, 13(11). https://doi.org/10.5210/fm.v13i11.2254.
- Olding, R. (2014). Anti-Islamist Nathan Abela Charged Over Shooting at His Home. http://www.smh.com.au/nsw/antiislamist-nathan-abela-charged-over-shooting-at-his-home-20140418-36viw.html. Accessed 4 Jan 2017.
- PV Reviews. (2015). Survival Island 3: Australia. YouTube Android Gameplay. https://gaming.youtube.com/watch?v=pH9GJt0J0qs. Accessed 12 Nov 2016.
- Ross, M. (2009). Student Race Hate Posts Anger Jewish Group. http://www.abc.net.au/news/2009-12-29/studentrace-hate-posts-anger-jewish-group/1192842. Accessed 4 Jan 2017.
- Rubinsztein-Dunlop, S. (2014). Inside the Far-Right Group Spreading Fear Through Muslim Communities. http://www.abc.net.au/news/2014-04-21/inside-the-far-right-group-spreading-fear-through/5402494. Accessed 4 Jan 2017.
- Semmler, C. (2014, December 12). Could the Sydney Siege Have Been Predicted and Prevented? The Conversation. https://theconversation.com/could-the-sydney-siege-have-been-predicted-and-prevented-35608. Accessed 17 Jan 2017.
- Timofeeva, Y. A. (2002). Hate Speech Online: Restricted or Protected? Comparison of Regulations in the United States and Germany. Journal of Transnational Law & Policy, 12(2), 253–285.