Taking a stand against white supremacy, popular knitting site Ravelry bans content supporting Trump

Canadian activists demand an end to white supremacy during an anti-racism protest held at the United States Embassy in Ottawa, Canada, on Aug 22, 2017. (Photo credit: Obert Madondo)

By Obert Madondo | Jul. 1, 2019

Ravelry, a free social networking service dedicated to knitting, crocheting, and other yarn crafts, recently announced a new policy banning content supportive of U.S. President Donald Trump and his administration.

In a statement, Ravelry, “the Facebook of knitting,” said the ban was a stand against white supremacy. The ban covers “forum posts, projects, patterns, profiles, and all other content” expressing support for the U.S. president and his controversial policies and ideas.

Ravelry’s new policy update states:

We cannot provide a space that is inclusive of all and also allow support for open white supremacy. Support of the Trump administration is undeniably support for white supremacy.

Policy notes:

  • You can still participate if you do in fact support the administration, you just can’t talk about it here.
  • We are not endorsing the Democrats nor banning Republicans.
  • We are definitely not banning conservative politics. Hate groups and intolerance are different from other types of political positions.
  • We are not banning people for past support.
  • Do not try to weaponize this policy by entrapping people who do support the Trump administration into voicing their support.
  • Similarly, antagonizing conservative members for their unstated positions is not acceptable.

The policy update also clarified that the social network still welcomes supporters of the Trump administration and won’t tolerate users deliberately antagonizing other users holding conservative views.

“We are definitely not banning conservative politics. Hate groups and intolerance are different from other types of political positions,” the new policy said.

According to The Washington Post:

The site did not explain which Trump policies it believes signify white supremacist ideology, though the president was roundly criticized for not condemning white nationalist violence after Charlottesville’s 2017 Unite The Right rally.

Ravelry, a private site launched in 2007, has eight million members, 125,000 Twitter followers, and 80,000 followers on Instagram.

National Public Radio (NPR) reports that Ravelry has hosted “a scattering of politically-based patterns” and “impassioned discussions” since Trump’s election in 2016:

Perhaps the most popular of these is the pink “pussyhat” that became ubiquitous at women’s marches in 2017 and came to symbolize a feminist rallying cry against Trump for his remarks about women.

In another veer into explicitly political territory, one scarf pattern creates an illusion that makes it look like “innocuous stripes from the front, but says F*** TRUMP when viewed from an angle.”

There are also pro-Trump projects. A member called Deplorable Knitter has posted several hat and scarf patterns that echo the “Make America Great Again” slogan, along with “Build the Wall” and Trump 2020 images.

Ravelry has taken strong public stances against white supremacy and other forms of prejudice before, according to Catherine Shu, a member for 11 years who writes for TechCrunch:

This is not the first time the site has taken action against racist, xenophobic and white supremacist sentiment. For example, it does not allow patterns with the Confederate flag and in January removed a pattern for a hat that said “Build the Wall”…

With its policy update today, Ravelry has the potential to launch important discussions about the role (and responsibilities) that online communities and their moderators have in shaping public discourse, starting within specific groups and spreading further.

Ravelry says its new policy was influenced by a similar change made last year by RPG.net, a roleplaying game site. Announcing its policy banning support for the Trump administration in October 2018, RPG.net stated:

We will not pretend that evil isn’t evil, or that it becomes a legitimate difference of political opinion if you put a suit and tie on it.

We are banning support of Donald Trump or his administration on the RPGnet forums. This is because his public comments, policies, and the makeup of his administration are so wholly incompatible with our values that formal political neutrality is not tenable. We can be welcoming to (for example) persons of every ethnicity who want to talk about games, or we can allow support for open white supremacy. Not both.

Ravelry’s ban on content in support of Trump comes at a time when social media companies such as Facebook, Google, Reddit, and Twitter are struggling to police hate speech and divisive rhetoric on their platforms.

In recent years, neo-Nazis, white supremacists, far-right terrorists, Islamic jihadists, antisemites, and racists have used the Internet and popular social media platforms to radicalize and inspire hate. Their conspiracy theories and extremist calls for violence on social media have led to numerous acts of mass murder. In March, for example, a terrorist shot and killed 51 people at two mosques in Christchurch, New Zealand. The shooter, a white supremacist, was radicalized online. His manifesto, entitled “The Great Replacement,” referenced ideas and material spread by white nationalists on the Internet and social media platforms.

In the name of promoting free speech, social media platforms long took positions that effectively tolerated white supremacy and hate speech.

Facebook had long allowed white supremacist content disguised as white nationalism and white separatism to thrive on its platform, banning only content that explicitly glorified white supremacy. The social media behemoth ignored civil rights groups, academics, race relations experts, and human rights activists who insisted that “white nationalism and white separatism are white supremacy”. Last September, the Lawyers’ Committee for Civil Rights Under Law sent a letter to Facebook stating that the company’s “approach to white supremacist, white nationalist, and white separatist content on Facebook is misguided, inconsistent, and dangerous.” The organization also pointed to Facebook’s “failure to recognize that white nationalism and white separatism are intrinsically segregationist subsets of white supremacist ideology.”

The New Zealand terrorist attack compelled Facebook and other social media companies to revisit their positions on content praising and supporting white supremacy, white nationalism and white separatism. In the aftermath of the attack, most have taken a strong stance against violent extremist content.

Facebook admitted that white nationalism “cannot be meaningfully separated from white supremacy and organized hate groups.” The social network banned “praise, support and representation of white nationalism and [white] separatism” on its platform and on Instagram.

Most notably, Facebook now agrees with critics who have insisted all along that “white nationalism and white separatism are white supremacy”. According to Facebook:

It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.

Our policies have long prohibited hateful treatment of people based on characteristics such as race, ethnicity or religion – and that has always included white supremacy.

Facebook also acknowledged its critics’ role in the company’s change of heart:

We didn’t originally apply the same rationale to expressions of white nationalism and separatism because we were thinking about broader concepts of nationalism and separatism – things like American pride and Basque separatism, which are an important part of people’s identity.

But over the past three months our conversations with members of civil society and academics who are experts in race relations around the world have confirmed that white nationalism and separatism cannot be meaningfully separated from white supremacy and organized hate groups. Our own review of hate figures and organizations – as defined by our Dangerous Individuals & Organizations policy – further revealed the overlap between white nationalism and separatism and white supremacy. Going forward, while people will still be able to demonstrate pride in their ethnic heritage, we will not tolerate praise or support for white nationalism and separatism.

In the aftermath of these admissions, Facebook permanently banned scores of “dangerous” far-right, white supremacist, and white nationalist groups and individuals based in Canada, the United States, Australia, and several European countries under its “Dangerous Organizations and Individuals” policy.

Facebook banned the UK-based Britain First, the British National Party (BNP), and the English Defence League. Canada-based white nationalist groups banned by Facebook include the Aryan Strikeforce, Canadian Nationalist Front, Wolves of Odin, and Soldiers of Odin.

Facebook’s ban on individuals netted far-right figures with a massive online reach, including prominent US conspiracy theorist Alex Jones; Milo Yiannopoulos, a former employee of the far-right American website Breitbart News; Canadian neo-Nazi Kevin Goudreau; Laura Loomer, a prominent US anti-Muslim figurehead; and Canadian far-right media commentator Faith Goldy, who has accused Jews and people of color of “replacing” white populations in Canada, Europe, and the United States.

Meanwhile, in June, Reddit banned r/frenworld, a 60,000-member subreddit that “was associated with the online neo-Nazi movement” and spread “thinly veiled anti-Semitism”.

Ravelry’s ban on support for Trump is a reminder that the toxicity of online discourse now also threatens “non-political” online communities whose foundations are built on diversity and inclusivity.

Still, we must recognize the negative impact of social media platforms’ content moderation rules and practices on human rights and freedoms, including the freedom of expression online. Current speech moderation rules are opaque, uneven, often biased, and, according to the San Francisco-based Electronic Frontier Foundation (EFF), sometimes harm “those for whom the Internet is an irreplaceable forum to express ideas, connect with others, and find support.” At the beginning of June, the EFF, Syrian Archive, and Witness jointly published a white paper entitled “Caught in the Net: The Impact of ‘Extremist’ Speech Regulations on Human Rights Content,” in response to the “Christchurch Call to Action,” which was billed as a “commitment by Governments and tech companies to eliminate terrorist and violent extremist content online.” The paper detailed “several concrete instances in which content moderation has resulted in the removal of content under anti-extremism provisions, including speech advocating for Chechen and Kurdish self-determination; satirical speech about a key Hezbollah figure; and documentation of the ongoing conflicts in Syria, Yemen, and Ukraine.”

Obert Madondo is an Ottawa-based blogger, photographer, digital rights enthusiast, former political aide, and former international development administrator. He’s the founder and editor of these blogs: The Canadian Progressive, Zimbabwean Progressive, and Charity Files. Follow him on Twitter: @Obiemad
