An Investigation into Algorithmic Bias in Content Policing of Marginalized Communities on Instagram and Facebook


Publish Date: October 22, 2019


Table of Contents


Our Story: Why We Collected This Data and Why We Are Releasing It


Survey Basics


Part I: The Data - Who is Affected by Censorship


Part I: The Data - Reasons Given for Disabling Accounts or Rejecting Ads


Part II: Community Findings - Users Feel Instagram is Targeting Them Based on Their Identity


Part II: Community Findings - Policies Harm Those They Are Supposed to Protect


Part II: Community Findings - Policies Breed Distrust of Platform and Poor Community Relations


Part III: General Observations - High Numbers of False Flagging for Queer Users


Part IV: Advertising Bias Against Women-Led Businesses


Submitted Censored Content


Part V: Further Inquiry and Support


Our Story:


Salty is a membership-driven digital newsletter and platform committed to amplifying the voices and visibility of women, trans, and nonbinary people. We launched on International Women’s Day in 2018 - and were unceremoniously kicked off Mailchimp a few hours later, with no concrete reason given beyond a violation of “community guidelines.” Since then, we’ve grown to over 130,000 users, 45,000 newsletter subscribers, and 3 million monthly impressions across our platforms.


In our first year and a half, Salty has faced digital harassment and hacking, been denied access to resources, and been ‘accidentally’ booted from platforms - including Instagram. Our lived experience tells us that there is an unconscious bias that shapes our digital environment.


Algorithms are the backbone of content moderation. Algorithmic models produce probability scores that assess whether user-generated content abides by the platform’s community guidelines. On the subject of offensive content, social media scholar Tarleton Gillespie writes: “State-of-the-art detection algorithms have a difficult time discerning offensive content or behavior even when they know precisely what they are looking for…automatic detection produces too many false positives; in light of this, some platforms and third parties are pairing automatic detection with editorial oversight” (Custodians of the Internet, pg. 98).
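To make that mechanism concrete, here is a minimal sketch of a threshold-based moderation pipeline, assuming a generic classifier that returns a violation probability. The keyword scorer, term list, and thresholds are hypothetical stand-ins for illustration only, not Facebook’s or Instagram’s actual models or values.

```python
# Illustrative sketch only: the scorer and thresholds below are toy
# stand-ins, not Facebook/Instagram's actual models or values.

BANNED_TERMS = {"escort", "spam"}  # hypothetical term list, for illustration

def score_content(post_text: str) -> float:
    """Toy stand-in for a trained model returning P(post violates guidelines)."""
    words = post_text.lower().split()
    hits = sum(1 for word in words if word in BANNED_TERMS)
    return min(1.0, 10 * hits / max(len(words), 1))

def moderate(post_text: str, remove_at: float = 0.9, review_at: float = 0.5) -> str:
    """Pair automatic detection with editorial oversight for borderline scores."""
    score = score_content(post_text)
    if score >= remove_at:
        return "remove"        # high-confidence automatic action
    if score >= review_at:
        return "human_review"  # uncertain cases routed to human moderators
    return "allow"

print(moderate("portraits of our community"))     # -> allow
print(moderate("escort services available now"))  # -> remove
```

The false positives Gillespie describes occur when a scorer like this assigns a high probability to innocuous content; routing borderline scores to human review is the “editorial oversight” he refers to.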




In July 2019, Instagram’s algorithms rejected Salty’s ads because they claimed we were promoting “escort services.” The ads were simply portraits of our Salty community - women, trans, and non-binary people - some were disabled, some were plus-sized, most were women of color.


Unable to rectify the problem via automated channels, we called this ‘false flag’ to the attention of our community, and as the press started to pay attention, Facebook reached out to resolve the issue. After admitting the ads were falsely flagged, they reinstated them. Facebook publicly agreed to meet with Salty to discuss ways to make its policies more inclusive. We figured it was the beginning of a powerful conversation.


In preparation for our meeting with the Facebook policy team, we collected data from our community to better tell the story of the way these algorithms affect us, and to formulate recommendations to make Facebook/Instagram a safer place for women, trans, and non-binary people. We released a survey on our website and encouraged our readers to submit their experiences of the ways in which Instagram/Facebook rejects ads, closes down accounts, or deletes posts.


Unfortunately, over the past two months, Facebook has ceased communication with Salty, and has made no indication that they plan (or ever planned) on actually meeting with us to discuss policy development.


We believe the information included in this report is newsworthy and of public importance, and with the consent of the participants, we’ve decided to make it available publicly.


Survey Basics


Salty distributed a survey to its followers on Instagram and via its newsletter. As a community for and by women, nonbinary people, and queer folx, Salty’s follower base is heavily composed of these demographic groups. As such, the survey surfaced issues that are affecting these communities. The survey is not meant to be representative of all Instagram users.


One further caveat to keep in mind with the data is that we cannot see the full universe of posts or accounts that were flagged and reviewed, so we are not making direct causal claims. As with any survey data, the response rate to certain questions was somewhat patchy, so the data visualizations should be read against not just the total number of respondents, but the number of respondents per question.
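To illustrate that caveat, the sketch below (with hypothetical column names and data, not our actual survey fields) shows how the per-question denominator diverges from the total respondent count when questions are skipped.

```python
# Hypothetical example: survey responses with one column per question and
# None where a respondent skipped that question.
import pandas as pd

responses = pd.DataFrame({
    "identity":     ["queer", "cis woman", None, "nonbinary"],
    "action_taken": ["post removed", None, "ad rejected", "account disabled"],
    "reason_given": [None, None, "community guidelines", None],
})

total_respondents = len(responses)        # 4 respondents overall
per_question_n = responses.notna().sum()  # valid answers per question

print(per_question_n)
# identity        3
# action_taken    3
# reason_given    1
# Percentages should be computed against per-question N, not total_respondents.
```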


These findings have been collated by the Salty Algorithmic Investigation team on behalf of The Coalition for Digital Visibility.


Part I, The Data


Who is Affected by Censorship


The demographics of our survey respondents (118 people total) reflect our readership. Many of the respondents identified as LGBTQIA+, people of color, plus-sized, and sex workers or educators. All of these Instagram users experienced friction with the platform in some form, such as content being taken down, disabled profiles or pages, and/or rejected advertisements.



[Figure: demographics of survey respondents]


Part I, The Data


Reasons Given for Disabling Accounts or Rejecting Ads


The vast majority of respondents said they were given no reason for the actions taken against their account, and were simply told they had violated "community guidelines" - an extremely vague response.



[Figure: reasons given for disabling accounts or rejecting ads]


Part II, Community Findings:


  1. BIPOC users, body-positive advocates, women, and queer folx feel Instagram is targeting them based on their perceived identity.


  2. Policies Harm Those They Are Supposed to Protect: Users who come under targeted harassment due to their identity are the ones getting reported and banned.


  3. Distrust of Platform: Preventing users from easily appealing actions and not clarifying why a decision was reached leads to greater distrust of the platform and fosters the idea of arbitrary shadow-banning.



Part III, General Observations in Our Survey:

A significant number of accounts in the survey appear to have been disabled in error. For example, close to half of the accounts that were disabled were later reinstated. This suggests a problem with false flagging.


[Figure: disabled accounts later reinstated]


Part IV, Advertising Bias Against Women-Led Businesses:


  1. Respondents in our survey who were unable to advertise were more likely to identify as cis women than any other identity type.


  2. One respondent contrasted this with the ads the platform does approve: “…for erectile dysfunction medication, penis pumps, and ‘manscaping’ razors. Why are penises normal but the female and non-binary body considered a threat?”


Submitted Censored Content


[Images: examples of censored content submitted by respondents]


Part V, Further Inquiry


Salty will be continuing this investigation - to take part in our next survey, please click here.


Part VI, Support

Salty relies on contributions and volunteers to survive. If you believe the work we are doing with this kind of research has value, please click here to become a Member.


Or click here to make a one-off contribution.


For follow-up questions regarding this report, email [email protected]

