September 23, 2021

Taking On Tech: Ángel Díaz Explains How Data On Content Moderation Can Expose Harms Against Marginalized Users


Taking On Tech is an informative series that explores artificial intelligence, data science, algorithms, and mass censorship. In this report, For(bes) The Culture examines how social media policies disproportionately target marginalized groups.

The Brennan Center for Justice recently published a report titled “Double Standards in Social Media Content Moderation.” The document examines how discretionary policies place marginalized communities under scrutiny while failing to protect them from harm. Vulnerable groups are disproportionately targeted for policy violations, typically receive heavier penalties, and can seldom appeal a platform’s actions against them. At the same time, users subject to this over-enforcement say their abusers are permitted to perpetrate violence with impunity.

According to the report, platforms like Facebook, Twitter, and YouTube are setting a dangerous precedent by deprioritizing ethical content moderation. Experts believe these companies and their subsidiaries are constricting activists’ speech under the guise of community safety. As platforms continue implementing new rules, marginalized users report escalating abuse. This raises the question: who exactly are these policies protecting?

Social media companies are being urged to reorient their modus operandi to center the safety of marginalized groups. Predictably, top executives are responding to critical concerns by denying any implicit bias in their approach to content moderation. Without transparency, oversight, and regulatory governance in place, making the case for discrimination can seem daunting.

Ángel Díaz is a former counsel in the Liberty & National Security Program at the Brennan Center for Justice. He co-authored the report and recently became a lecturer at the UCLA School of Law. For(bes) The Culture spoke to Díaz about its findings.

For(bes) The Culture: What inspired you to advocate for marginalized communities through policy work? 

Ángel Díaz: Between undergrad and law school, I worked in the legal department at Google. That’s where I was exposed to policy and the ways in which private rules impact public discourse. After law school, I worked at a couple of law firms, where I helped tech companies draft intentionally broad and vague policies with built-in flexibility over how the rules are enforced. After leaving my last firm, I got a job at the Brennan Center, where I focused primarily on two areas. One was police surveillance and its impact on freedom of expression and equal protection. The other was content moderation and how those decisions impact marginalized groups.

For(bes) The Culture: Do you feel social media platforms are genuinely invested in the safety of marginalized communities?

Ángel Díaz: These policies carry certain ideas around freedom of expression, human rights, and user safety. But when you look deeper, they reflect a series of choices regarding whose voices to protect and whose to burden. Those decisions map pretty neatly onto existing power dynamics.

For(bes) The Culture: How would you say platform policies reinforce those dynamics?

Ángel Díaz: Most of these policies are designed to protect public figures and powerful constituencies. When platforms enforce rules that are overbroad, they tend to apply them only to marginalized groups. I acknowledge that content moderation is hard and that there will inevitably be mistakes. You can’t get it right all the time. However, they’ve chosen to protect public figures, elected officials, and powerful individuals. These people are often the major drivers of harassment, hate speech, and violence. Focusing on higher-profile accounts is a more logical way to intervene. It’s better than what we have now: overbroad enforcement for the marginalized and a very measured, light-handed approach for the powerful.

For(bes) The Culture: What do you think may be preventing company executives from making equal enforcement an immediate priority?

Ángel Díaz: The decision makers at the top are privileged and have a pretty limited worldview. That limits their understanding of the dangers marginalized groups face. Leadership doesn’t believe the threats are as serious as we say until something like the Capitol insurrection unfolds. Then it becomes undeniable.

For(bes) The Culture: In the report you emphasize the importance of data collection. What role does data play in mitigating the abuse marginalized groups are exposed to?

Ángel Díaz: Detailed data can help us document harm. Those findings can empower lawyers and advocates to sue these companies and hold them accountable for the harm they perpetuate. Right now, platforms choose what to share and the metrics they use to share it. We have no meaningful way of checking their homework. If we don’t understand the full picture of what is happening, we can’t advocate for regulation in an intelligent way.

For(bes) The Culture: Facebook recently justified withholding requested data for public interest research, expressing concern for users’ privacy. What are your thoughts on their explanation?

Ángel Díaz: Facebook has said, “We’re under a consent decree with the Federal Trade Commission, so we can’t share this kind of information or facilitate this kind of research because it would be in violation.” There’s value in protecting the privacy of their users, and I think that is universally agreed upon. But pretending the FTC decree prevents them from facilitating public interest research is just not true. There was a recent letter from the FTC to Facebook saying, “Do not use this consent decree to justify the actions you’re taking.” The FTC was only asking them to inform users on how their data is being used. It’s misleading to pretend there are restrictions in place preventing them from creating solutions. The current regulatory regime is very permissive about what they’re allowed to share or conceal.

***For(bes) The Culture contacted a Facebook spokesperson for comment. No official statement has been given at this time.***

For(bes) The Culture: Your report addresses the inconspicuous way in which Facebook announces its policy updates. Do you believe there should be firmer guidelines on how legal disclosures are made?

Ángel Díaz: It’s almost a full-time job just to understand what is and isn’t allowed on their platform. I have to search for clues and piece it all together. They have a blog, but sometimes they make announcements on Twitter, which isn’t even their platform. Oftentimes they will make an announcement but not implement it into their community standards. It’s not difficult to inform users of new changes; there is more than enough protocol in place. Their choice not to follow it reflects an attempt to fly under the radar because they know there may be backlash.

For(bes) The Culture: How well do you think social media companies have moderated COVID-19 related content?

Ángel Díaz: Warning labels are pretty useful interventions to have, and they can actually break this binary of either leaving content up or taking it down. It’s useful to have contextual tools that educate people. However, Facebook’s choice to attach the COVID-19 warning label to all COVID-related content isn’t very helpful. People simply become desensitized to it. At this point, whenever I see the warning label, I ignore it. It’s not saying, “Hey, this is actually misinformation about COVID. You should learn more about it from this place.” It’s like they’re saying, “Well, we warned you. Whatever happens next is on you.” That warning label almost feels designed to fail.

For(bes) The Culture: What are your thoughts on government intervention? Should there be federal laws in place to help drive transparency?

Ángel Díaz: Right now, their disclosures are mostly focused on how much content they’ve taken down. That doesn’t necessarily track their success. Which particular communities are being impacted by all of those removals? It’s not hard to track. They can say, “Hey, we removed X pieces of content that were hate speech against Black people. We removed X pieces of content that were harassment towards women.” There are ways of gathering better data on who is impacted by these decisions, and how. 
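The kind of disaggregated disclosure Díaz describes is straightforward to model. The sketch below is a hypothetical illustration of aggregating removal records by the rule cited and the community affected; the field names and sample data are invented for the example and do not reflect any platform’s actual reporting format.

```python
# A minimal sketch of the disaggregated disclosure Díaz describes.
# All field names and sample records are hypothetical illustrations,
# not an actual platform API or reporting format.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Removal:
    policy: str            # rule cited for the takedown, e.g. "hate_speech"
    targeted_group: str    # community the removed content attacked
    appealed: bool         # whether the affected user appealed
    reinstated: bool       # whether the content was restored on appeal

# Hypothetical sample of removal records a platform could publish in aggregate.
removals = [
    Removal("hate_speech", "Black users", appealed=True, reinstated=True),
    Removal("harassment", "women", appealed=False, reinstated=False),
    Removal("hate_speech", "Black users", appealed=True, reinstated=False),
]

# Aggregate by (policy, targeted group) so readers can see who bears the impact.
by_group = Counter((r.policy, r.targeted_group) for r in removals)
for (policy, group), count in sorted(by_group.items()):
    print(f"{policy}: {count} removal(s) affecting {group}")

# The reinstatement rate on appeal is one rough proxy for over-enforcement.
appealed = [r for r in removals if r.appealed]
if appealed:
    rate = sum(r.reinstated for r in appealed) / len(appealed)
    print(f"reinstated on appeal: {rate:.0%}")
```

Publishing counts at this level of granularity, rather than a single takedown total, is what would let outside researchers see which communities bear the brunt of enforcement.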

For(bes) The Culture: What steps should be taken if the government decides to intervene? 

Ángel Díaz: There are a lot of difficult questions around that from a legislative perspective. As someone whose work also relates to police surveillance, I’m nervous about the idea of just handing over a bunch of platform data to the government. We have a long history of law enforcement and the intelligence community targeting civil rights activists and protestors. I would be worried about facilitating that kind of surveillance under the guise of “serving the public interest.” That’s not to say we shouldn’t have the government involved in regulation. We just need to be more thoughtful when facilitating public interest research. There should be guidelines in place that say, “You’re not allowed to give this data to the cops or intelligence agencies.” If a researcher wants the data for the purpose of surveillance, the answer should be no. We can be strategic in overseeing research so that it doesn’t unintentionally facilitate surveillance.

For(bes) The Culture: What might that surveillance look like?

Ángel Díaz: In countries like Israel, there are Internet Referral Units. Government actors sit at computers all day and flag pieces of content they want removed. This enables the government to have platforms delete content it wouldn’t be allowed to remove through legal channels. America has a parallel system of censorship that is deeply invisible, and platforms have a lot of incentive to comply with those government requests. One of the things we called for in the report is full transparency on whether platforms are removing content on behalf of a government agency. If an agency was involved in a removal, we also need to know which rule the request was made under.

For(bes) The Culture: What would you say to platform executives who still deny the experiences of marginalized users on their platforms?

Ángel Díaz: As much as social media companies love telling people of color we’re wrong about how these systems work, it always comes out that there’s truth to our experiences. Develop apps and policies that actually support marginalized communities. Given all of our contributions to these platforms, it’s long overdue.
