I’m Casey Newton, Silicon Valley editor at The Verge. Earlier this year, I wrote about the depressing working conditions of Facebook moderators at a campus in Phoenix, where moderators are underpaid and struggling with PTSD. Last week, I published a new investigation into the bleak workplace conditions at a second moderation facility, this one in Tampa, FL. Three former moderators went on the record — the first Facebook moderators in North America to do so — revealing extensive new details about the human cost of these low-paying jobs.

The Trauma Floor: The secret lives of Facebook moderators in America https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona

Bodies in Seats: At Facebook’s worst-performing content moderation site in North America, one contractor has died, and others say they fear for their lives https://www.theverge.com/2019/6/19/18681845/facebook-moderator-interviews-video-trauma-ptsd-cognizant-tampa

Proof: https://i.redd.it/c0cqfbod5c631.png

EDIT: Thanks very much for following along, everyone! I look forward to bringing you more of these stories. If you have anything to share, please get in touch: [email protected].

Comments: 100 • Responses: 35

FreeFreddieHugs (21 karma)

Have you talked to regular employees inside Facebook about how they feel about all of this?

TheVergeDotCom (27 karma)

I have, including top executives. Generally speaking, I'd say the company has welcomed the scrutiny here. (One thing I like about covering Facebook is that, more than other tech giants, they tend to be responsive to public pressure.) And there's a sizeable group of employees working on these issues who believe that working conditions will improve for moderators over time as they roll out new features and develop additional support programs. The only static I tend to get from employees is from those who think I'm singling Facebook out — other big platforms, including YouTube and Twitter, use the same vendors and may have similarly bad working conditions. So I do want to turn my focus there as well. — Casey

ojaneyo (9 karma)

But do you think there are talks behind closed doors that are really in crisis mode, trying to control the message here? Having worked in corporate culture, I find this happens more often than not in the midst of a negative story that may impact the stock price/shareholders.

Note: I think FB, YT, and the others need to suck it up and put these folks on their headcount, if for nothing else than overall morale.

TheVergeDotCom (8 karma)

I'm definitely still working to understand all the dimensions of how this is playing out inside of Facebook. If you know something, please get in touch! — Casey

jmisener (17 karma)

Do you think Facebook hiring moderators as full-time employees instead of contractors would lead to better working conditions and better moderation?

TheVergeDotCom (32 karma)

Absolutely. If these moderators were sitting shoulder to shoulder with their peers in Menlo Park, it's unimaginable to me that their bathrooms would be filthy or their desks littered with fingernails. If they were crying at their desks, colleagues would comfort them. And the jobs would pay better, attracting higher-quality candidates who would likely do a better job enforcing the policies. — Casey

slinky317 (15 karma)

I think a big part of this is that Facebook is outsourcing its moderation to third-party companies, which are then more concerned with keeping their contract than with the quality of the work. Sometimes those interests overlap, but it seems like often they don't - and Facebook can escape blame by just shifting it onto the third-party company. It would then hire another company that makes the same mistakes, and so on.

In your opinion, what would change (for better or for worse) if Facebook brought moderation in-house rather than contracting it out?

TheVergeDotCom (8 karma)

Absolutely! See my reply to u/jmisener. — Casey

Togapr33 (13 karma)

Do you think companies like Facebook and Google should be broken up?

Are they the Standard Oil of today?

TheVergeDotCom (20 karma)

I do tend to favor this approach, yes. I think a world where Facebook had to compete against WhatsApp and Instagram for users would benefit users, Facebook, and the world. — Casey

Feresd96 (12 karma)

Hi Casey! Given all the bad things that happen on almost all social networks, how do you deal with it? Do you use them or are you trying to ditch them?

TheVergeDotCom (23 karma)

I still use social networks regularly and don't feel too many ill effects — save for when I've read too much Twitter for the day and absorb all that global anxiety. For me, Facebook is a phone book and an events manager, and it works fine for that purpose. On Instagram, I tend to follow only people I know in real life, and it's generally pleasant to look at for a few minutes a day. As an aside, one reason I think it took so long to uncover the darker side of social networks is that many people never see it. Certainly I've never come across videos of murder or mayhem on Facebook or Instagram — thanks, I now realize, to the invisible work of contractors. — Casey

RhaegarReed (11 karma)

What is going to happen to those who broke their NDA now?

TheVergeDotCom (15 karma)

Hopefully nothing! — Casey

ealixir (11 karma)

What do you think is the single worst image an employee has described seeing to you?

TheVergeDotCom (23 karma)

I think it's a tie between a video of a girl laughing while her friend is raped in the background and another girl screaming while she's being cut up. The latter video was found to be a hoax, but a moderator I spoke with didn't realize that at the time. — Casey

tgupscoffee (9 karma)

  1. As you mentioned, many other companies have to deal with poor working conditions that result from the need to filter abusive content. What companies do you feel are doing WELL in this space? What companies are treating people fairly given the circumstances?

  2. Do we always need humans in the loop for moderation? In other words, do you believe AI will never totally catch up to the latest trolls/memes people will use to post abusive content?

  3. Since posting the story, what additional facts/feedback most surprised you? Did anything you hear give you hope?

  4. Your opinion, not the Verge — is Facebook good for society on balance?

TheVergeDotCom (12 karma)

  1. Recently I heard from a YouTube moderator that he gets two hours per work day of "wellness time" to decompress from the stress of the job. That compares with nine minutes per day at Facebook. I'd say YouTube has the more humane policy there.
  2. I do believe automation will eventually shoulder much of the burden. But we'll also need humans in the loop, because so much moderation requires judgment calls that machines are not equipped to make. Consider also that Facebook policies are updated almost every day — it's not clear to me that AI will ever be able to train itself perfectly in real time.
  3. I've been overwhelmed by dozens of messages from moderators around the world. I'm surprised at how bad working conditions tend to be in call centers generally. What gives me hope is the pride that these workers take in cleaning up the internet for the rest of us, despite the low pay and awful physical environments they often work in.
  4. I honestly don't know how to approach answering this question. More data is needed, basically! — Casey

secondman02 (8 karma)

What platforms or processes do you believe provide the best solution for moderation?

I find that whether you're Facebook or a smaller forum, problems with moderation inevitably surface. This seems to be a problem the internet has always had, but it's now blown up to a public scale because of the size of these companies.

TheVergeDotCom (24 karma)

I actually really like the Reddit model! Reddit sets a "floor" for moderation that no subreddit can fall below — so, no one can post terrorist content. But then each subreddit can also raise the "ceiling" — so individual subreddits can create new rules that make sense for their communities. And then volunteer moderators work to keep the conversation civil and productive. The model seems much more scalable and sustainable to me than what we see on Facebook or YouTube. — Casey
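
To make that floor-and-ceiling idea concrete, here is a minimal Python sketch of layered rule evaluation. The rule labels and communities are hypothetical; this is not Reddit's actual implementation:

    # A platform-wide "floor" applies everywhere; each community can raise
    # the "ceiling" by adding stricter rules of its own.
    SITE_FLOOR = {"terrorist_content", "child_exploitation"}

    COMMUNITY_RULES = {
        "r/squaredcircle": {"low_effort_post"},
        "r/news": {"opinion_piece", "low_effort_post"},
    }

    def violations(post_labels: set, subreddit: str) -> set:
        """Rules a post breaks: the site-wide floor first, then community rules."""
        community = COMMUNITY_RULES.get(subreddit, set())
        return (post_labels & SITE_FLOOR) | (post_labels & community)

    print(violations({"low_effort_post"}, "r/news"))     # {'low_effort_post'}
    print(violations({"low_effort_post"}, "r/pics"))     # set(): allowed elsewhere
    print(violations({"terrorist_content"}, "r/pics"))   # blocked everywhere

The design point is that the floor is enforced centrally while everything above it is delegated to volunteers, which is what makes the model scale.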

marcodj (8 karma)

With this situation getting more attention, what actions (if any) do you know of that Cognizant has taken to try to do some damage control?

TheVergeDotCom (12 karma)

For one thing, they stopped allowing workers to access email from home! They also had a town hall meeting reminding all their current workers that they are still under NDA. More broadly, they've tried to assure workers that they take their safety and happiness seriously. The workers I've talked to have not been convinced. — Casey

tayswiftie (7 karma)

With so many users, and its services even used to distribute information for the US government, how does FB escape Big Brother and regulation?

TheVergeDotCom (15 karma)

Facebook has escaped regulation because we have a divided Congress with many more urgent-seeming issues in front of it. There's growing interest in regulating big tech companies, including from Congress, but right now there's very little agreement on what problem the regulations should solve. And so we get gridlock. — Casey

fumanshoes (7 karma)

Did employees you talked to think this situation could ever be improved? If so, how? Did any say, "by all means turn this over to bots"?

TheVergeDotCom (16 karma)

Bots are definitely part of it. Three other things would help: screening moderators before hiring to ensure they are willing and able to shoulder the psychological burden of seeing so much disturbing content; paying them more (at least $45K a year); and offering them counseling after they leave in case they do develop PTSD. Facebook is talking about all of these things; the question is whether it will implement them, and how quickly. — Casey

JDgoesmarching (4 karma)

Let's stop with the softballs and get to the real questions:

  1. What's your go-to hair product?
  2. Does hair count toward your overall height on Tinder? Asking for a friend.
  3. Any tips on sliding into Ashley's DMs?
  4. Is The Verge providing any financial or legal help for the interviewees breaking their NDAs?
  5. What are the odds Nilay dusts off his lawyer suit to personally defend our hero sources?

Whether you answer or not, good reporting, my dude.

TheVergeDotCom (5 karma)

  1. I use a drugstore hair-paste product called Unruly from Old Spice. Highly recommended.
  2. No! I would be 6'5" even if (god forbid) bald.
  3. Absolutely do not do this — Casey

chaseqi (4 karma)

Hi, sorry it's a little off topic, but I just miss your Converge podcast so much; it was a really interesting format. Will there be a second season?

TheVergeDotCom (4 karma)

Thanks so much! Converge was a blast to do. We want to do another audio project, but we want to tie it more closely to the work I do covering the intersection of social networks and democracy. Hopefully we'll still be able to sneak in some improv. Stay tuned! — Casey

rja50 (4 karma)

Hi Casey,

Do you think a union could help Facebook workers, and if so, how do you think the workers there should handle colleagues who are resistant to unionizing or refuse to partake in union actions?

TheVergeDotCom (6 karma)

I think moderators could benefit tremendously from collective bargaining. There are 15,000 of them around the world, and many of them are working in bleak conditions. — Casey

mabjr (4 karma)

With all that is happening on Facebook and many other social media platforms, do you think the platforms should focus on what sort of content they want to host, instead of just accepting anyone and anything?

TheVergeDotCom (9 karma)

Not really — I want platforms to exist that support a wide range of political speech. I do think platforms might ask more of people trying to post certain kinds of content. Maybe posting a video should require that you verify more information about yourself, or that your account is of a certain age. But these restrictions can make life difficult or impossible for progressive activists, so there are important tradeoffs to consider. — Casey

myrke (3 karma)

Are all moderators given access to everything, from public posts/pages to Messenger to secret/closed groups, or are there separate mod groups that monitor posts depending on the status of the page/profile/group?

TheVergeDotCom (6 karma)

They can only see the reported piece of content, and maybe a little more context about the offending user or some comments on the piece. They don't get free rein to read all of Facebook. — Casey

tophkiss (3 karma)

I have a few questions

1.) Is the only reason Facebook doesn't do this in-house so that they can distance themselves from these stories? Are there other reasons? Wouldn't it be more practical to move it in-house?

2.) Are the moderators' responses used to train the AI moderators? If so, doesn't the 2% error rate have a large and lasting unseen effect?

3.) I'm confused how error is calculated. Is it only an error when the manager and moderator disagree? What about the posts that are never checked by a manager? Do they just accept that a certain percentage is wrong and move on?

TheVergeDotCom (5 karma)

  1. The main reason is that it's cheaper. But they've told me they're considering moving at least some of the work in house. I hope they do.
  2. Yes! Some jobs are dedicated to training the AI. I have heard tell of a notorious meeting in Phoenix in which moderators attempted to train the AI on the subject of "butt focus" — essentially, which photos where a butt is present are drawing too much attention to the butt. A 2 percent error rate can have bad consequences for the AI.
  3. A decision is considered an error if the quality analyst who examines a subset of the moderator's decisions disagrees with it, or if the Facebook analyst who examines a subset of the QA's decisions disagrees with the QA. Most decisions are never checked by another person. — Casey
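
For illustration, here is a toy Python sketch of how that kind of sampled-audit accuracy score might be computed. The 5 percent sample rate and the data shapes are assumptions for the example, not Facebook's or Cognizant's actual tooling:

    import random

    def accuracy(decisions, qa_review, sample_rate=0.05, seed=0):
        """Audit a random subset of (content_id, action) decisions against a
        QA reviewer's calls; unaudited decisions are never scored."""
        rng = random.Random(seed)
        audited = [d for d in decisions if rng.random() < sample_rate]
        if not audited:
            return None  # nothing sampled this period
        agreed = sum(1 for cid, action in audited if qa_review[cid] == action)
        return agreed / len(audited)

    # Hypothetical data: a moderator's calls and the QA's calls on the same items.
    decisions = [(i, "remove" if i % 7 == 0 else "keep") for i in range(1000)]
    qa_calls = {i: "remove" if i % 7 == 0 or i % 97 == 0 else "keep" for i in range(1000)}
    print(f"audited accuracy: {accuracy(decisions, qa_calls):.1%}")

The same sampling scheme can then be applied one level up, with a Facebook analyst auditing a subset of the QA's own calls.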

shmavalanche (3 karma)

In your article yesterday, "A social network banned support for Trump, will others follow," you link to a previous article with the quote "there is no systemic evidence of bias on social networks towards anything but the extremes."

And yet we have Google executive Jen Gennai on camera saying:

"We all got screwed over in 2016, again it wasn’t just us, it was — the people got screwed over, the news media got screwed over, like, everybody got screwed over — so we’re rapidly been like, what happened there and how do we prevent it from happening again,"

"We’re also training our algorithms, like, if 2016 happened again, would we have, would the outcome be different?"

"The reason we launched our A.I. principles is because people were not putting that line in the sand, that they were not saying what’s fair and what’s equitable so we’re like, well we are a big company, we’re going to say it"

"The people who voted for the current president do not agree with our definition of fairness,"

Nearly half the country voted for Trump (I did not).

Is this attitude from Google concerning to you, and does it portray a bias that goes well beyond the "extreme"?

TheVergeDotCom (1 karma)

No one has benefited from big tech more than partisans. Look around you — they have all the power! They're in the White House, they're running the Supreme Court, and they get more engagement on Facebook than any other group. Many conservatives tend to define "bias" on social networks as any time they don't get a desired outcome — didn't appear high enough in search, didn't get recommended, that sort of thing. It's disingenuous and exasperating. — Casey

shmavalanche (3 karma)

Ok, I agree with that, of course. But you're not really answering the question. My concern is platform bias (in Google's case algorithmic bias) to one agenda, which is what Jen is clearly advocating in these quotes.

shmavalanche (3 karma)

I believe platforms should be completely, unequivocally unbiased toward either side, or they should lose their Section 230 protections.

TheVergeDotCom (1 karma)

I think people who build platforms should have wide flexibility to set the rules of engagement. Otherwise you're building a platform that is legally required to empower Nazis and other horrible people. — Casey

TheVergeDotCom (2 karma)

The quotes aren't really relevant to the question, IMO. Obviously lots of Googlers voted against Trump. YouTube has also unquestionably been a boon to the far right. I think our lived reality should help us brush aside any terror of Google disadvantaging one political group over another. — Casey

Random81232 (2 karma)

Hi Casey. That was a very interesting article and it's a shame what those content moderators see and go through. I know I could never do it.

What do you think is a solution for this issue?

Also, a lot (not all) of the workplace culture and rules you described and criticized the company for are pretty standard stuff for full-time jobs that don't require advanced degrees (such as the company giving a 30-minute lunch and limited breaks). I'm not saying it's fair or right, just standard. What do you think should change in their everyday working environment?

edit: clarified that I was talking about workplace culture.

TheVergeDotCom (6 karma)

My solution: bring these moderators in house, pay them more, and offer them better support during and after their employment.

I realize other call centers have bad working conditions. But Facebook holds itself to a higher standard than those call centers, so I feel like it's important to highlight the discrepancy. — Casey

Twrd4321 (2 karma)

Would Facebook's pivot away from the news feed to more private forms of communication such as Messenger mean the need for moderation will decrease?

TheVergeDotCom (2 karma)

Good question, and I'm not quite sure. Lots of bad stuff will still be traded around via private message — the question is how much of it will still be reported. My assumption is that people will still receive awful stuff via private message, and report it to Facebook, and humans will have to look at it. — Casey

craZDude (2 karma)

Assuming the breakup of social media platforms happens and that the resulting platforms derive their content moderation policies according to their own values (e.g. Ravelry), would you be worried that people dividing themselves into platforms that align with their beliefs would drive polarization even further? Or do you think it wouldn't be that different from how things are on the current mega platforms?

TheVergeDotCom (2 karma)

I think smaller communities will actually feel less polarized. The reason big platforms feel polarized is because only the most committed partisans tend to post about politics there, and those posts receive the most engagement, making everyone feel like we're in the middle of a Civil War. If you rewind platforms back to the point that they're basically just web forums, people post more about their authentic interests and you lose the sense of perpetual crisis that tends to dominate on Facebook, YouTube, and Twitter. — Casey

xbyo (2 karma)

In your reporting, did you speak with anyone who was there at the beginning of Cognizant getting the contract? Was the situation already bad from the beginning, or did working conditions progressively deteriorate to get to the point they're at now?

TheVergeDotCom (2 karma)

I did, and there was actually a lot of optimism at the start of the project. Many of the people who participated were just out of school, and it felt like a grand new adventure. Their enthusiasm waned as the reality of the job set in, and as corporate mismanagement made their jobs unnecessarily hard. — Casey

CricTic (2 karma)

What's your perspective on how blame for these working conditions should be shared between Facebook (or other big tech companies) and Cognizant (or other service providers)? Seems like most of the bad public sentiment has been focused on Facebook, rather than Cognizant.

TheVergeDotCom (2 karma)

Everyone ought to take their share of blame. Facebook built the system, but it's Cognizant's job to administer it, and there have been multiple failures along the way. It is notable to me that other sites may have similar issues with moderators getting PTSD, but still manage to maintain clean bathrooms and work environments. So not every site is as bleak as the two I have visited. — Casey

AtLeastSomeOne (2 karma)

Five questions:

  1. How close are we to proper AI moderation of content? Did you talk to a techie who is actually working on this stuff?

  2. What is the situation with other platforms like Twitter and YouTube? Especially YouTube, given the scale. Quite sure people try to upload a lot of gore and disturbing stuff there all the time.

  3. Just like there's a central database of hashes for matching child porn, why aren't the videos that are being taken down stored in a database, to avoid re-upload? More importantly, these hashes should be made public so that other platforms can also proactively block them.

  4. Finally, do you personally think it is a strong possibility that there is no solution to this problem? That AI will never reach 100% reliability, and the number of videos will keep going up, and so will the number of moderators?

  5. Will platforms like Snapchat — that allow only selected public accounts/publishers to broadcast to a large audience — finally have their day?

TheVergeDotCom (3 karma)

  1. I'm looking into this right now! If anyone out there has any information, please email me — [email protected].
  3. Companies are legally obligated to remove child porn and terrorist content, so they take it more seriously. Hopefully the database you describe will emerge eventually. (A rough sketch of how such shared hash-matching could work appears below this list.)
  4. I think the idea of a free and open internet will always be in tension with the fact that people will upload horrible stuff and human beings will have to remove it. That's why I advocate for paying people more and taking better care of them emotionally.
  5. Certainly Snap's approach to this problem seems more humane than the alternatives! — Casey
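
On the hash-database question, here is a minimal Python sketch of how a shared blocklist of known-bad media could work. It is illustrative only: it uses a plain SHA-256, which catches byte-identical re-uploads, whereas real systems such as PhotoDNA use perceptual hashes so that re-encoded or slightly altered copies still match:

    import hashlib

    # Illustrative shared blocklist for known-bad media (not any platform's
    # actual system). A cryptographic hash only catches byte-identical
    # re-uploads; production systems use perceptual hashes instead.
    blocklist = set()

    def fingerprint(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def report_and_block(data: bytes) -> None:
        """Called once moderators confirm the content violates policy."""
        blocklist.add(fingerprint(data))

    def allow_upload(data: bytes) -> bool:
        """Reject re-uploads whose fingerprint is on the shared list."""
        return fingerprint(data) not in blocklist

    bad_video = b"...bytes of a confirmed-violating video..."
    report_and_block(bad_video)
    print(allow_upload(bad_video))         # False: exact re-upload is blocked
    print(allow_upload(bad_video + b"x"))  # True: one changed byte evades SHA-256

Publishing such fingerprints, rather than the underlying media, is what would let other platforms block the same content without ever hosting it.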

mabjr (2 karma)

Platforms like MeWe.com have started to bill themselves as the "anti-Facebook" of social media. Do you think this sort of social network is something to look into?

TheVergeDotCom (3 karma)

Generally I don't think social networks are interesting until I hear about them from somebody who doesn't work there. Very few ever pass this test. — Casey

darkstormplayz (2 karma)

How do you think Facebook can moderate their platform without subjecting their employees to disturbing imagery?

TheVergeDotCom (3 karma)

They can't! But they are trying to make it easier by, for example, blurring out the faces of people in videos, shutting off audio, and turning images black and white. All of that can make life a little easier on the moderator. — Casey
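
As a toy illustration of that kind of preprocessing, here is a short Python sketch assuming the Pillow imaging library. It only shows the grayscale-and-blur idea; Facebook's actual pipeline (and any face-specific blurring, which would additionally need a face detector) is more involved:

    from PIL import Image, ImageFilter, ImageOps

    def soften(path: str, blur_radius: float = 4.0) -> Image.Image:
        """Reduce the visceral impact of an image before review:
        strip color, then blur fine detail."""
        img = ImageOps.grayscale(Image.open(path))
        return img.filter(ImageFilter.GaussianBlur(blur_radius))

    # Hypothetical usage:
    # soften("reported_image.jpg").save("softened_for_review.jpg")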

fumanshoes (1 karma)

How will moderation issues be impacted as Facebook pivots from newsfeed to smaller groups? What if any moderation is happening in Messenger? What recourse will there be for investigating illegal content shared by encrypted means? Is that one of the drivers for FB's move to private messaging -- to wipe its hands of messy moderation questions?

TheVergeDotCom (3 karma)

I think some of the worst stuff will become less visible — but it will still exist, people will still report it, and someone will have to look at it. Regarding illegal content, I think social networks will come under increasing pressure to use metadata to identify bad actors. It will be interesting to see how they respond. — Casey

thatpj (1 karma)

Any thoughts on doing a report about Reddit moderators, or the admins who (don't) listen to complaints?

TheVergeDotCom (5 karma)

I would definitely love to know more about Reddit moderation. My favorite subreddit is r/squaredcircle, and my few attempts at posting have all been rejected as low-effort posts. Can you imagine?! — Casey

BlackGoliath (1 karma)

Does what you saw make you think more of the process should be automated, or is a "human touch" required to properly vet posts and comments?

TheVergeDotCom (4 karma)

Human touch is required in lots of cases. There is a ton of ambiguity in what moderators look at, and AI is bad at ambiguity. — Casey

LeanQueen88 (1 karma)

Has Facebook taken any responsibility regarding your reporting? Do you see A.I. taking over this job in the future?

TheVergeDotCom (1 karma)

Facebook has said it will work to improve a number of the problems identified in reports from me and others. — Casey

u_asked_i_answered (-8 karma)

Using an accurate logarithmic growth model, how tall would a person be if they kept growing for 100 years?

TheVergeDotCom (4 karma)

OMG please do not ask a journalist math questions!! — Casey