Workers sue Meta, alleging watching brutal videos caused lasting trauma.

Meta Platforms Inc., previously Facebook Inc., is facing legal action, with recent lawsuits arguing that content moderators are subjected to extremely violent and malicious content. Employees claim this line of work has led to enduring psychological trauma.

Recent Legal Actions Against Meta

Meta Platforms Inc., formerly known as Facebook, Inc., is facing lawsuits over the effects of prolonged exposure to violent and graphic content on its workers. Employees argue that viewing such material has inflicted long-lasting psychological damage, including severe psychiatric disorders and trauma.

The lawsuit, filed by several former moderators, claims that poor working conditions and inadequate training compounded the harm of direct exposure to toxic content. The case centers on allegations of negligence and a failure to provide reasonable safety measures.

The case sheds light on the crucial role of content moderators, who are tasked with stemming the flood of graphic material on social media platforms.

They are responsible for purging the platform of inappropriate content, including explicit images, graphic violence, and hate speech. However, routine exposure to such disturbing material can take a heavy toll on their mental health.

Existing Policies and Employee Protection

The company states that it has established clear rules and guidelines for what users may upload. The plaintiffs consider these insufficient, arguing that guidelines and user agreements often fail to prevent users from posting violent or explicit content.

Once such content is published and flagged, moderators spend hours reviewing it. The job requires them to continuously view and judge material that can be extremely distressing.

In response to these allegations, Meta Platforms has defended its efforts to ensure employee safety. The company insists that it provides competitive wages, the option to work from home, mental health resources, and support for its moderation staff.

Despite this, critics argue that the mental health resources on offer are insufficient and that the company needs to do more to protect its moderators.

The Unending Spiral of Trauma

In the lawsuit, plaintiffs presented evidence in the form of personal accounts of their distressing experiences. One content moderator described nightmares, paranoia, and other symptoms of post-traumatic stress disorder (PTSD) stemming from the content they reviewed. Others reported being haunted by graphic images long after their shifts ended.

There is a growing consensus that this trauma is all but inevitable. Moderators exposed to harmful content for eight hours a day face a heightened risk of serious mental health conditions. Psychologists agree that frequent exposure to distressing material can trigger depression, anxiety, and PTSD.

Over time, the distressing images they view can darken their view of humanity.

Although some may argue that moderators' trauma cannot be equated with that experienced by war veterans or accident survivors, psychologists maintain that the symptoms these workers report are consistent with PTSD diagnoses.

Industry-Wide Precedents

While efforts to shield employees from these mental health impacts are underway, the need for change is urgent. The case against Meta is not an isolated one. Across Silicon Valley, technology companies and their mental health policies are under scrutiny.

Microsoft previously faced a lawsuit in which two former employees claimed they developed PTSD after being required to view illegal images involving minors. With major technology companies coming under fire for similar reasons, the need for industry-wide change is evident.

Just as importantly, the mental health crisis among content moderators is not confined to Meta's own employees. Across the industry, the same issues affect moderators who work for third-party contracting firms.

This industry-wide pattern means that any solution must be implemented broadly, not only within Meta but across the entire technology sector and its contractors.

Possible Solutions to the Crisis

Debate is emerging over viable solutions. One possibility is to spread the responsibility for moderating content more widely, reducing the burden on any one individual. This could mean hiring more moderators and thus lessening each employee's load.

However, critics of this approach argue that it would simply distribute the harm among a larger group of people. Instead, some call for technology to shoulder the burden of filtering inappropriate content.

While Artificial Intelligence (AI) and Machine Learning (ML) algorithms can provide some relief, critics argue that these technologies still struggle to catch nuanced or context-specific harmful content.

Consequently, tackling this crisis effectively will likely require improving working conditions, offering better psychological support, and adopting technological solutions that can alleviate the current moderation load.
