In response to the #MeToo movement, companies across the United States and Europe are using Artificial Intelligence (AI) to detect workplace harassment that occurs over digital communications. Is this an effective solution for reducing gender-based discrimination?
Leora Eisenstadt, associate professor of legal studies and director of the Center for Ethics, Diversity and Workplace Culture, researches the impact of moving away from a human reporting system for discrimination. While digital communication is increasingly a venue where workplace harassment occurs, Eisenstadt finds that using AI fails to address workplace culture at its core and leaves both employers and employees at risk.
Many AI algorithms are highly imperfect at catching legally actionable discrimination, leaving employers who claim to monitor communication platforms liable for the harassment the technology misses. Meanwhile, an AI-driven reporting system can cost harassed employees legal protections granted by Title VII: unless victims come forward themselves, they are not entitled to key protections given specifically to those who report harassment to leadership.
Leora F. Eisenstadt, #MeTooBots and the AI Workplace, University of Pennsylvania Journal of Business Law (forthcoming 2021).
Responding to the #MeToo movement, companies across the United States and Europe are beginning to offer products that use AI to detect discrimination and harassment in digital communications. These companies promise to outsource a large component of the Equal Employment Opportunity (EEO) compliance function to technology, preventing the financial costs of toxic behavior by using AI to monitor communications and report anything deemed inappropriate to employer representatives for investigation. Highlighting the problem of underreporting of sexual harassment and positing that many victims do not come forward out of fear of retaliation, the makers of these “#MeTooBots” propose to remove the human element from reporting and rely on AI to detect and report unacceptable conduct before it contaminates the workplace.
This new technology raises numerous legal and ethical questions relating to both the effectiveness of the technology and the ways in which it alters the paradigm on which anti-discrimination and anti-harassment doctrine is based. First, the notion that AI is capable of identifying and parsing the nuances of human interactions is problematic, as are the implications for underrepresented groups if their linguistic styles are not part of the AI’s training data. More complicated, however, are the questions that arise from the technology’s attempt to eliminate the human reporter: (1) How does the use of AI to detect harassment impact employer liability and available defenses, given that the doctrine has long been based on worker reports? (2) How does this technology impact alleged victims’ vulnerability to retaliation when incidents may be detected without a victim’s report? (3) What is the impact on the power of victim voice and autonomy in this system? and (4) What are the overall consequences for organizational culture when this type of technology is employed?
This Article examines the use of AI in EEO compliance and considers whether the elimination of human reporting requires a reconsideration of the U.S. approach to discrimination and harassment. Appearing on the heels of revelations about the use of non-disclosure agreements and arbitration clauses to silence victims of sexual harassment, this Article posits that the use of AI to detect and report improper communications, an innovation that purports to help eradicate workplace harassment, may in reality be problematic for employers and employees alike, even functioning as a new form of victim abuse. Lastly, the Article considers the difficult work of creating open, healthy workplace cultures that encourage reporting, and the consequences of outsourcing this work to artificial intelligence. Rather than rejecting what may be an inevitable move toward incorporating AI solutions in the workplace, this Article suggests more productive uses of AI at work and adjustments to employment discrimination doctrine to better prepare for an AI-dependent world.