Artificial intelligence (AI) is rapidly transforming the workplace, from hiring to employee monitoring, promotion, and even termination. Hilke Schellmann, an Emmy Award-winning investigative reporter and assistant professor of journalism at New York University, examines the accountability and impact of AI in the workplace in her book “The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now.”
Schellmann’s investigation reveals that most Fortune 500 companies employ AI tools at various stages of the hiring pipeline, including resume screening, one-way video interviews, and even video games designed to assess personality traits and cognitive abilities. While these tools promise to streamline hiring and cope with overwhelming volumes of applications, Schellmann raises serious concerns about their fairness and potential for discrimination.
In one experiment, Schellmann tested a tool that claims to assess personality and job suitability from voice samples. When she answered in German, reading aloud from a Wikipedia entry on psychometrics, the tool still scored her a 73% match for the job, even though its transcript of her answers was gibberish. The result casts doubt on the accuracy and fairness of such tools, particularly for candidates with accents, speech disabilities, or different language backgrounds.
Schellmann emphasizes the need for skepticism, transparency, and explainability in the development and use of AI tools for hiring and employee management. Developers must be able to explain why an applicant was rejected or advanced, and there should be accountability measures in place to ensure responsible and ethical use of these technologies.
As the use of AI in human resources continues to grow, it is crucial to address these issues and ensure that the tools are fair, unbiased, and genuinely fit for the high-stakes decisions that shape people’s lives and careers.
Schellmann’s investigation serves as a wake-up call for companies and developers to prioritize ethical and responsible AI practices in the workplace and to foster a culture of transparency and accountability.