Amsterdam, March 11, 2024
In "Should computers be in charge?", experts, among them two researchers from the APAC team, Shuai Yuan and Almasa Sarabi, weigh in on whether AI should take the reins in employment practices. We are in the age of digitisation, and workplaces are drowning in data. In theory, this data goldmine could train algorithms to be the perfect office managers. But AI systems, for all their sophistication, have a knack for learning from the past, and not necessarily in a good way. Imagine a robot HR manager that only knows how to repeat the mistakes of its human predecessors. We make three main points.
First, we emphasise caution in integrating AI into employment practices, given concerns about perpetuating bias and the limited effectiveness of current AI tools. Imagine that you are implementing an AI-powered recruitment tool to screen job applicants. The tool uses historical data to assess candidates' suitability for a position based on factors such as education, experience, and skills. Upon closer examination, you discover that the historical data used to train the AI model predominantly consists of resumes from male candidates, resulting in a bias towards male applicants. What can you do? Make sure that the AI model learns from a more representative sample, including different genders, ethnicities, ages, and socio-economic statuses. Implement mechanisms to detect and correct biases in the AI model throughout its lifecycle. You can, for example, introduce weighting mechanisms to account for underrepresented groups or remove irrelevant features that contribute to bias. Finally, establish clear ethical guidelines, outlining principles for fairness, transparency, and accountability, for the use of AI, and ensure that there is oversight from relevant stakeholders, including HR professionals, data scientists, and legal experts.
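One way to read "weighting mechanisms to account for underrepresented groups" is inverse-frequency sample weighting, where each applicant's record counts more the rarer their group is in the training data. This is a minimal sketch of that idea, not a complete debiasing method; the group labels and the skewed applicant pool are hypothetical, and the n/(k x count) formula is just one common convention (it gives every group the same total weight).

```python
from collections import Counter

def inverse_frequency_weights(groups):
    """Assign each sample a weight inversely proportional to the
    frequency of its group, so underrepresented groups count more
    during model training."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    # n / (k * count) gives each group an equal share of total weight
    return [n / (k * counts[g]) for g in groups]

# Hypothetical applicant pool, heavily skewed towards one group
groups = ["male"] * 8 + ["female"] * 2
weights = inverse_frequency_weights(groups)
# Each of the 8 male records gets weight 0.625; each of the
# 2 female records gets weight 2.5, so both groups weigh 5.0 in total.
```

In practice such weights would be passed to a learning library's training routine (many accept per-sample weights), and reweighting would be combined with bias audits rather than used on its own.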
Second, we highlight challenges in using AI systems to optimise workforce management. We caution against prioritising short-term productivity at the expense of long-term employee well-being. Imagine again that you are implementing AI-powered scheduling software to optimise shift allocation for your workforce. The AI system analyses historical data, employee preferences, and business needs to generate optimised schedules. You soon observe negative outcomes, such as heightened stress levels among your employees. What can you do? Foster a sense of ownership and empowerment among your workforce by implementing feedback mechanisms to gather input from your employees regarding their preferences, availability, and concerns related to scheduling. Be transparent about the AI-driven scheduling algorithm and offer flexible scheduling options where possible. Continuously evaluate the impact of AI-driven scheduling on employee well-being and organisational performance, using metrics such as turnover rates, employee satisfaction scores, and productivity levels.
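The "continuously evaluate" step can be as simple as comparing a well-being metric before and after the scheduling change and flagging a meaningful drop. The sketch below assumes hypothetical pulse-survey satisfaction scores on a 1-to-5 scale and an illustrative 5% tolerance; the function name, threshold, and data are all made up for illustration.

```python
def wellbeing_check(before, after, max_drop=0.05):
    """Compare average satisfaction before and after a scheduling
    change; flag the rollout if the relative drop exceeds max_drop."""
    avg_before = sum(before) / len(before)
    avg_after = sum(after) / len(after)
    drop = (avg_before - avg_after) / avg_before
    return {"before": avg_before, "after": avg_after, "flagged": drop > max_drop}

# Hypothetical pulse-survey scores (1-5) from before and after rollout
result = wellbeing_check([4.2, 4.0, 4.4, 4.1], [3.6, 3.5, 3.8, 3.7])
# Average satisfaction falls from 4.175 to 3.65, a drop of about
# 12.6%, so the change is flagged for review.
```

The same pattern extends to turnover rates or productivity levels; the point is that the AI-generated schedule is treated as an experiment to be monitored, not a decision that is final once deployed.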
Lastly, we draw attention to the EU AI Act, which classifies AI used in employment as high-risk. Imagine a financial institution implementing an AI-powered credit scoring system to assess loan applications, another use case the Act classifies as high-risk. The system uses machine learning algorithms to analyse applicants' financial data, credit history, and other relevant factors to determine creditworthiness. However, with the introduction of the EU AI Act, your organisation must ensure that its AI-driven decision-making processes comply with the new regulations. What can you do? Conduct a comprehensive review of the organisation's AI-driven decision-making processes to ensure compliance with the requirements of the EU AI Act. This includes assessing how the AI algorithms used in credit scoring are trained, validated, and monitored. You should also disclose the criteria used by the algorithms, the data sources utilised, and the potential implications for applicants' rights and interests.
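Disclosure of criteria and data sources can be operationalised by logging a machine-readable record alongside every automated decision. This is a hypothetical sketch of such a record, not a format prescribed by the EU AI Act; the field names, applicant ID, and data sources are invented for illustration.

```python
import json

def decision_record(applicant_id, score, features, sources):
    """Build a machine-readable disclosure record for one automated
    credit decision, listing the criteria and data sources used."""
    return {
        "applicant_id": applicant_id,
        "score": score,
        "criteria": sorted(features),        # which factors the model used
        "data_sources": sources,             # where the input data came from
        "human_review_available": True,      # applicants can contest the outcome
    }

record = decision_record(
    "A-001", 0.82,
    ["income", "credit_history", "debt_ratio"],
    ["internal CRM", "credit bureau"],
)
print(json.dumps(record, indent=2))
```

Keeping such records per decision gives auditors and affected applicants a concrete artefact to inspect, rather than a general statement that "the model considers many factors".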
So, should computers be in charge? Maybe. We have two very practical recommendations for all of you who work with data in your everyday work environments:
As practitioners, prioritise ethical considerations in AI implementation, ensuring that algorithms are trained on diverse and unbiased datasets to mitigate the risk of perpetuating biases. Additionally, regular audits by third-party experts can help identify and address any biases or inaccuracies in AI systems.
While AI can enhance efficiency, prioritise human-centric approaches to workforce management. This includes maintaining open lines of communication with your employees, considering their individual circumstances, and leveraging AI as a tool to support, rather than replace, human decision-making. Additionally, investing in employee training and development to enhance digital literacy and adaptability can help ensure a smooth transition to AI-enabled practices while preserving employee well-being and job satisfaction.
As our AI pals become more sophisticated, we need to be savvy about how we use them. Trust is key, and audits by third parties might be the secret sauce to keeping our robot friends in check.
Almasa Sarabi is an Assistant Professor of HR at the Amsterdam Business School, University of Amsterdam. Her research focuses on strategic HRM, diversity, and careers. Reach out with comments and questions to a.sarabi@uva.nl.