Erdinç Durak is a PhD candidate at the Amsterdam Business School, University of Amsterdam. In this blog, he explores how Explainable AI (XAI) can make HR decision-making more transparent and effective.

Unlocking HR potential: Enter explainable AI

by Erdinç Durak

Machine learning and artificial intelligence (AI) have been transforming industries for years, and HR is no exception. From predicting employee turnover to supporting hiring processes, AI models offer unprecedented practical value. Yet, despite their predictive power, many of these models remain black boxes; in other words, their internal decision-making processes are neither transparent nor easily understood. This opacity can create challenges: unfair or biased outcomes, lack of trust among HR teams, and hesitation to act on AI-driven recommendations.

These challenges are not unique to HR. Similar concerns have been raised in domains such as banking and healthcare, where the stakes of algorithmic decisions are also high. It is precisely in response to concerns about transparency and explainability across multiple fields that the discipline of Explainable AI (XAI) has emerged. XAI refers broadly to methods and approaches that make AI models more understandable to humans, either by using inherently interpretable models such as regression or rule-based models, or by explaining the outputs of complex models through techniques like SHAP or counterfactual explanations. In people analytics, XAI may help organisations not just predict outcomes, but also understand them, communicate them effectively, and act based on insights that stakeholders trust. For example, in hiring processes, XAI can clarify which attributes drive candidate evaluations, or reveal why rejected candidates were screened out by the model, making the process more transparent and ultimately helping organisations ensure fairness.

Glass-box models: High performance with transparency

One branch of XAI focuses on building high-performing yet inherently interpretable models, sometimes called “glass-box” models. These approaches, such as regression models and decision trees, combine predictive power with transparency and are especially valuable for HR, where decisions often require not just predictions but explanations. For instance, as the sketch after this list illustrates, decision trees can:

- Highlight key predictors: Identify which employee- or organisational-level features most strongly drive outcomes such as employee turnover.
- Capture complex relationships: Reveal non-linear or interacting effects between variables. For instance, decision trees can show how the combination of high job demands and low autonomy might explain decreases in performance more clearly than either factor alone.
- Integrate contextual factors: Consider variables such as tenure, role, or team context to show how outcomes differ across groups of employees. Work–life balance, for example, may emerge as a critical predictor of well-being for younger employees only.
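
As a minimal sketch of what such a glass-box model looks like in practice, the snippet below fits a shallow decision tree to a small synthetic dataset and prints the learned rules as plain if/else statements. The feature names (job_demands, autonomy, tenure) and the data-generating rule are illustrative assumptions, not results from any real HR dataset.

```python
# Minimal glass-box sketch: a shallow decision tree on synthetic HR data.
# Feature names and the toy data-generating rule are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.integers(1, 11, n),   # job_demands (1-10)
    rng.integers(1, 11, n),   # autonomy (1-10)
    rng.integers(0, 21, n),   # tenure (years)
])
# Toy assumption: turnover risk is high when demands are high AND autonomy is low.
y = ((X[:, 0] >= 7) & (X[:, 1] <= 4)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The fitted tree can be printed as human-readable decision rules.
print(export_text(tree, feature_names=["job_demands", "autonomy", "tenure"]))
```

Because the entire model is a handful of readable splits, an analyst can check each rule against domain knowledge before acting on it.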

From a practical perspective, this means HR teams can move beyond one-size-fits-all conclusions. Inherent interpretability allows for understanding not only what drives performance or engagement, but also how these drivers operate across different employee profiles. Organisations can identify critical patterns, clusters, and thresholds that help design targeted interventions, learning and development programs, and policies.

Post-hoc explainers: Making complex models transparent

The second branch of XAI focuses on explaining the predictions of trained models, including highly complex ones. Techniques in this area do not require us to fully understand a model’s inner workings; instead, they work with the model’s inputs and predictions to show why particular outputs are produced and which factors are most influential.
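
SHAP, mentioned earlier, is one widely used post-hoc technique: it attributes each individual prediction to the input features. The sketch below shows the basic workflow; it assumes the open-source shap package is installed, and the data and feature meanings are purely synthetic illustrations.

```python
# Post-hoc sketch: attributing a boosted-tree model's predictions with SHAP.
# Assumes the `shap` package is installed; data and feature meanings are synthetic.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))                      # e.g. engagement, workload, tenure
y = (X[:, 0] - X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)   # the "black box"

explainer = shap.TreeExplainer(model)            # model-specific explainer
shap_values = explainer.shap_values(X[:5])       # per-feature contributions
print(shap_values)                               # rows: employees, columns: features
```

Each row of the output decomposes one employee’s prediction into additive feature contributions, which can then be summarised or visualised for HR stakeholders.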

Counterfactual explanations are one such technique: for each individual data point, they identify the smallest changes in input features that would be enough to alter the model’s outcome. In other words, they reveal the minimal adjustments an employee’s profile would need to receive a different prediction. While the specifics depend on the dataset and model, these explanations provide three key business benefits (illustrated in the sketch after this list):

- Fairness and transparency: By examining the factors that drive predictions, HR can detect potential biases and ensure decisions are equitable across employee groups. In the context of selection decisions, these explanations can help confirm that discriminatory factors are not being used.
- Employee-tailored insights: Unlike traditional “one-size-fits-all” explanations, counterfactuals can highlight personalised paths for each employee. For instance, low organisational identity may be due to low engagement in one employee, while for another it may result from a high workload.
- Actionable decision-making: Managers can use these insights to design interventions, provide feedback, and plan training programs.
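
To make the idea concrete, the brute-force sketch below searches, for one employee, for the smallest single-feature change that flips a simple model’s prediction. It is purely illustrative: the data are synthetic, the model and feature meanings are assumptions, and dedicated counterfactual libraries (such as DiCE) search far more systematically.

```python
# Illustrative counterfactual sketch: smallest single-feature change that
# flips the model's prediction for one employee. Data and model are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(300, 2))            # e.g. engagement, workload
y = (X[:, 0] - 0.5 * X[:, 1] > 2).astype(int)    # toy "low turnover risk" label
model = LogisticRegression().fit(X, y)

def smallest_flip(x, model, step=0.1, max_delta=10.0):
    """Return (feature index, signed change) of the smallest single-feature
    perturbation that changes the model's predicted class for x."""
    original = model.predict(x.reshape(1, -1))[0]
    best = None
    for j in range(len(x)):
        for sign in (1.0, -1.0):
            for delta in np.arange(step, max_delta, step):
                x_new = x.copy()
                x_new[j] += sign * delta
                if model.predict(x_new.reshape(1, -1))[0] != original:
                    if best is None or delta < abs(best[1]):
                        best = (j, sign * delta)
                    break  # smallest delta in this direction found
    return best

employee = X[0]
print(smallest_flip(employee, model))            # e.g. feature 0 raised by some amount
```

The result reads as a personalised “what would need to change” statement for that employee, which is exactly the kind of insight the three benefits above rely on.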

Conclusion

Explainable AI is more than a technical innovation; it is a strategic enabler for HR. Glass-box models provide clarity and insight, helping organisations understand complex patterns and contextual influences. Post-hoc explainers allow even highly complex models to be transparent, fair, and actionable.

In practice, HR professionals are not expected to master the technical details of these methods themselves. Instead, organisations may choose to collaborate with or hire data scientists who can apply XAI techniques while HR leaders focus on interpreting and using the insights. Through this collaboration, HR professionals can move beyond prediction to understanding, designing interventions, and making data-driven decisions that employees and leaders can trust. Ultimately, XAI can make HR analytics not only smarter, but also more human-centred and impactful.

About the author

Erdinç Durak is a PhD candidate at the Amsterdam Business School, University of Amsterdam. His research explores the application of machine learning and explainable AI techniques to human resource management problems, such as person-environment fit and data-driven job matching.