AI in HR: Easy to sell, hard to do responsibly
Building fair and transparent AI in HR Tech: Moving beyond flashy features to responsible and measurable impact.
Looking back at Workday Rising last week and ahead to the HR Tech Conference this week, it's no surprise that artificial intelligence is everywhere. Riding the ChatGPT wave, vendors and service providers alike are pushing to be in the AI spotlight. There's no doubt that this new wave of AI will bring advanced capabilities to the market. However, in HR tech, we should aspire to more than flashy features and efficiency gains.
Walking up to any AI vendor's booth, you might start to think building a fair and transparent AI experience is easy. Most pitches promise fully compliant, unbiased, and transparent models, even for functionality that hasn't been built yet. While the intention is no doubt there, we should question whether vendors are ready to deliver.
The AI research of recent years is clear: removing harmful bias from models is hard. Models like ChatGPT have been found to mirror the presumed views and biases of their users, on top of underlying tendencies of their own. Other models, trained specifically not to see race or gender, were later found to still encode that information in hidden ways. It goes to show that retrofitting true fairness into a model after training is close to impossible.
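To make that "hidden information" finding concrete, here is a minimal sketch in Python of the probing technique researchers use to detect it. Everything here is illustrative: synthetic vectors stand in for a real model's internal representations, and the leak is planted by hand. The point is the method itself: if a simple classifier can recover a supposedly removed attribute from the representations, it was never really removed.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, dim = 2000, 32

# Hypothetical setup: a model was trained "blind" to gender, but correlated
# input features (job titles, word choice) can still leak the signal into
# its internal representations. Synthetic vectors stand in for those here.
gender = rng.integers(0, 2, size=n_samples)
embeddings = rng.normal(size=(n_samples, dim))
embeddings[:, 0] += 1.5 * gender  # the planted, "hidden" signal

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, gender, test_size=0.3, random_state=0
)

# The probe: if a linear classifier recovers gender from the representations
# well above chance (0.50), the attribute is still encoded somewhere inside.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Probe accuracy: {probe.score(X_test, y_test):.2f}")  # well above 0.50
```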
So what should we do instead? First, we need to shift our focus from individual models to the systems they form when combined: it's not about the individual pieces of AI functionality, but about the impact they have on people together. A well-designed system can often remove risk and improve performance. Second, that impact can and must be measured, especially with regard to disparate impact on specific groups, as the sketch below illustrates. When we call our products fair, that's a claim to back up with numbers. Third and last, vendors need to be transparent so customers can validate and test that their technology meets the bar.
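As one concrete way to put numbers behind a fairness claim, here is a minimal sketch of the disparate impact ratio, the measure behind the "four-fifths rule" used in US employment practice, where a group's selection rate below 80% of the reference group's flags potential adverse impact. The data and function names are hypothetical, not any vendor's API.

```python
from collections import Counter

def selection_rates(decisions):
    """Selection rate per group: share of candidates who were selected."""
    selected, totals = Counter(), Counter()
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += was_selected
    return {group: selected[group] / totals[group] for group in totals}

def disparate_impact(decisions, reference_group):
    """Each group's selection rate divided by the reference group's.

    Under the four-fifths rule of thumb, a ratio below 0.8 flags
    potential adverse impact that warrants further investigation.
    """
    rates = selection_rates(decisions)
    return {group: rate / rates[reference_group] for group, rate in rates.items()}

# Hypothetical screening outcomes: (group, 1 if advanced to interview).
decisions = [("A", 1)] * 48 + [("A", 0)] * 52 + [("B", 1)] * 33 + [("B", 0)] * 67
print(disparate_impact(decisions, reference_group="A"))
# {'A': 1.0, 'B': 0.6875} -> group B falls below the 0.8 threshold
```

The same kind of measurement applies at the system level: what matters is not whether any single model looks clean in isolation, but whether the combined pipeline produces skewed outcomes for real groups of people.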
These principles are well embedded in new regulations like the EU AI Act. For many vendors, these new rules will be painful to comply with. However, vendors that see responsible AI as a foundational element rather than an after-the-fact checkbox will fare much better. At TechWolf, we have focused on making our skills AI responsible and empowering from day one, which has shaped hundreds of design decisions over the past six years. Mapping our approach back to the upcoming rules and frameworks, it's more about expanding documentation than changing anything substantial. We've also launched additional tools to make it easy for customers to test for bias themselves.
In the coming years, buyers will carry increasing responsibility, both ethical and legal, for the AI they bring into their companies. That means they can no longer rely on unsubstantiated claims from vendors. After all: if a vendor can't explain its AI to you, how can you expect to explain it to a regulator?