How will the new EU AI Act impact your HR Tech? TechWolf joins the AI Pact to help organisations deal with the upcoming challenges.
TechWolf strengthens its commitment to responsible AI by joining the European Commission's AI Pact.
Under the new European AI Act, most AI applications in HR Tech fall into the high-risk category. This means enterprises and HR leaders using these tools in the EU must plan to comply with new rules, manage risks explicitly, and safeguard fairness and transparency. While the AI Act is now written into law, businesses have the next two years to adjust and comply.
TechWolf exists to make people flourish at work. In line with our purpose, we have consistently prioritised responsible AI since our founding in 2018. Building on this strong track record, we are proud to announce that today, TechWolf has officially joined the European Commission’s AI Pact. This move further strengthens our proactive commitment to fair AI and to helping our customers meet their new obligations as they implement AI for their skill-based approaches.
"The AI Pact encourages and supports organisations to plan ahead for the implementation of AI Act measures. [...] This initiative encourages organisations to proactively disclose the processes and practices they are implementing to anticipate compliance."
As part of joining the AI Pact, we have signed nine pledges (both technical and non-technical), under which we will work closely with the European Commission on industry-leading standards for compliance with the AI Act. Our aim is to make it easier for our customers to adopt AI responsibly, in line with these new regulations.
This work will build on the responsible AI strategy we have developed over the past year in collaboration with Ian Brown, previously Professor of Information Security and Privacy at the University of Oxford and now an independent consultant on internet regulation for governments and companies around the world. He commented:
"I’ve enjoyed working with a company that has taken a responsible approach to AI development from the start, and has been developing its approach in close partnership with its customers. They play a key role in using TechWolf’s tools to support their staff and organisational development in a fair and transparent way.” - Ian Brown, former Professor of Information Security and Privacy at the University of Oxford
TechWolf’s leading approach to fairness and compliance has been critical for customers looking to drive results for their employees and organisations responsibly. For the skill-based organisation, a responsible AI approach is not just about compliance. Rather, it is foundational to building the trust and participation that allow skills data to inform decisions and guide opportunities. Our customers know that we have taken these principles to heart, and they have been key partners to us on this journey.
By closely linking the security, compliance, and AI teams inside TechWolf and at customer organisations, we have developed a robust approach to responsible AI. It covers both the know-how and the practical tools customers need to raise the bar on their own AI practices.
Here are five things we have learned in the process and how they can benefit your organisation:
1. Data privacy as a mature foundation for your skills project – While AI regulations are still new, data privacy laws such as GDPR are well established, with over 130 countries outside the EU having GDPR-like legislation in place. An established tool such as the Data Protection Impact Assessment (DPIA) provides a powerful framework for setting the project up for success. TechWolf has developed expertise in supporting this process and provides readily available DPIA templates for organisations starting to work with skills data.
2. Transparency and ownership as guiding principles – Be transparent and communicate clearly about the goals of the project. TechWolf’s Skill Assistant allows employees and other stakeholders to manage their skill data, which is made easy through integrations with systems such as MS Teams and Slack. Giving employees real ownership is a key part of how we empower people, and further increases the validity of our data. Read more in TechWolf’s WEF article.
3. Ensure relevant and representative input data – The data scope of your skills project determines the completeness and timeliness of your skills data, as well as its coverage across your workforce. Throughout the deployment cycle, TechWolf’s Data Maturity Scan provides insights into the quality and suitability of the input data across all integrations. Our Skill Console provides an overview and helps you take data-improvement actions.
4. Establishing fairness is a continuous process – Addressing bias in your data is not a one-time fix but an ongoing effort. While our AI models are designed with measures to prevent unwanted bias (detailed in our technical documentation), we recognise that biases can also exist within your organisation’s data. That’s why TechWolf goes one step further by providing a bias testing toolbox, initially focused on compliance with New York City’s Local Law 144 against discrimination in hiring practices; a simplified illustration of this kind of check follows after this list. We are working with our customers to generalise this toolbox so it also covers new legislation such as the AI Act.
5. This is new for everyone – Even some of the largest global organisations are still setting up their AI governance function. Without solid processes, it is often difficult to know which questions to ask. In the spirit of the AI Pact, TechWolf will play a leading role in exchanging best practices across the industry.
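To make the fairness point above concrete, here is a minimal sketch of the kind of check a Local Law 144-style bias audit performs: comparing selection rates across demographic groups and flagging groups whose impact ratio falls well below that of the most-selected group. The function name, the 0.8 threshold (the common "four-fifths" rule of thumb, not a requirement of the law itself), and the sample data are illustrative assumptions, not TechWolf's actual toolbox.

```python
from collections import defaultdict

def impact_ratios(records, group_key="group", selected_key="selected"):
    """Compute the impact ratio per demographic group.

    Local Law 144-style audits compare each group's selection rate to the
    selection rate of the most-selected group. A ratio well below 1.0
    (commonly below the 0.8 "four-fifths" rule of thumb) flags potential
    adverse impact and warrants closer review.
    """
    totals, selected = defaultdict(int), defaultdict(int)
    for record in records:
        group = record[group_key]
        totals[group] += 1
        selected[group] += int(record[selected_key])

    rates = {g: selected[g] / totals[g] for g in totals}
    best = max(rates.values())
    if best == 0:
        # No group was selected at all; ratios are not meaningful.
        return {g: 0.0 for g in rates}
    return {g: rate / best for g, rate in rates.items()}


# Illustrative data only: whether candidates in each group were
# shortlisted by an AI-assisted screening step.
sample = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]

for group, ratio in impact_ratios(sample).items():
    flag = "review" if ratio < 0.8 else "ok"
    print(f"group {group}: impact ratio {ratio:.2f} ({flag})")
```

A real audit covers far more (intersectional categories, scored as well as binary outcomes, sample-size caveats, and independent review), but the impact-ratio comparison above is the core calculation such toolboxes automate and monitor over time.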
While the 2026 deadline for the AI Act may seem far away, any organisation working with any type of skills AI will do well to reflect on its approach now. A skill-based organisation requires a strong technological foundation that deeply embeds principles of transparency and responsibility. Beyond the many claims of “unbiased AI,” it’s time for vendors to prove they’re walking the talk. At TechWolf, we will continue to lead the way, and we’d love to take you along on this journey.