Interview: Mike Hudy, Ph.D., Chief Science Officer at Modern Hire

I recently caught up with Mike Hudy, Ph.D., Chief Science Officer at Modern Hire, to discuss the use of AI in the hiring process. Mike Hudy is an industry expert in predictive modeling using human capital data with more than two decades of experience in experiment design and talent analytics. He is skilled in deciphering the complexities and ambiguities of talent acquisition to create the practical, effective and satisfying solutions that clients and candidates deserve.

insideAI News: Do you believe that organizations’ obsession with AI may be their downfall, especially with continued news about companies creating AI platforms that reinforce bias in the hiring process?

Mike Hudy: While the obsession with AI may not be organizations’ downfall, as many types of AI are very powerful and effective, the lack of knowledge around certain AI platforms within the hiring landscape may be. For example, while the term AI is marketed as a differentiator for HR and hiring tools, the term is broad and encompasses a range of analytical capabilities, such as machine learning and deep learning. The excitement around AI in hiring has led to the development of many new tools that claim to make the hiring process quicker and more effective. That said, these tools aren’t always rigorously tested and applied in a human-first way, so HR teams are tasked with understanding which are effective and which are not. These teams should ask vendors to show proof of their product’s efficacy. If vendors are unable to provide proof or to show data documenting that their tools predict hiring outcomes and do so in a fair way, HR teams should avoid using that product.
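To make “proof of efficacy” concrete, one common form of evidence is criterion-related validity: how strongly a tool’s scores correlate with a later, job-relevant outcome such as a performance rating. The sketch below is a hypothetical illustration with made-up data and variable names; it is not Modern Hire’s methodology.

```python
# Hypothetical sketch of a criterion-related validity check: does the
# assessment score relate to a later, job-relevant outcome?
from scipy import stats

# Made-up data: one entry per hired candidate.
assessment_scores = [72, 85, 60, 90, 78, 66, 88, 74]
performance_ratings = [3.1, 4.2, 2.8, 4.5, 3.6, 3.0, 4.1, 3.4]

# Pearson correlation between the tool's score and the job outcome.
r, p_value = stats.pearsonr(assessment_scores, performance_ratings)
print(f"validity coefficient r = {r:.2f} (p = {p_value:.3f})")
```

In practice, this kind of evidence would come from a vendor’s validation study on a much larger sample, together with documentation of how the outcome measure was defined.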

insideAI News: Specifically, what do organizations need to keep in mind when adopting AI-powered technology, particularly in hiring?

Mike Hudy: Organizations need to approach AI with a human-first mindset. When leveraging this powerful technology in hiring, it needs to be done with the individual in mind: candidates, recruiters, and hiring managers. There are a few key things that we keep in mind at Modern Hire when using AI-powered technology, and we recommend every organization do the same:

  • AI must benefit both candidates and organizations: Much focus is placed on the benefit to the organization, namely efficiency gains and increased predictive power, but all too often the candidate is left behind. Organizations must consider how their AI tools are experienced by the candidate and look to provide benefits to them as well. One example is providing candidates with feedback on how they’ve done.
  • AI products must be transparent: It’s imperative that HR professionals ground their selection methods in the knowledge, skills, and abilities required for success in the job. At the same time, candidates must understand what they’re being evaluated on and how it relates back to the requirements of the job. With job relevancy as the foundation, transparency of AI becomes achievable.
  • AI product claims must be verifiable: HR leaders must challenge their vendors to provide robust documentation on how their AI products predict and achieve success. If a vendor claims AI predicts outcomes, ask which outcomes and whether those outcomes are relevant to on-the-job performance. More specifically, organizations should ask vendors where the data was collected, how much data was collected, how it was analyzed, how precise the prediction is, what was done to ensure fairness, and whether the data is meaningful as criteria.
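One concrete way U.S. employers examine the fairness question raised in the last point is an adverse impact check under the UGESP “four-fifths rule,” which flags any group whose selection rate falls below 80% of the highest group’s rate. The following is a minimal sketch using made-up group labels and counts; it is not Modern Hire’s implementation.

```python
# Illustrative sketch: adverse impact check under the UGESP four-fifths rule.
# Hypothetical hiring data; group labels and counts are made up for the example.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A"] * 100 + ["B"] * 80,
    "selected": [1] * 40 + [0] * 60 + [1] * 24 + [0] * 56,
})

# Selection rate per group = hires / applicants.
rates = df.groupby("group")["selected"].mean()

# Impact ratio: each group's rate relative to the highest-selected group.
impact_ratio = rates / rates.max()
print(rates)
print(impact_ratio)

# The four-fifths rule flags ratios below 0.80 as potential adverse impact.
flagged = impact_ratio[impact_ratio < 0.80]
print("Potential adverse impact for:", list(flagged.index))
```

A ratio below 0.80 does not by itself prove unfairness, but it is the kind of documented, checkable evidence a vendor should be able to produce on request.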

insideAI News: Can you outline why AI-powered technology must be linked to the knowledge, skills, and abilities required for success in the job before it can be implemented and leveraged in hiring?

Mike Hudy: At its core, the goal of using AI in hiring and within the HR function is to enhance the candidate, recruiter, and hiring manager experience. The main goal is to give all parties involved better information and insights on whether a specific role is a mutual fit. With that in mind, linking AI tools to the knowledge, skills, and abilities required for success in any given job is critical for a few reasons. First, it is simply best practice to ensure the job relevancy of any candidate evaluation methodology you’re using, and this includes pre-screen questions, interviews, pre-employment assessments, and AI tools too. Furthermore, it’s important that candidates are aware of how data is being used to evaluate them during the process. If candidates can understand the rational link between what they’re being evaluated on and the skills required to perform the job they’re considering, then the AI tools have a ‘felt fair’ quality that enhances the candidate experience. Trouble begins when candidates are unaware of how the technology is being used or, even worse, perceive that they’re being evaluated on non-job-relevant factors like facial expressions, gender, or ethnicity.

insideAI News: Can you detail how organizations can go about developing an adaptable AI guidebook that holds employees to ethical AI standards and evolves as the industry changes?

Mike Hudy: It’s important to recognize that technology and AI have outpaced the well-established principles and guidelines around hiring. As guidelines, best practices, and regulations are being developed, organizations should establish a clear, concise, and transparent record of how they are using AI within their organization. As AI continues to fundamentally transform all aspects of the enterprise, it’s important to set public expectations across all stakeholders, including legal, compliance, IT, and HR. By coming forward and being transparent about the technology used, organizations and employees can hold themselves accountable as AI and its applications continue to grow and develop.

insideAI News: How has Modern Hire developed its own roadmap for using AI internally and externally?

Mike Hudy: As Modern Hire introduces new AI technology and innovation, we will never forget that there is a human being at the other end of our technology looking for their next job. We released our AI Code of Ethics to show clients, partners, candidates, employees, and the market how we’re driving innovation in our space by developing technology that is job-relevant, fair, and built on trusted science. Our pillars are centered on the following: any application or use of AI in hiring must benefit the individual, operate transparently, be verifiable, and be fair.

Additionally, when possible, AI research findings should be published within the academic community. We also closely adhere to the authoritative guidelines and laws that govern employee selection, including the Uniform Guidelines on Employee Selection Procedures (UGESP) and the Society for Industrial and Organizational Psychology (SIOP) Principles for the Validation and Use of Personnel Selection Procedures, which support scientific research on new and emerging employee selection techniques and technologies. In the context of AI, we adhere to the Organisation for Economic Co-operation and Development’s (OECD) Principles on AI and the Universal Guidelines for Artificial Intelligence, which broadly require that AI is used to help humans, operates transparently, and is robust and secure.
