The Biden Administration recently introduced a Blueprint for an AI Bill of Rights (Making Automated Systems Work for the American People) that outlines how the public should be protected from algorithmic systems and the harms they can produce. The document provides an important framework for how government, technology companies, and citizens can work together to ensure more accountable AI.
Additionally, a recent interdisciplinary study from NYU Tandon researchers explores the issue in greater depth. The study reveals how resume format, LinkedIn URLs and other unexpected factors can influence AI personality prediction and affect hiring.
The NYU Tandon study, whose team spans computer and data scientists, a sociologist, an industrial psychologist and an investigative journalist, finds that AI hiring tools such as Humantic AI and Crystal show substantial instability on key facets of measurement and cannot be considered valid testing instruments.
For example, the products frequently compute different personality scores when the same applicant submits a LinkedIn profile, a PDF resume, or a raw-text resume. Other notable findings include evidence of persistent — and often incorrect — data linkage by Humantic AI, such as linking an email address and a LinkedIn URL that appear in a resume, and then silently disregarding resume information when computing the personality score.
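The kind of instability the study describes can be illustrated with a minimal sketch. The scores below are entirely made up for illustration (real tools return vendor-specific output, and this is not the study's methodology): the idea is simply that if the same applicant yields noticeably different scores depending on input format, the instrument is not measuring the applicant consistently.

```python
# Hypothetical illustration: checking whether a personality-prediction
# tool gives the same Big Five scores for one applicant across formats.
# All numbers are invented for the sketch.

FACETS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]

# Invented scores for a single applicant, keyed by input format.
scores_by_format = {
    "linkedin": {"openness": 0.71, "conscientiousness": 0.62,
                 "extraversion": 0.55, "agreeableness": 0.48,
                 "neuroticism": 0.33},
    "pdf":      {"openness": 0.52, "conscientiousness": 0.70,
                 "extraversion": 0.41, "agreeableness": 0.60,
                 "neuroticism": 0.45},
    "raw_text": {"openness": 0.69, "conscientiousness": 0.64,
                 "extraversion": 0.58, "agreeableness": 0.50,
                 "neuroticism": 0.36},
}

def max_facet_gap(scores):
    """Return the facet with the largest cross-format disagreement,
    along with the size of that gap."""
    gaps = {}
    for facet in FACETS:
        vals = [s[facet] for s in scores.values()]
        gaps[facet] = max(vals) - min(vals)
    worst = max(gaps, key=gaps.get)
    return worst, gaps[worst]

facet, gap = max_facet_gap(scores_by_format)
print(f"Largest cross-format gap: {facet} ({gap:.2f})")
```

For a stable instrument, these gaps should be near zero; the study's point is that in practice they were not.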
The team’s researchers include professors Mona Sloane and Julia Stoyanovich from the NYU Center for Responsible AI.