As artificial intelligence continues to develop and finds its way into more areas of everyday life, some state legislatures feel an urgent need to regulate its use in the hiring process.
Artificial intelligence, commonly known as AI, is being adopted by a quarter of companies in the United States, according to the 2022 IBM Global AI Adoption Index, an increase of more than 13% year over year. Many are beginning to use it in the hiring process.
State laws have not kept pace. Only Illinois, Maryland and New York City require employers to first seek consent before using AI in certain parts of the hiring process. A few states are considering similar legislation.
“Legislators are critical, and as always, they are late to the party,” said Maryland Republican Del. Mark Fisher. Fisher sponsored the law, which came into force in 2020 and regulates the use of facial recognition programs in hiring. It prohibits an employer from using certain facial recognition services – such as those that compare applicants’ faces to external databases – during the application process unless the applicant consents.
“Technology is the first to innovate, and then it always seems like a good idea until it’s not,” Fisher said. “Then lawmakers step in and try to regulate things as best they can.”
Even if AI developers want to innovate as quickly as possible, with or without legislation, both developers and policymakers need to think about the impact of their decisions, says Hayley Tsukayama, senior legislative activist at the Electronic Frontier Foundation, which advocates for civil liberties on the internet.
For policymakers to create effective laws, developers need to be transparent about what systems are being used and open to considering potential problems, Tsukayama said.
“This is probably not exciting for people who want to move faster or who want to put these systems in place in their workplace now or already have them in their workplace now,” she said. “But I think it’s really important for policymakers to talk to a lot of different people, especially people who are going to be affected by this.”
AI in recruitment
AI can support the hiring process by evaluating resumes, scheduling interviews with applicants and collecting data, according to an analysis by Skillroads, a provider of professional resume writing services that uses AI.
Some members of Congress are also trying to act. The proposed American Data Privacy and Protection Act would set rules for artificial intelligence, including risk assessments and the general use of AI. It would also cover data collected during the hiring process. The bill, introduced last year by Democratic U.S. Rep. Frank Pallone Jr. of New Jersey, is currently before the U.S. House Energy and Commerce Committee.
The Biden administration last year issued the “Blueprint for an AI Bill of Rights,” a set of principles intended to guide organizations and individuals in the design, use and deployment of automated systems, according to the document.
Meanwhile, legislators in some states and municipalities have been working on developing appropriate guidelines.
Maryland, Illinois and New York City are the only places with laws explicitly addressing the use of artificial intelligence in the hiring process. They require companies to inform job seekers when it is being used for certain positions and to ask for their consent before proceeding, according to data from Bryan Cave Leighton Paisner, a global law firm that advises clients on commercial litigation, finance, real estate and more.
According to the New York Times, California, New Jersey, New York and Vermont have also considered bills that would regulate AI in hiring systems.
Facial recognition technology is used by many federal agencies, including those responsible for cybersecurity and law enforcement, according to the U.S. Government Accountability Office. Some industries use it as well.
Artificial intelligence can link facial recognition programs to applicant databases in seconds, Fisher said, citing this as the reason for his bill.
His goal, he said, is to craft a narrowly focused measure that could open the door to future AI-related legislation. The bill, which became law in 2020 without being signed by then-Gov. Larry Hogan, a Republican, covers only the private sector, but Fisher said he would like to expand it to public employers.
Legislative challenges
Policymakers’ understanding of artificial intelligence, particularly its impact on civil liberties, is almost nonexistent, says Clarence Okoh, senior policy adviser at the Washington DC-based nonprofit Center for Law and Social Policy (CLASP) and a Just Tech Fellow at the Social Science Research Council.
As a result, he said, companies that use AI often regulate themselves.
“Unfortunately, many AI developers and vendors have very effectively sidelined conversations with policymakers about regulating AI and mitigating social impacts,” Okoh said. “And so, unfortunately, there is a lot of interest in developing self-regulating systems.”
According to Okoh, some self-regulatory measures include audits or compliance measures based on general guidelines such as the Blueprint for an AI Bill of Rights.
The results have raised some concerns. Some organizations operating under their own guidelines have used AI recruitment tools that showed bias.
In 2014, a group of developers at Amazon began building an experimental, automated program to screen applicants’ resumes for top talent, according to a Reuters report. But in 2015, the company discovered that the system had taught itself that male applicants were preferable.
People close to the project told Reuters that the experimental system was trained to filter applicants by observing patterns in resumes sent to the company over a 10-year period – most of which were from men. Amazon told Reuters that the tool had “never been used by Amazon recruiters to evaluate applicants.”
However, some companies maintain that AI is helpful and that they hold it to strict ethical rules.
Helena Almeida, vice president and managing counsel at ADP, a human resource management software company, says the company’s approach to using artificial intelligence in its products follows the same ethical guidelines as before the advent of this technology. Regardless of the legal requirements, Almeida said, ADP sees it as an obligation to go beyond the basic framework to ensure its products do not discriminate.
Artificial intelligence and machine learning are used in several of ADP’s hiring support services, and many current laws already apply to the world of artificial intelligence, she said. ADP also offers its clients certain services that use facial recognition technology, according to its website. As the technology continues to develop, ADP has adopted a number of principles to govern its use of AI, machine learning and more.
“You can’t discriminate against a certain population without AI, and you can’t do it with AI,” Almeida said. “So that’s a key part of our framework and the way we look at bias in these tools.”
One way to avoid problems with AI in the hiring process is to maintain human involvement, from product design to regular monitoring of automated decisions.
Samantha Gordon, program director at TechEquity Collaborative, an organization that advocates for tech workers, said that in situations where machine learning or data collection is used without human intervention, there is a risk that the machine will be biased against certain groups.
For example, HireVue, a platform that helps employers collect video interviews and assessments from job seekers, announced the removal of its facial analysis component in 2021 after an internal review found that the system had a lower correlation with job performance than other elements of algorithmic assessment, according to a company press release.
“I think this is something that can be understood without any computer science knowledge,” said Gordon. The acceleration of the recruitment process, said Gordon, leaves room for error. This is where lawmakers need to intervene.
And lawmakers from both parties, Fisher said, believe companies should disclose their work.
“I would hope that people would generally want more transparency and disclosure in the use of this technology,” Fisher said. “Who is using this technology? And why?”