New York City has taken a pioneering and targeted approach by establishing regulations that govern how companies use artificial intelligence (A.I.) in workforce-related decisions. While European lawmakers are finalizing an A.I. act, the Biden administration and congressional leaders have their own plans to rein in artificial intelligence. Sam Altman, the chief executive of OpenAI, maker of the popular A.I. system ChatGPT, recently recommended in Senate testimony that a federal agency be created with oversight and licensing authority over A.I. The topic has also come up at the Group of 7 summit in Japan.
Amid these sweeping plans and pledges, New York City has emerged as a modest leader in A.I. regulation. In 2021, the city government passed a law targeting a critical application of the technology: hiring and promotion decisions. Enforcement begins in July. The law requires companies that use A.I. software in hiring to notify candidates that an automated system is being used. It also requires companies to have independent auditors check the technology annually for bias. Candidates can request and be told what data is being collected and analyzed. Companies that violate the law will be fined.
New York City’s focused approach illustrates an important facet of A.I. regulation. Experts say the broad principles being developed by governments and international organizations must be translated into detailed definitions and specifications: who is affected by the technology, what benefits and harms it brings, and who has the authority to intervene, and how.
Julia Stoyanovich, an associate professor at New York University and director of its Center for Responsible A.I., emphasizes that concrete use cases are essential to answering those questions. Yet even before it takes effect, the New York City law has drawn criticism from both public interest advocates and business groups, underscoring the challenge of regulating A.I., which is advancing rapidly and stirring both enthusiasm and anxiety about its unknown consequences.
Ms. Stoyanovich is concerned that the city’s law contains loopholes that may weaken it. Still, she says, having a law in place is better than having no regulation at all, and the process of enforcing it will provide valuable lessons.
While the law applies only to companies with workers in New York City, labor experts expect it to influence practices nationally. Several other states, including California, New Jersey, New York, and Vermont, along with the District of Columbia, are also working on laws to regulate A.I. in hiring. Illinois and Maryland have already enacted laws restricting the use of specific A.I. technologies, particularly for workplace surveillance and candidate screening.
The New York City law emerged from a clash of sharply conflicting viewpoints. The City Council passed it during Mayor Bill de Blasio’s final days in office. The city’s Department of Consumer and Worker Protection then held rounds of hearings and collected public comments, more than 100,000 words of them. Critics argue that the resulting rules are overly sympathetic to business interests.
Alexandra Givens, the president of the Center for Democracy & Technology, a policy and civil rights organization, believes that the law falls short of what it could have been, even as it may become the national template for A.I. regulation. She criticizes its narrow definitions, which require an audit of A.I. software only if it is the primary factor in a hiring decision or overrules a human one. That excludes the most common arrangement, in which automated software screens candidates but a hiring manager makes the final choice. Givens also notes that the law covers only bias based on sex, race, and ethnicity, leaving out discrimination against older workers or people with disabilities.
City officials say the law was narrowed to keep it focused and enforceable. The City Council and the worker protection agency weighed many perspectives, including those of public-interest activists and software companies, in trying to balance innovation against potential harm.
New York City faces the challenge of addressing new technology within the framework of federal workplace laws dating to the 1970s. The Equal Employment Opportunity Commission’s central rule is that employers may not use any selection method that has a “disparate impact” on a legally protected group, such as women or minorities.
Businesses have voiced criticism of the law. In a filing this year, the Software Alliance, a trade group that includes Microsoft, SAP, and Workday, argued that independent audits of A.I. were not feasible due to the nascent state of the auditing landscape, which lacks standards and oversight bodies.
However, this nascent field is also a market opportunity. The A.I. audit business is expected to grow, attracting law firms, consultants, and start-ups. Companies that sell A.I. software for hiring and promotion decisions have generally embraced regulation; some have already undergone outside audits and see the requirements as a potential competitive advantage, evidence that their technology expands the pool of job candidates and increases opportunity for workers.
The New York City law also embodies an approach to A.I. regulation that may become more common. Its main measurement is an “impact ratio,” a calculation of the software’s effect on a protected group of job candidates. It does not require companies to explain how their algorithms reach decisions, a goal known as “explainability.”
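To make the idea concrete, here is a minimal sketch of how such an impact ratio could be computed from selection data. The group names, counts, and the four-fifths benchmark used here are illustrative assumptions for the example, not figures or requirements spelled out in the city’s rules.

```python
# Minimal sketch of an "impact ratio" calculation on hypothetical data.
# The groups, counts, and 0.8 threshold below are illustrative assumptions.

# Hypothetical counts of candidates advanced by an automated screening tool.
selections = {
    "group_a": {"selected": 48, "applicants": 120},
    "group_b": {"selected": 30, "applicants": 100},
}

# Selection rate for each group: selected / applicants.
rates = {
    group: counts["selected"] / counts["applicants"]
    for group, counts in selections.items()
}

# Impact ratio: each group's selection rate divided by the highest group's rate.
best_rate = max(rates.values())
impact_ratios = {group: rate / best_rate for group, rate in rates.items()}

for group, ratio in impact_ratios.items():
    # The "four-fifths rule" (0.8) is a common, though not universal,
    # benchmark for flagging possible disparate impact.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{group}: rate {rates[group]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

The point of the calculation is that it looks only at outcomes, how often each group is selected, rather than at how the software arrived at those selections.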
Critics argue that in consequential applications like hiring, people have a right to understand how decisions about them are made. But as A.I. software, including ChatGPT-style systems, grows more complex, the goal of explainable A.I. may be slipping out of reach, shifting the emphasis to evaluating an algorithm’s outcomes rather than understanding its inner workings.