
Firms Report on Who Their ‘Humans in the Loop’ Are When Using Generative AI

By: Ruth Singleton
Published Date: Jul 17, 2024


The phrase “humans in the loop” refers to “the need for human interaction, intervention, and judgment to control or change the outcome of a process, and it is a practice that is being increasingly emphasized in machine learning, generative AI, and the like,” according to the Harvard Data Science Review. Within accounting firms, the phrase has come to mean that, at all levels, there are individuals who supervise and are held accountable for AI processes, Accounting Today reported.

In an interview with Accounting Today, Aisha Tahirkheli, managing director of trusted AI at KPMG Dallas, said, "Human in the loop for us means the AI assistants aren't left to operate entirely on their own. Humans are actively involved across the end-to-end AI lifecycle, and continually reviewing and refining and guiding the decisions to ensure accuracy, trust and fairness." 

KPMG requires all professionals to take training on the firm’s "Trusted AI" policy, and there is a Trusted AI Council to provide an additional layer of oversight as well as to issue AI guidelines and policies. Yet it is primarily the professionals who are responsible for the outputs that their AI assistants produce, Tahirkheli stressed. Humans are required to review all outputs that are applied to any sort of business function, whether internal or external. "At the end of the day, everyone in our firm serves as a human in the loop when it comes to AI," she said.

Similarly, at RSM, any AI output is attributable to a specific professional who takes responsibility for it, according to Sergio de la Fe, the firm's enterprise digital leader.

"Our written policy is that all deliverables are the product of individuals who create them; there is no deliverable that is just created on its own." he said. "What AI does for us is accelerate their ability to create but the review, the verification, of the content for every deliverable must be reviewed and approved by a human. There is human accountability and human review to everything we send out to clients." 

In addition, de la Fe stressed that some individual products and situations will require extra layers of oversight, and possible intervention, from subject matter experts. For instance, he said, RSM has certain models that can go through the Tax Code and the firm's own memos in order to create tax position papers. Such papers are reviewed by tax experts, in addition to the original users and their supervisors. Such experts might determine, "'This doesn't make sense,' because they know the tax law," de la Fe said.

Carmel Wynkoop, the partner-in-charge of AI, analytics and automation at Armanino, told Accounting Today that, at her firm, the humans-in-the-loop concept works the same way for generative AI as it does for the other kinds of automation the firm uses.

"You have [a robotic process automation] that does bank reconciliations," she said. "It pulls statements down, logs into the [enterprise resource planning] systems, reconciles those transactions, creates a journal entry and could post it. But the human in the loop there then looks at what the RPA did and verifies that it did it correctly, so you still got human in the loop oversight over the AI. And you can say the same for any AI process,"

OJ Laos, director of Armanino's AI Lab, agreed, noting that having a human in the loop is to be expected. "Everyone claims, 'Oh, we do human in the loop.' Well, you have to,” he said. “It is not in any way a special shining star you get, because [while] it depends on the tool, ultimately you have accountability and someone reviewing those items."

Software providers also envision an important role for humans when firms use their products. Jin Chang, the CEO of Fieldguide, which offers audit and advisory solutions, observed that the notion that a human is accountable for AI is not very different from the concept of firms supervising their professionals. "We say, 'Just like how you would review the work of your human practitioners, you should review the work of the AI outputs too.' That is how it feels familiar embedded into the workflow," said Chang. 

Josh Tong, Fieldguide's head of product, emphasized that the technology won't even work without human input, and therefore accountability.

Similarly, Carter Cousineau, vice president of responsible AI and data at Thomson Reuters, said that its products rely on human oversight and accountability.

"The AI model life cycle, from the moment we're creating or developing the creation of an AI model and the problem statement all the way through to deployment or decommissioning a specific model … human in the loop is in the development of any AI solution," she said. 

But David Wood, a Brigham Young University accounting professor, noted that having a human in the loop is not all that’s required to maintain accuracy. While "at some point a human needs to take responsibility and own whatever is the final output," that output may not necessarily be trustworthy, he said.

"Most people are worried about AI coming up with some crazy thing," he said. "The challenge is often, as humans, when we get a routine task, we just click and click and click and click and don't think. So it's not like humans in the loop will solve everything, especially if humans get into this mechanical [mindset of] 'Yup, this looks good.' You see [that] model performance can regress." 

Wood added that he viewed “humans in the loop” as meaning that "somewhere embedded in the process there is human review for making significant judgments." As long as there is some human involvement somewhere, this does not necessarily conflict with what he believes to be the long-term goal of AI, which is to have little to no human involvement in the process itself.

He noted that when accounting firms first began using computers, professionals didn't entirely trust the results of their calculations. For a while, humans recomputed everything to make sure it worked. "AI, I think, is moving in that same direction," said Wood. "As we move more and more, the human will be removed from more processes until they're not needed." 

Wynkoop, of Armanino, made a similar prediction. "The future is going to be less human in the loop," she said. "... We'll continue to see [AI] evolve, but in the business world we won't see as much movement because a lot of the stuff deals with finances and success of the business. I think it's a long way off to start thinking about AI not having a human in the loop from a business perspective. [But] we'll see it more on the more personal side for sure."
