Leaders considering the use of artificial intelligence (AI) in their business operations face a number of challenges that go beyond managing the implementation of new technologies, a Gallup executive wrote in the Harvard Business Review.
A leader must initiate a profound cultural shift, at the heart of which is trust, wrote Jeremie Brecheisen, managing partner of Gallup’s EMEA division. He offered three insights based on three Gallup studies conducted in 2023, each presenting a different perspective on important trends in AI adoption.
The three studies were the Roundtable Survey of large company (average size 80,000 employees) chief human resources officers (CHROs); the Gallup Quarterly Workforce Study of nearly 19,000 U.S. employees and leaders; and the Bentley-Gallup Business in Society Report.
“At the heart of this cultural shift is trust,” he wrote. “Whether the use case for AI is brief and experimental or sweeping and significant, a level of trust must exist between leaders and employees for the initiative to have any hope of success.” His three insights are meant to assist leaders in finding the right balance of control and trust around AI.
His first insight was that leaders do not fully understand their employees’ use of, and readiness for, AI. Forty-four percent of the CHRO roundtable members, whose department supports most culture transformations, did not know how often their company’s employees were using AI to do their jobs. Brecheisen called that blind spot “a major factor in the erosion of trust between leaders and employees.” He said that it causes many leaders to deploy a rules-heavy approach, rather than a purpose-led approach, to control AI usage more tightly.
“As Gallup data show, leaders are often unaware of when and why their employees use AI,” he wrote. “This knowledge gap places leaders in a precarious position: managing the unknown rather than leveraging what they know.” The result is that employees receive a mixed message: leaders may be keen to promote a culture of agility, collaboration, and innovation, but their instinct shifts toward skepticism, control, and protective measures.
“When fear-based, rules-heavy strategies take root, innovation can be inadvertently stifled,” he wrote. “What begins as reasonable safeguarding can curb the very creativity leaders seek.”
His second insight is that Americans do not trust the use of AI by businesses.
In the 2023 Bentley-Gallup Business in Society Report, only 10 percent of U.S. adults said AI does more good than harm, and 79 percent reported low or no trust that businesses will use AI responsibly. That skepticism underscores "a profound trust deficit around AI at a societal level," he wrote. The report also found that 75 percent of U.S. adults believe AI will reduce the overall number of jobs in the next decade, and that 72 percent of CHROs in the roundtable strongly agree that AI will lead to job reductions at their organizations within the next three years.
His third insight is hopeful: There is common ground to build trust. CHROs in Gallup’s roundtable overwhelmingly believed that AI technologies will "drive productivity, enhance creativity and innovation, and enable their organizations to operate with greater efficiency." Ninety-three percent anticipated that AI will reduce workloads and 61 percent foresaw that AI adoption will enable employees to spend more of their time on strategic activities. In addition, roughly four in 10 white-collar and Millennial employees believed AI could help improve how their work gets done.
These data also represented common ground on which to build trust around AI adoption, he maintained, leading him to his three recommendations on how to strike a balance between control and trust.
His first recommendation is to measure and manage AI usage across the organization. This begins with expanding the leader's knowledge of the current state of AI usage in the organization. Gathering information about which AI technologies and applications are actively deployed, and how employees are using them to do their jobs, will lead to a complete understanding of how AI is already being leveraged by teams. So will measuring elements such as AI usage frequency and AI tool effectiveness throughout the organization.
His second recommendation is to create mutual trust by empowering managers. They can best identify where applications of AI can aid efficiency and productivity, as well as what training and technical support is needed.
Forty-seven percent of workers in the Workforce Study said they felt adequately prepared to work with AI, while 53 percent said they felt unprepared and needed more training. “Therefore, it’s essential that leaders actively engage with managers to understand where their teams are doing well in terms of adequate AI training and where they need more support,” Brecheisen wrote. To achieve this goal, managers must be aware of the company’s AI training programs and other resources that help employees use AI in their roles, and must meet with employees to discuss specific needs and solutions.
Brecheisen's third recommendation is to use a purpose-led AI strategy, rather than a rules-heavy one. “Companies tend to perform better when they can establish a meaningful connection between their purpose and their employees,” he wrote. “An AI strategy that is aligned with and driven by the organization’s purpose, rather than by a more fear-based and rules-heavy approach, will have a better chance of delivering the desired outcomes of efficiency and effectiveness.”
The gap between what leaders believe about their employees’ usage of and readiness for AI and the reality “underscores the urgent need for leaders to prepare their cultures for the imminent AI revolution,” Brecheisen wrote in conclusion. “The workplace of the future is here. Leaders need to be well informed about what’s really happening in their organizations in order to cultivate trust across the company and develop AI strategies that align with high-level goals.”