Securities and Exchange Commission (SEC) Chair Gary Gensler discussed some of his concerns about the impact of technology—particularly generative artificial intelligence (AI) tools such as ChatGPT—in an interview with The New York Times.
Gensler, who has studied this topic for years, said that the recent proliferation of such tools has demonstrated that the technology is set to transform business and society. As a Massachusetts Institute of Technology (MIT) management professor in 2020, he co-wrote a paper about deep learning and financial stability. Based on past experience, it concluded that just a few AI companies will build the foundational models underpinning the tools on which many businesses have come to depend. The deepening interconnections across the economic system will make a financial crash more likely, he wrote, as the centralization of information means that everyone will rely on it and respond similarly.
“This technology will be the center of future crises, future financial crises,” he said. “It has to do with this powerful set of economics around scale and networks.”
Concerned that AI models may put companies’ interests ahead of investors’, the SEC proposed a rule that would require platforms to eliminate conflicts of interest in their technology. To prevent firms from placing their own interests ahead of investors’, the proposal would require them to identify any potential conflicts of interest emerging from their use of AI and then eliminate them. Firms would also have to maintain written policies, procedures, and records to prevent violations.
“You’re not supposed to put the adviser ahead of the investor, you’re not supposed to put the broker ahead of the investor,” he said. “And so we put out a specific proposal about addressing those conflicts that could be embedded in the models.”
In addressing the issue of who is responsible if generative AI gives faulty financial advice, he said, “Investment advisers under the law have a fiduciary duty, a duty of care, and a duty of loyalty to their clients. And whether you’re using an algorithm, you have that same duty of care.”
Although the legal liability for AI is still up for debate, Gensler told the Times that it is fair to ask companies to create mechanisms that are safe, and that anyone using a chatbot does not thereby delegate responsibility to the technology. “There are humans that build the models that set up parameters,” he said.