The Financial Stability Oversight Council (FSOC) is the latest organization to warn about the dangers of using artificial intelligence in the financial system. Although several prominent institutions have flagged the risks of AI, the FSOC is the most significant government body yet to weigh in, describing the technology as an “emerging vulnerability.”
The comments were included in FSOC’s annual report as one of 13 risks facing the banking industry in the years to come. “The reliance of AI systems on large datasets and third-party vendors introduces operational risks related to data controls, privacy, and cybersecurity,” the report reads. The FSOC is chaired by Treasury Secretary Janet Yellen and includes the heads of the Securities and Exchange Commission, the Consumer Financial Protection Bureau, and roughly a dozen other agencies.
Gary Gensler, Chairman of the SEC, said in a statement that artificial intelligence could “heighten financial fragility, as it could come to promote herding among individual actors making similar decisions as they get the same signal from the base model or data aggregator; and they may not even know it.”
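To see why one shared signal is different in kind from many independent ones, consider a toy simulation. The sketch below is purely illustrative and assumes nothing beyond NumPy; it is not drawn from the FSOC report or any SEC analysis. It compares traders who all act on one model’s output against traders acting on independent signals, and measures how lopsided the market becomes each day.

```python
# Toy illustration of the herding concern (not from the FSOC report or the SEC):
# traders acting on one shared model signal crowd onto the same side of the
# market far more than traders running independent models.
import numpy as np

rng = np.random.default_rng(42)
n_traders, n_days = 200, 250

# Case 1: every trader sees the SAME base-model signal, plus small private noise.
shared_signal = rng.normal(0, 1, n_days)
shared = np.sign(shared_signal + rng.normal(0, 0.3, (n_traders, n_days)))

# Case 2: each trader acts on an INDEPENDENT signal.
independent = np.sign(rng.normal(0, 1, (n_traders, n_days)))

def herding(decisions):
    """Average imbalance: 0 = evenly split market, 1 = everyone on one side."""
    return np.abs(decisions.mean(axis=0)).mean()

print(f"shared model:       {herding(shared):.2f}")       # roughly 0.8: crowded
print(f"independent models: {herding(independent):.2f}")  # roughly 0.06: dispersed
```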
The report also noted:
A particular concern is the possibility that AI systems with explainability challenges could produce and possibly mask biased or inaccurate results. This could affect, but not be limited to, consumer protection considerations such as fair lending…. It is the responsibility of financial institutions using AI to address the challenges related to explainability and monitor the quality and applicability of AI’s output, and regulators can help ensure that they do so.
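To make the explainability gap concrete, consider the contrast sketched below. It is a hypothetical illustration, not anything from the report: synthetic lending data with invented feature names, fit with scikit-learn. The logistic regression exposes one coefficient per input that a lender could walk a regulator through; the gradient-boosted ensemble offers no such direct account of an individual denial.

```python
# Hypothetical illustration of the explainability gap; the feature names and
# data are synthetic and invented for this sketch, not drawn from the report.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.normal(60, 15, n),   # income ($k)
    rng.uniform(0, 0.6, n),  # debt-to-income ratio
    rng.uniform(0, 25, n),   # years of credit history
])
# Synthetic approval labels driven by a simple rule plus noise.
y = ((0.03 * X[:, 0] - 4 * X[:, 1] + 0.05 * X[:, 2]
      + rng.normal(0, 0.5, n)) > 1).astype(int)

# Interpretable model: one coefficient per input, so a lender can state how
# income or debt load moved a given decision.
interpretable = LogisticRegression(max_iter=1000).fit(X, y)
for name, coef in zip(["income", "dti", "history"], interpretable.coef_[0]):
    print(f"{name}: {coef:+.3f}")

# Opaque model: hundreds of trees with no single set of weights to point to.
# It may fit better, but explaining any one denial takes extra tooling.
black_box = GradientBoostingClassifier().fit(X, y)
print("black-box training accuracy:", black_box.score(X, y))
```

Post-hoc tools such as SHAP can retrofit explanations onto opaque models, but the report’s framing puts the burden of producing and monitoring those explanations on the institution itself.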
Warnings Around AI
There have been similar warnings from prominent members of the AI field itself. In October, Stanford University researchers issued a report in which AI engineers said their employers were failing to put sufficient ethical safeguards in place. “It is clear over the last three years that transparency is on the decline while capability is going through the roof,” said Stanford professor Percy Liang.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,” the Center for AI Safety, a nonprofit organization, said in a statement last May. More than 350 people working in AI signed it, including Demis Hassabis, Chief Executive of Google DeepMind; Dario Amodei, Chief Executive of Anthropic; and Sam Altman, the on-again, off-again Chief Executive of OpenAI.
The FSOC report points out that AI has been around in simpler forms for a long time, such as regression analysis. Its next applications for financial institutions range from customer interactions to invoicing. The FSOC also acknowledged the benefits of AI. According to Yellen, “Supporting responsible innovation in this area can allow the financial system to reap benefits like increased efficiency, but there are also existing principles and rules for risk management that should be applied.”
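The report’s nod to regression analysis is worth making concrete. The sketch below is a hypothetical example with synthetic numbers, assuming only NumPy: an ordinary least-squares fit, the kind of technique the report counts among AI’s simpler ancestors.

```python
# Minimal sketch of the "simpler forms" of AI the report alludes to: an
# ordinary least-squares fit. The loan-rate and default-rate data are synthetic.
import numpy as np

rng = np.random.default_rng(1)
rates = rng.uniform(3, 8, 100)                    # hypothetical loan rates (%)
defaults = 0.5 * rates + rng.normal(0, 0.4, 100)  # synthetic default rates (%)

# Fit defaults = slope * rate + intercept; lenders have run models like this
# for decades, long before anyone called them AI.
slope, intercept = np.polyfit(rates, defaults, 1)
print(f"estimated default rate = {slope:.2f} * loan rate {intercept:+.2f}")
```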
For financial institutions, the FSOC report could be taken as a warning that if you’re working with AI, you’d better be able to explain what you are doing to your regulators. “Many AI systems are indeed black boxes at present, but realistically no more than human beings are,” said Christopher Miller, Lead Analyst of Emerging Payments at Javelin Strategy & Research. “Yes, AI systems may operate in biased and inaccurate ways, and at scale these weaknesses can have significant impact. Yet the same is true of humans making decisions. The true risk from AI comes from the potential that it becomes centralized and standardized, that ‘one system’ with certain flaws operates in a biased or inaccurate way, in a way that human weaknesses are less likely to do.”