Generative AI tools such as ChatGPT are gaining the attention of many businesses looking to enhance their current offerings. For American Express, which is no stranger to artificial intelligence, generative AI may be a game-changer. But the company is taking a cautious approach.
Laura Grant, Vice President of Product Development for Emerging Platforms and AI at Amex Digital Labs, recently told VentureBeat that while the company is looking at ways to leverage large language models (LLMs) such as ChatGPT, it first wants to “seek to understand how it can help with its ‘3 Ps,’ making a product more personalized to an individual customer, more proactive and more predictive.”
Luke Gebb, Executive Vice President of American Express Digital Labs, added that “our hypothesis at the moment is that we would be better suited using LLMs through partnerships. I don’t see us spinning up our own LLM from scratch.”
At the Forefront of AI
Formed in 2017, Amex Digital Labs serves as a testing ground for new product prototypes. Once a prototype is developed, Labs transfers ownership to the most suitable team within the organization, which deploys it as part of its digital offering.
American Express also hopes to use the technology for predictive analytics. But its cautious stance on generative AI tools speaks to the industry’s overall view of this emerging technology.
Not long ago, Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than 31,000 other signatories from various industries and sectors published an open letter calling on AI developers to pause their “giant AI experiments.” They argued that a better understanding of where the technology is headed, and of the use cases that may unfold from it, would help organizations better manage it.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the letter states. “These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”