Daily, it seems, we’re confronted with new reasons to distrust the development of generative AI models. Whether it’s features that deliver faulty recommendations or chatbots that tell lies or attempt seduction, those inclined to disdain AI, a growing and vocal contingent, have plenty of fodder for their arguments.
The latest bit of news to burn through my social circles: Meta will start leveraging users’ content (save for direct messages) across its platforms to train AI models. On one hand, this should be unsurprising: Users’ content is fair game according to the terms and conditions they accepted upon joining these platforms (even if they never read the fine print). On the other hand, there have been so many instances of ill-gotten content being used to feed AI (recall the use of pirated e-books, a discovery that enraged authors and their professional groups) that businesses should tread lightly and transparently even when they’re in the clear. That Meta has made opting out a byzantine process with no guaranteed outcome does not inspire confidence.
A Bifurcated View
I sit at an interesting intersection with regard to the development of generative AI, one that affords me a view of its many possibilities in the world of payments and financial services, and of its potential horrors should it be rampantly misapplied elsewhere.
One needn’t have an overactive imagination to consider that AI’s ability to detect patterns in data, process information, and surface insights can be transformative in financial services, touching every aspect of operations: back-office and middle-office functions, fraud prevention and cybersecurity, customer journeys from onboarding through the lifecycle of accounts, and payment experiences. Think of a future in which digital wallets aren’t just another repository of payment credentials but extensions of the self, unerringly choosing the best, most advantageous payment method and completing the transaction without friction. Who, aside from the most stubbornly analog among us, wouldn’t want that?
However, one does need an expansive imagination to write novels (I’ve written 10) or create other forms of art, and those of us in the creative fields have been watching with growing alarm as AI development poaches our work and threatens what we do with a coming tsunami of content utterly devoid of heart and soul.
My author friends are almost categorically anti-AI, with “get rid of it” a common and futile refrain. They recoil from newcomers who see in AI a way to turbocharge their output. One declaration I saw, positing that “AI can help me write 50 novels this year,” prompted incredulity: One, that’s not exactly creative writing as I understand it. Two, if the juice for the creator lies in exercising memory and imagination, who would want to write 50 novels in a year? Three, who would want to read 50 novels with all the humanity of a mass-produced widget? The mind boggles. After all, what is the purpose of art but to forge human connection through creations that emanate from unique minds?
That said…
Financial services, writ large, are not art. Payment methods are not art. They are form and function, a means to an end. When we view AI as a tool by which better experiences can be created, underpinned by better data and more robust insight, we alight on worthy purposes for it.
Ditching AI is simply a non-starter in the business world, and certainly in the arenas of financial services and payments. For reasons competitive and evolutionary, companies must be actively developing applications that leverage AI for the good of the enterprise and its customers. “Good,” of course, is open to debate, as most anything is these days, and the word certainly does a lot of work in the foregoing sentence. But “good” is achievable when AI is positioned as a tool, not as a shortcut or an inadequate replacement for human judgment.
Maintaining that ideal is, or should be, the province of human beings who presumably have the perspective, wisdom, and restraint to keep AI models in the background until they’re ready for public-facing applications. When a bot spews inaccuracies or declares an emotion it’s incapable of having, it’s not just a technological failure. It’s a direct hit on public confidence in the technology.
And that’s not good for anybody.