Artificial Intelligence in Payments and Banking - PaymentsJournal
https://www.paymentsjournal.com/category/artificial-intelligence/
Focused Content, Expert Insights and Timely News

Microsoft’s AI Assistant Can Be Exploited by Cybercriminals
https://www.paymentsjournal.com/microsofts-ai-assistant-can-be-exploited-by-cybercriminals/ (Fri, 09 Aug 2024)

Microsoft’s Copilot has been touted as a productivity enabler, but the ubiquitous artificial intelligence app’s widespread use also exposes vulnerabilities that criminals can exploit.

At the Black Hat security conference, researcher Michael Bargury demonstrated five ways that Copilot, which has become an integral part of Microsoft 365 apps like Word and Outlook, can be manipulated by bad actors.

For instance, after a hacker gains access to a work email, they can use Copilot to mimic the user’s writing style, including emojis, and send convincing email blasts containing malicious links or malware.

“AI’s ability to assist criminals in writing code to scrape information from social media, paired with its ability to match the speech patterns, tone, and style of an impersonated party’s written communication—whether professional or personal—is an insidious combination,” said Kevin Libby, Fraud & Security Analyst at Javelin Strategy & Research. “When used conjointly, these abilities considerably increase the probability of success for a phishing or smishing operation. AI can even help to scale phishing attacks through automation.”

Poisoning Databases

Bargury demonstrated how a hacker with access to an email account can exploit Copilot to access sensitive information, like salary data, without triggering Microsoft’s security protections.

In other scenarios, he showed how an attacker can poison Copilot’s database by sending a malicious email and then steer Copilot into providing banking details. The AI assistant could also be maneuvered into furnishing critical company data, such as upcoming earnings call forecasts.

During the demonstration, Bargury largely used Copilot for its intended purpose, but he also introduced misinformation and gave Copilot misleading instructions to illustrate how easily the AI could be manipulated.

A Glaring Weakness

The demonstration highlighted a glaring weakness in AI: the mingling of secure corporate data with unverified external information. Copilot’s flaws raise concerns about AI’s rapid adoption across nearly every industry, especially in large organizations where employees frequently interact with the technology.

AI can also be one of the strongest tools in fraud detection, helping companies discover breaches much faster. Even so, the technology is clearly still developing, which opens up opportunities for criminals.

“While AI tools promise innumerable benefits, they also pose significant risks,” Libby said. “Criminals can use AI tools to help them with everything from malicious coding of malware, to scraping social media accounts for PII and other information about potential targets to fortify social engineering attacks, to creating deepfakes of CEOs to scam organizations out of tens of millions of dollars per video or audio call.”

According to Wired, after the demonstration, Bargury praised Microsoft and said the tech giant worked hard to make Copilot secure, but he was able to discover the weaknesses by studying the system’s infrastructure. Microsoft’s leadership responded that they appreciated Bargury’s findings and would work with him to analyze them further.

AI-Powered Financial Advisors Impact Wealth Management Industry
https://www.paymentsjournal.com/ai-powered-financial-advisors-impact-wealth-management-industry/ (Fri, 02 Aug 2024)

Like many other industries, the wealth management sector has integrated artificial intelligence where applicable, including in chatbots and financial modeling. However, fully automated AI-powered financial advisors, known as robo-advisors, are beginning to make such an impact that there is speculation they could eventually displace traditional wealth managers.

One example is the platform PortfolioPilot, which manages $20 billion in assets through its automated portfolio and has gained 22,000 users in its two years of operation. In an interview with CNBC, Alexander Harmsen, Co-Founder of PortfolioPilot’s parent company Global Predictions, said the AI platform offers more personalized service than many human wealth managers.

“AI clearly has a critical role in the wealth management industry,” said Greg O’Gara, Lead Wealth Management Analyst at Javelin Strategy & Research. “However, the deployment and adoption of AI tools and business models will reach an equilibrium, falling into two main segments: self-directed investors and hybrid AI-advisory relationships. PortfolioPilot falls into the former (with the caveat that many self-directed investors also use a financial advisor).”

A Holistic Approach

Hybrid AI advisory combines AI tools, like generative AI, with human expertise. It empowers investors with advanced tools and provides advisors with resources like predictive AI for scenario analysis, reporting, financial planning, and client workflow management.

“While PortfolioPilot is demonstrating solid growth, it will face increasing competition from advisory models that create a human backstop (i.e., the advisor) for autonomous technologies,” O’Gara said. “Moreover, investment portfolios are only a piece of a larger financial strata which demands long-term financial planning. The interconnection of these advisory pieces, including estate planning, is complex.”

The increasing number of accounts, investment types, and revenue streams can quickly complicate a portfolio. This complexity is one of the main reasons high-net-worth individuals turn to wealth managers.

Additionally, wealth management services encompass more than portfolio management. Many wealth managers now take a holistic approach to their clients’ finances, considering the entire family’s financial situation.

A Booming Industry

The wealth management industry is booming and remains dominated by big names like Morgan Stanley and Bank of America. Morgan Stanley alone has $4.4 trillion in assets under management in its traditional wealth management services, dwarfing the $1.2 trillion managed through the company’s self-directed advisory tool, which operates like PortfolioPilot. PortfolioPilot targets users with $100,000 to $5 million in investable assets, with the median PortfolioPilot user having a net worth of $450,000.

Unlike many traditional wealth management firms, automated financial advisors don’t take custody of their customers’ funds. Instead, these platforms provide users with advice on optimizing their portfolios. However, this model could change soon. PortfolioPilot’s Harmsen indicated that within the next few years, the platform might be enhanced to take custody of funds and execute trades for its customers.

“We will give you very specific financial advice, we will tell you to buy this stock, or ‘Here’s a mutual fund that you’re paying too much in fees for, replace it with this,’” Harmsen told CNBC. “Or it could be much more complicated advice, like, ‘You’re overexposed to changing inflation conditions, maybe you should consider adding some commodities exposure.’”

Incumbent AI Challengers

There are still some regulatory hurdles that automated financial advisor platforms will need to overcome. PortfolioPilot’s parent company, Global Predictions, recently drew a $175,000 fine from the U.S. Securities and Exchange Commission for billing the platform as the first regulated AI financial advisor.

The company has since retracted that claim, but the episode hasn’t stopped investors from pouring in—PortfolioPilot raised $2 million in funding in the past month alone. As automated financial advisors continue to gain users, some believe the wealth management sector is due for a shake-up.

“Ultimately, AI as a self-directed investment tool will challenge the advisory model, but the challenge may only serve to create greater client engagement,” O’Gara said. “And it will force advisors to demonstrate their value. Advisors who fail to adopt will be hard-pressed to stay in business as incumbent AI challengers rise.”

AI May Be the Strongest Tool Against Data Breaches
https://www.paymentsjournal.com/ai-may-be-the-strongest-tool-against-data-breaches/ (Tue, 30 Jul 2024)

Artificial intelligence can sometimes seem like a solution in search of a problem, but one area where it has already made an impact is fraud prevention. In fact, two-thirds of organizations surveyed by IBM reported using AI to detect and combat fraud within their security operations centers, and it’s paying off.

By using strategies such as attack surface management, red-teaming, and posture management, these organizations were able to contain data breaches more quickly and at a much lower cost than those not employing AI. According to IBM’s Cost of a Data Breach Report, companies using AI incurred $2.2 million less in breach costs than those that didn’t use AI to prevent such attacks.

Overall, the average cost of a data breach in 2024 jumped to $4.88 million from $4.45 million the previous year, the largest annual increase since the pandemic. The gap between organizations using AI and those without it is stark: organizations that used AI and automation extensively to prevent security breaches incurred an average cost of $3.76 million per cyberattack, while those not using these tools lost an average of $5.98 million per breach.
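
Those figures can be checked directly. A back-of-the-envelope sketch (the numbers are the report’s; the arithmetic is ours):

```python
# Quick check of the IBM figures cited above (values in millions of USD).
avg_cost_with_ai = 3.76      # extensive security AI and automation
avg_cost_without_ai = 5.98   # little or no security AI and automation

print(avg_cost_without_ai - avg_cost_with_ai)  # 2.22 -> the "$2.2 million less"
print(4.88 - 4.45)                             # 0.43 -> 2024 jump in the overall average
```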

A Tool for Criminals

One reason AI has proven so critical is that attackers are also using the technology. 

“The use of generative AI by cybercriminals is making it easier for them to socially engineer or trick employees into providing sensitive information,” said Jennifer Pitt, Senior Analyst of Fraud & Security at Javelin Strategy & Research. “There have already been several cases where cybercriminals successfully used voice cloning and/or deepfake images and video to convince even the most security-conscious employees to provide sensitive information to people they thought were executives authorized to obtain the information.”

AI has also helped speed up the detection of data breaches, a key factor in limiting the damage. Organizations extensively using security AI and automation identified and contained data breaches nearly 100 days faster on average compared to those without these technologies.

“It is crucial that organizations train employees on how AI is used for social engineering and phishing attacks and encourage employees to challenge anyone who asks for sensitive information,” said Pitt. “Organizations must also implement generative AI solutions that can detect deepfakes and AI-generated content, then learn and adapt quickly to changing attacker strategies. With the growing number of data breaches and AI-related cyberattacks, companies can no longer afford to rely on legacy detection solutions.”

The Role of AI in Fraud Detection: Enhancing Security in the Payments Industry
https://www.paymentsjournal.com/the-role-of-ai-in-fraud-detection-enhancing-security-in-the-payments-industry/ (Fri, 05 Jul 2024)

Artificial intelligence is one of the buzziest technological innovations out there, primarily because of its wide range of potential use cases. Manufacturers, educators, healthcare professionals, and various other industry sectors are actively exploring how AI can streamline workflows and reduce labor-intensive tasks, making their employees’ jobs easier.

A particularly valuable use case for AI is online payment fraud detection. Juniper Research predicts that total losses to payment fraud will exceed $343 billion over the next five years—a massive hemorrhage of capital that could be partly stemmed by advanced fraud detection tools. Major players in financial services are already using AI to forestall fraudulent payments; if you’re considering adopting the technology, now is the time.

Infrastructure Requirements

Before purchasing a fraud detection tool that leverages AI, it’s crucial to audit your environment to ensure the right systems are in place. AI, especially in its early stages, can require massive amounts of processing power to analyze data. Network security is also paramount, to prevent cybercriminals from feeding fraudulent data into the model. Networks lacking the capacity for high-bandwidth data transfers, tight security controls, or consistent uptime might benefit from switching to a dark fiber network.

A clean, consolidated pool of data is also essential for AI to function effectively. AI trained on incomplete or poor-quality data will fail to identify the outliers that can indicate fraudulent transactions. There is also a risk of alienating customers with AI tools, so it’s important to have a comprehensive communication plan in place before fully adopting the technology.
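
To make the data-quality point concrete, here is a minimal sketch using scikit-learn’s IsolationForest as a stand-in for a commercial fraud model (the transaction features and numbers are invented): the detector flags out-of-pattern activity only when its training pool is clean.

```python
# A minimal anomaly-detection sketch; real fraud models are proprietary
# and far more sophisticated.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Hypothetical transaction features: [amount_usd, seconds_since_last_txn]
normal_txns = np.column_stack([
    rng.normal(60, 20, 1000),     # typical purchase amounts
    rng.normal(3600, 600, 1000),  # roughly hourly activity
])

model = IsolationForest(contamination=0.01, random_state=42).fit(normal_txns)

# Large, rapid-fire transactions stand out against a clean training pool...
suspicious = np.array([[900.0, 5.0], [950.0, 4.0]])
print(model.predict(suspicious))  # -1 marks an outlier

# ...but training on polluted data teaches the model that fraud is "normal".
polluted = np.vstack([normal_txns, np.tile(suspicious, (100, 1))])
model_bad = IsolationForest(contamination=0.01, random_state=42).fit(polluted)
print(model_bad.predict(suspicious))  # likely 1: the pattern no longer stands out
```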

AI Best Practices

Making sure employees know how to use AI tools within regulatory and cybersecurity standards is important. In that spirit, here are a few guidelines to ensure proper AI usage.

  • Review and fact-check content: AI is effective but not perfect—it can produce incorrect results as it learns. Regularly checking its output helps avoid false accusations that could harm your brand, and ensuring that employees verify AI-generated content prevents misunderstandings and maintains customers’ trust.
  • Keep your databases clean: After the initial cleaning of your database, it’s crucial to keep your data in order. AI continually learns from the same data set, and corruption over time can make its results increasingly unreliable. Employees should follow best practices for data recording and storage so that consistently clean, organized data allows the AI to function optimally.
  • Enlist your employees in mandatory refresher training: Even if employees took training courses when the tool debuted, ongoing training keeps everyone updated on best practices and regulatory changes, identifies knowledge gaps, and empowers your team to handle fraudulent transactions effectively. Regular sessions reinforce the importance of staying current with emerging AI developments and cybersecurity protocols and help ensure that all team members are proficient with the tools.

Teaching your employees how their AI tools work, and the best practices for using them, will empower your team to identify, prevent, and handle fraudulent transactions more accurately than ever.

Interested in more about how cybercriminals are using AI to circumvent security and identity protocols? Javelin delved into this very topic in a recent report, Unmasking the Threat of AI: Deepfakes and Financial Security.

Central Banks Are Largely Unprepared for AI’s Impact, Says BIS
https://www.paymentsjournal.com/central-banks-are-largely-unprepared-for-ais-impact-says-bis/ (Wed, 26 Jun 2024)

Central banks have a responsibility to safeguard the financial stability of their economies, but they should also be at the forefront of emerging technologies. To handle the challenges of artificial intelligence, for example, central banks must anticipate its macroeconomic implications and integrate it into their own operations.

According to a new report from the Bank for International Settlements (BIS), most central banks are behind the curve in both respects. The slow adoption of AI could hinder these institutions’ ability to quickly adapt to economic shifts driven by AI itself.

“There is an urgent need for central banks to raise their game,” BIS wrote. “To address the new challenges, central banks need to upgrade their capabilities both as informed observers of the effects of technological advancements as well as users of the technology itself.”

Improving Infrastructure

Embracing AI will require many central banks to invest in costly infrastructure and hire specialized staff, or to outsource artificial intelligence services to a third party. While an external model may be cost-effective, it could also make central banks too dependent on a handful of third-party providers.

The European Central Bank recently voiced concerns about the concentration of AI services in Europe’s financial systems. The ECB warned that this reliance could lead to a herd mentality among financial institutions and even cause systemic distortions in the economy.

The BIS report echoed the ECB’s concerns and reiterated AI’s potential for bias. These flaws only underscore central banks’ need for proper infrastructure and staffing. An optimal infrastructure also protects financial institutions against emerging fraud schemes, which often leverage AI themselves.

A Community of Practice

While some infrastructure improvements might be unavoidable, BIS concluded that central banks might be better off cooperating with each other, pooling their resources, and identifying synergies.

This includes creating common data standards for easier information sharing between banks, as well as repositories to house the open-source code of data tools. BIS, which acts as an umbrella organization for central banks, even suggested that banks share AI models that have been successful in financial applications.

“To harness the benefits of AI, collaboration and the sharing of experiences emerge as key avenues for central banks to mitigate these trade-offs, in particular by reducing the demands on information technology infrastructure and human capital,” BIS noted. “Central banks need to come together to form a ‘community of practice’ to share knowledge, data, best practices, and AI tools.”

Deepfake AI Threat Can Go Far Beyond Financial Losses
https://www.paymentsjournal.com/deepfake-ai-threat-can-go-far-beyond-financial-losses/ (Fri, 14 Jun 2024)

Most financial institutions haven’t invested in identity verification programs that root out deepfake AI fraud. Though fraudsters could use the tech to steal or extort substantial sums, they could also use deepfakes to tarnish an institution’s hard-won reputation.

Kevin Libby, Fraud and Security Analyst at Javelin Strategy & Research, studied deepfake AI fraud trends in his report, Unmasking the Threat of AI: Deepfakes and Financial Security. He examined how fraudsters exploit AI and recommended ways businesses can protect themselves from the emerging threat.

A Digital Mask

Artificial intelligence has improved so rapidly that discerning a computerized voice from the real thing is no longer easy. The technology has accelerated the advent of deepfakes: AI-generated forgeries of some aspect of a person’s persona.

In voice cloning, AI programs analyze conversations and develop novel scripts that replicate vocal intonations and inflections, and sometimes even word choice. Fraudsters have used deepfake audio in phishing attacks, impersonating company executives with cloned voices.

Another type of deepfake relies on facial mapping, or face cloning. Criminals use AI to extract samples from images and videos of the target, often scraped from social media accounts like Facebook or Instagram. AI programs can synthesize that data into a digital mask that can be mapped onto someone else’s face.

“The technology is still developing, so it’s not a wide-scale problem yet,” Libby said. “The programs that can produce convincing deepfakes aren’t highly accessible and they require substantial computing power. However, as AI gets more efficient, the demands on computational systems are going to decrease and deepfakes will be cheaper, faster, and widely available.”

A Flood of Fraud

A recent survey found that 68% of financial institutions are vulnerable to deepfake fraud. More unsettling, 53% of banks and credit unions not only lack a solution but have no plans to implement one. As deepfakes proliferate, unprotected institutions could be left in a difficult position.

“If they don’t have systems in place before we cross that threshold, there’s going to be a flood of fraud,” Libby said. “It’s going to be the kind of fraud that drains bank accounts and causes serious reputational problems for banks and credit unions. Financial institutions can’t wait until we’ve reached the threshold to invest in technologies to protect themselves.”

Even though deepfake quality is still developing, criminals aren’t waiting for the tech to be perfected. They are already using it to conduct scams, and not just against individuals: fraudsters have scammed businesses out of as much as $25 million to $35 million in a single incident.

Another disquieting aspect of deepfake fraud is the number of ways fraudsters can employ it. Criminals have used the tech for phishing, extortion, and manipulation over phone, video, and email channels. Once an institution transfers funds to a fraudulent account, the money is immediately moved on and becomes nearly impossible to trace.

Reputation Control

Though the financial side of deepfake fraud is rightfully concerning, the more pressing threat for banks and credit unions may be to their reputations. An estimated 67% of financial institutions that purchase identity verification tools for fraud prevention are most concerned about protecting their brand.

Fraudsters could use facial mapping to impersonate an executive and create videos that are deceptive, inappropriate, or offensive. Criminals could use deepfakes to give misleading investment advice or report fraudulent financial information about the company to affect stock prices.

Though the fraudsters could enrich themselves, the greater risk for a financial institution is acute damage to its reputation. After such an incident, customers may find it hard to trust the company, or the impersonated individual, for some time.

“To control their reputations, risk departments should do their own research and consume threat intelligence from a number of sources,” Libby said. “They should constantly monitor posts pertaining to their organization, including videos about their CEOs and their employees. As soon as something drops, they can vet it and respond. The longer it stays out there, the more damage it can do.”

Investing in Protection

The digital banking environment means fraud identification and verification must occur solely through electronic channels. Even though budgets are often tight, financial institutions must invest in technology solutions that identify and guard against deepfake fraud.

Internal protocols should incorporate a multi-layered approval process for significant transactions. For example, approving a funds transfer or the release of sensitive data should require more than a phone call from an executive; more secure protocols might include approval codes or device proofing.
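
What such a layered check might look like in code is sketched below; the threshold, device registry, and helper are all hypothetical, and a real implementation would live inside a bank’s workflow and authentication stack.

```python
import hmac

HIGH_VALUE_THRESHOLD = 50_000  # USD; an assumed policy value
REGISTERED_DEVICES = {"laptop-cfo-01", "desk-treasury-02"}  # stand-in registry

def device_is_registered(device_id: str) -> bool:
    """Layer 1: device proofing against a registry (hypothetical helper)."""
    return device_id in REGISTERED_DEVICES

def approve_transfer(amount: float, device_id: str,
                     supplied_code: str, expected_code: str) -> bool:
    """Require more than a phone call for significant transactions."""
    if amount < HIGH_VALUE_THRESHOLD:
        return True  # low-value transfers follow the normal path
    if not device_is_registered(device_id):
        return False  # unknown device: refuse regardless of who is calling
    # Layer 2: a one-time approval code delivered out of band must match.
    # compare_digest avoids leaking information through comparison timing.
    return hmac.compare_digest(supplied_code, expected_code)

# An urgent "CEO" request from an unregistered phone is refused outright.
print(approve_transfer(250_000, "unknown-phone-99", "123456", "123456"))  # False
print(approve_transfer(250_000, "laptop-cfo-01", "123456", "123456"))     # True
```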

Education is just as important. Employees who know fraudsters’ tactics will be vigilant for signs of fraud in email and phone conversations. Cybersecurity departments should conduct interactive annual risk training that specifically covers deepfake scams, so employees understand how difficult these are to identify.

“It might require a sizeable investment in technology and training,” Libby said. “However, the risk of financial losses and reputational damage from deepfake scams means the benefits far outweigh the investment.”

Where Will AI Take Data Analytics? The Sky Is the Limit
https://www.paymentsjournal.com/where-will-ai-take-data-analytics-the-sky-is-the-limit/ (Mon, 10 Jun 2024)

Organizations have long faced the challenge of deriving insights from their data. Some enterprises have the ability and resources to do this, but others are far behind. Artificial intelligence (AI) can catapult data analysis into the future, making enterprise analytics part of the daily health and success of a company.

Billtrust has been at the forefront of using AI to build out analytics processes, especially within the payments landscape. In a recent PaymentsJournal podcast, Ahsan Shah, Billtrust’s Senior Vice President of Data Analytics, talked about the AI-fueled future of data analytics with Christopher Miller, Lead Analyst of Emerging Payments at Javelin Strategy & Research.

The Democratization of AI

Organizations can no longer say they are not looking at AI. For most, success will come from the democratization of generative AI rather than from a top-down mandate.

“Some companies are more advanced than others, just by allowing people to try it in the form of their goals and their own self-training,” Shah said. “Some of our teams here at Billtrust are doing hackathons where they just learn how to do this amazing thing. I think it’s going to flourish organically, and I think that’s the right way.”

AI is poised to go from a universe of foundational models to a broad ecosystem of tools, infrastructure, and services. The technology is advancing much faster than it is being adopted. OpenAI is already at the forefront of multi-modality.

“There has been an explosion in the number of different systems that are monitoring various parts of how a business operates, ranging from frontline customer success to the nitty-gritty details of actual payment processing or chargeback processes, all the way up to when is revenue recognized and how is cash managed,” Miller said. “One of the challenges for teams has been to figure out how to put together those different pieces.”

An Explosion of Data

Most companies ask someone to piece together various bits of information or cut and paste data into a spreadsheet. Maybe they have a dashboard that brings different pieces together, but even maintaining that dashboard and adding new data as it emerges can be a challenge. The explosion of data creates opportunities for insight but also challenges of sheer scale, especially for organizations with limited teams and resources.

This idea of cross-functional analysis is a challenge not just because of the volume of the data but also because of its structure. “You have three different kinds of vectors happening here,” Shah said. “You have the insane amount of data, the urgency of trying to act on it, and the explosion of the different functions. Enterprises need a better way of synthesizing the data across the functions and to be able to get it to the right person who can act on it, which is often overlooked.”

Emerging generative AI technology may offer one way to solve some of these problems, such as a new way to create reports other than simply handing a definition to an engineering team that produces the report. Rather than being pushed from the systems, data can be pulled from the systems by precisely the people who are in a position to act on those insights.

The new term is generative BI, for generative business intelligence. You can simply ask a specific question in plain language, such as “What anomalies are you seeing in my payment patterns for buyers on the West Coast?”—something that traditionally would have taken weeks of engineering analytics.
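
A minimal sketch of that pattern follows; call_llm is a hypothetical stand-in for whatever hosted model a generative BI product would use, and the schema and canned SQL are purely illustrative—this is not Billtrust’s implementation.

```python
import sqlite3

SCHEMA = "payments(buyer_id TEXT, region TEXT, amount REAL, paid_at TEXT)"

def call_llm(prompt: str) -> str:
    # Stand-in for a hosted model call; returns the SQL an LLM might
    # plausibly produce for the sample question.
    return ("SELECT buyer_id, AVG(amount), COUNT(*) FROM payments "
            "WHERE region = 'West Coast' GROUP BY buyer_id "
            "HAVING COUNT(*) > 100")

def generative_bi(question: str, conn: sqlite3.Connection):
    prompt = f"Schema: {SCHEMA}\nWrite one SQLite query answering: {question}"
    sql = call_llm(prompt)
    return conn.execute(sql).fetchall()  # production code would validate the SQL

conn = sqlite3.connect(":memory:")
conn.execute(f"CREATE TABLE {SCHEMA}")
print(generative_bi("What anomalies are you seeing in my payment patterns "
                    "for buyers on the West Coast?", conn))
```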

“It’s an exploding space,” Shah said. “Six months ago, there might have been one or two names that had LLM products in market that we could use. Everyone had written a poem in ChatGPT and experienced firsthand the power of the language model. But most people had also run headlong into the challenges of the data-gathering side of that model, which offers an interaction layer and doesn’t necessarily offer the insight. That’s the next step.”

Moving Beyond ChatGPT

Users of ChatGPT are limited to the context window. You can type in your question, but the tool doesn’t know about you, your enterprise data, your CRM, or your transactions. Integrating the data layer and the analytics layer into the LLM directly requires engineering and domain fine-tuning of the models.
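
One common engineering answer is retrieval: fetch only the enterprise records relevant to the question and place them in the prompt. A minimal sketch, with a keyword scorer standing in for a real embedding search and the records invented (the final model call is elided):

```python
RECORDS = [
    "Invoice 1042: buyer Acme West, $12,400, paid 38 days late",
    "Invoice 1043: buyer Acme West, $9,100, paid on time",
    "Invoice 1044: buyer Beta East, $2,200, paid on time",
]

def retrieve(question: str, k: int = 2) -> list[str]:
    # Score records by word overlap with the question; a production system
    # would use embeddings and a vector index instead.
    words = set(question.lower().split())
    ranked = sorted(RECORDS, key=lambda r: -len(words & set(r.lower().split())))
    return ranked[:k]

def build_prompt(question: str) -> str:
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}"

# The assembled prompt is what would be sent to the model.
print(build_prompt("Which Acme West invoices were paid late?"))
```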

There’s only so far you can go with a foundational model. How do you expose your data, and engineer it to scale, in a way that takes full advantage of generative AI? That is something Billtrust is actively working on.

“We are in the process of launching our Copilot product, essentially embedding a ChatGPT-like enterprise secure interface into it,” Shah said. “Rather than going back to the old way of hiring a data analyst and saying build me a report, you’re now going to Copilot and asking a specific question. We should not think of this as a profoundly transformative thing but rather a way of making what you do better.”

Some companies are already blazing through the capabilities. It’s not just OpenAI, but also Meta, AWS, and Anthropic’s Claude. You’re going to be hearing the term “agentic workflows.”

“While this seems super forward-looking, I don’t think it’s that far ahead at all,” Shah said. “You’re going to see a universe where people are going to log into SaaS products or B2C products and simply ask, ‘Book a trip for me and my family,’ and it’s just going to do a multi-step flow to book your hotel. You could translate that to B2B now. Instead of booking a travel reservation, you might say run a campaign or target these customers.”
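
In code, an agentic workflow reduces to a planner producing a sequence of tool calls that are executed step by step. A minimal sketch—the planner here is a hard-coded stand-in for an LLM, and the tool names are illustrative, not any vendor’s actual framework:

```python
from typing import Callable

# Toy "tools" the agent can invoke; each returns a result string.
TOOLS: dict[str, Callable[[str], str]] = {
    "segment_customers": lambda arg: f"segmented customers: {arg}",
    "draft_campaign":    lambda arg: f"drafted campaign for {arg}",
    "schedule_send":     lambda arg: f"scheduled send at {arg}",
}

def plan(goal: str) -> list[tuple[str, str]]:
    # Stand-in for an LLM planner that would decompose the goal into steps.
    return [("segment_customers", "late payers"),
            ("draft_campaign", "late payers"),
            ("schedule_send", "09:00 Monday")]

def run_agent(goal: str) -> None:
    print(f"goal: {goal}")
    for tool_name, arg in plan(goal):  # multi-step flow, one tool per step
        print(f"{tool_name} -> {TOOLS[tool_name](arg)}")

run_agent("Run a campaign targeting late-paying customers")
```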

The Need for Governance

When systems act based on limited cues from human beings, the interoperability of those systems becomes critical. This suggests the need for standards and essentially another layer of API development.

“It’s important to have governance to avoid the problematic and even catastrophic implications of AI,” Shah said. “But it cannot be done in a way which impedes the ability of companies to innovate and build great products.”

One other concern is cost, which is high and still rising. The unit cost is slowly starting to bend, but the absolute cost keeps growing as models add tokens exponentially, creating additional computing demands to support them.

But the possibilities far outstrip the challenges. “You’re only limited by your imagination,” Shah said. “The best implementations on the agent level will create the biggest universe for that imagination to run wild. It’s almost like giving an artist the capability to focus on what they’re best at and removing the friction or the redundancy of other tasks. The technical capability will be there far before the implementations are there to support that kind of imagination.

“There’s going to be an entire knowledge of how to use different models effectively for different businesses. I see this explosion of options. It just might be a little bit of a zoo for a while till the dust settles.”

What To Do With AI? A Question Without a One-Size-Fits-All Answer
https://www.paymentsjournal.com/what-to-do-with-ai-a-question-without-a-one-size-fits-all-answer/ (Tue, 04 Jun 2024)

Daily, it seems, we’re confronted by new reasons to distrust the development of generative AI models. Whether it’s features that deliver faulty recommendations or chatbots that tell lies or attempt seduction, those who are inclined to disdain AI—a growing and vocal proportion—have plenty of fodder for their points of view.

The latest bit of news to burn through my social circles: Meta will start leveraging users’ content (save for direct messages) across its platforms to train AI models. On one hand, this should be unsurprising: Users’ content is fair game according to the terms and conditions they accepted upon joining these platforms (even if they didn’t read the actual fine print). On the other hand, there have been so many instances of ill-gotten content being used to feed AI—recall the use of pirated e-books, a discovery that enraged authors and their professional groups—that businesses should tread lightly and transparently even when they’re in the clear. That Meta has made opting out a byzantine process without a certain outcome does not inspire confidence.

A Bifurcated View

I sit at an interesting intersection with regard to the development of generative AI, one that affords me a view of its many possibilities in the world of payments and financial services and of its many potential horrors should it be rampantly misapplied elsewhere.

One needn’t have an overactive imagination to consider that AI’s ability to detect patterns in data, process information, and surface insights can be transformative in the realm of financial services, touching every aspect of operations: back-office and middle-office functions, fraud prevention and cybersecurity, customer journeys from onboarding through the lifecycle of accounts, and payment experiences. Think of a future when digital wallets aren’t just another repository of payment credentials but rather extensions of the self, inerrantly choosing the best, most effective, most advantageous payment method and completing the transaction with no friction. Who, aside from the most stubbornly analog among us, wouldn’t want that?

However, one does need an expansive imagination to write novels (I’ve written 10) or create other forms of art, and those of us in the creative fields have been watching with growing alarm as AI development poaches our work and threatens what we do with a coming tsunami of content utterly devoid of heart and soul.

My author friends are almost categorically anti-AI, with “get rid of it” a common and futile refrain. They recoil from newcomers who see in AI a way to turbocharge their output. One declaration I saw, positing that “AI can help me write 50 novels this year,” prompted incredulity: One, that’s not exactly creative writing as I understand it to be. Two, if we assume that the juice for the creator is the exercising of memory and imagination, who would want to write 50 novels in a year? Three, who would want to read 50 novels that had all the humanity of a mass-produced widget? The mind boggles. After all, what is the purpose of art but to forge human connection through creations that emanate from unique minds?

That said…

Financial services, writ large, are not art. Payment methods are not art. They are form and function, a means to an end. When we view AI as a tool by which better experiences can be created, underpinned by better data and more robust insight, we alight on worthy purposes for it.

Ditching AI is simply a non-starter in the business world, and certainly in the arenas of financial services and payments. For reasons competitive and evolutionary, companies must be actively developing applications that leverage AI for the good of the enterprise and its customers. “Good,” of course, is open to debate, as most anything is these days, and the word certainly does a lot of work in the foregoing sentence. But “good” is achievable when AI is positioned as a tool and not as a shortcut or an insufficient replacement.

Maintaining that ideal is, or should be, the province of human beings who presumably have the perspective, wisdom, and restraint to keep AI model development in the background until it’s ready for public-facing applications. When a bot spews inaccuracies or declares an emotion it’s incapable of having, it’s not just a technological failure. It’s a direct hit on public confidence in the technology.

And that’s not good for anybody.

The Problem with Startups: Fintechs Face a New Future
https://www.paymentsjournal.com/the-problem-with-startups-fintechs-face-a-new-future/ (Fri, 24 May 2024)

Whatever happened to fintech startups? Dollars, launches, exits, and up rounds were all hard to find in 2023 as founders and investors engaged in a wholesale restructuring of the fintech space. Some of the roles formerly played by startups have more or less permanently shifted to incumbents as previous rounds of acquisition have brought capabilities in-house.

A report from Javelin Strategy & Research titled Fintech Investment Trends: Waiting for the Next Wave looks at where the fintech industry is headed. Christopher Miller, Javelin’s Lead Analyst of Emerging Payments and a co-author of the study along with Co-Head of Payments James Wester, explored why fintech startup money has dried up—and why the artificial intelligence revolution may be quieter than some think.

The Shifting AI Landscape

New and emerging fintechs are focused on different business opportunities than the previous generation of consumer-facing companies pursued. The distinction between fintech and incumbent is blurring as former categories of differentiation, such as customer experience or specific product features, disappear.

“This new generation of startups is much more likely to be financed by existing incumbents in the first place, which makes them not really startups in the technical sense,” Miller said.

One place where startups are still thriving—and one of the key areas of investment focus for 2024—is the suddenly ubiquitous AI. Much of the hype around generative AI has centered on creating artwork through online platforms or on consumer interfaces like ChatGPT. But it’s becoming increasingly clear that generative AI’s impact will be much greater behind the scenes.

Miller thinks that one problem with these splashier applications is that they will be hard to monetize. “I don’t think a ton of people see generative AI as being primarily a direct-to-consumer play,” he said. “The impact for generative AI is going to be on the back end. It is most likely that generative AI would impact payments or financial services through services provided by existing providers. For example, Visa may leverage generative AI to improve or change the way certain services are offered. AI is a feature—it’s not the product.”

Instead of looking for consumer plays, fintechs are broadly focused on developing business-to-business services that can be sold to a relatively small number of enterprise customers. That remains much more sizable and lucrative than the consumer market.

Bringing Development In-House

Many organizations have some sort of venture fund allowing them to invest in startups. The goal can be to learn from the smaller competitors directly or to use that model to foster their own innovation. The major exception remains AI.

“When we saw Silicon Valley startups blowing up in the late 1990s and early 2000s, the idea of enterprise venture funds wasn’t well established,” Miller said. “The crazy stories about all the money getting thrown around, the big parties, and the weird, quirky culture of startups—all those stories are back for the generative AI companies.”

Although there will be deals within fintech, acquisitions aside from generative AI will continue to be smaller and rarer. The remnants of the previous generation of fintech products and infrastructure will remain “on sale” but won’t necessarily be great values. Increasingly, the technology of a failed direct-to-consumer fintech is worthless.

“Rather than looking to acquire a startup, an established business can stick lower cost engineers on the same problem,” Miller said. “There’s no point in buying somebody’s seven-year-old platform when it’s easier to develop solutions in-house.”

An Unforgiving Economy

The development of new and exciting startups is as much about the environment that nurtures them as it is about the insight of their founders. The current economic landscape, particularly high interest rates that have shown no signs of abating, has had a strong influence on the timing and scope of M&A activity. The expectation that rates may come down in 2024 and 2025 may do as much as anything else to delay a number of deals.

“Sometimes we like to think of innovation as a thing that happens because brilliant people are thinking brilliant thoughts,” Miller said. “And that might be true. But what happened in the first wave of the dot-com boom was that somebody walked around with a money gun and fired it at everything that was moving. That’s not happening anymore.”

Mastercard Deploys AI to Combat Credit Card Fraud
https://www.paymentsjournal.com/mastercard-deploys-ai-to-combat-credit-card-fraud/ (Thu, 23 May 2024)

Mastercard is using artificial intelligence to detect compromised credit cards faster and intercept card data before it ends up in the hands of cybercriminals. Generative AI can cross-reference compromised credit card data with geographical clues to pinpoint breached cards so the company can replace them.

Mastercard’s tool can also do the reverse: AI can scour bad card data to identify compromised merchants or payment platforms, and the tech is touted as more effective than human-driven methods like database queries. The credit card giant announced that AI will play a substantial role in its latest software rollout.
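
The “reverse” analysis described above resembles classic common-point-of-purchase detection: given cards known to be compromised, find the merchant they share. A minimal sketch on invented data—Mastercard’s production models are proprietary and far richer:

```python
from collections import defaultdict

# (card_id, merchant) purchase history; illustrative data only.
transactions = [
    ("card1", "gas_station_A"), ("card1", "grocer_B"),
    ("card2", "gas_station_A"), ("card2", "cafe_C"),
    ("card3", "gas_station_A"), ("card3", "grocer_B"),
    ("card4", "grocer_B"),
]
compromised = {"card1", "card2", "card3"}  # cards seen for sale on the dark web

# Count the distinct compromised cards each merchant handled.
cards_at = defaultdict(set)
for card, merchant in transactions:
    if card in compromised:
        cards_at[merchant].add(card)

likely_breach_point = max(cards_at, key=lambda m: len(cards_at[m]))
print(likely_breach_point)  # gas_station_A: all three compromised cards shopped there
```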

“It’s no surprise that AI is being leveraged to analyze credit and debit card compromises,” said Kevin Libby, Fraud and Security Analyst at Javelin Strategy & Research. “AI is well-fit to the task and will, no doubt, increase the speed of analyses and allow card issuers to get ahead of criminal activity and block and reissue cards faster, minimizing fraud losses.”

The Dark Web

It’s estimated that billions of credit and debit card numbers are available to cybercriminals on the dark web. Much of that data was obtained through breaches, but a substantial amount was pilfered by card skimmers—devices criminals covertly install at point-of-sale terminals or ATMs to record card numbers.

Customers often don’t know their cards have been compromised, and a breach can go undetected for weeks or longer. Criminals may sell the card data on the dark web, creating a delay between the compromise and the moment the card is fraudulently charged. Mastercard hopes AI will identify the compromise before that happens, but the new program could have growing pains.

“A not-so-easily solved problem with proactively blocking payment cards is the risk of overreacting and blocking cards that weren’t exposed during the compromise being assessed,” said Libby. “Since reissuing new payment cards comes at a cost to card issuers, it’s important to fine-tune analyses so the tools correctly identify all compromised cards while minimizing false positives.”

Pros Outweigh the Cons

The news comes on the heels of an announcement that Mastercard and Salesforce will be joining forces to battle fraudulent chargebacks. The effort also leverages AI to identify patterns from massive amounts of credit card data. While there will undoubtedly be some hiccups in both AI implementations, in the long run, the pros will likely outweigh the cons.

“So long as the AI models employed incorporate feedback about which blocked cards are and are not eventually used by a criminal, I’m confident the models can be quickly honed to reduce false positives, block compromised cards sooner, and reduce losses for all parties involved,” Libby said.
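
The feedback loop Libby describes can be sketched as threshold tuning: use observed outcomes for scored cards to pick the blocking threshold that best balances needless reissues against missed fraud. Costs and scores below are invented for illustration.

```python
def tune_threshold(scores, was_fraud, reissue_cost=5.0, fraud_cost=500.0):
    """Pick the blocking threshold with the lowest expected cost."""
    best = None
    for t in sorted(set(scores)):
        cost = 0.0
        for s, fraud in zip(scores, was_fraud):
            if s >= t and not fraud:   # false positive: needless reissue
                cost += reissue_cost
            elif s < t and fraud:      # false negative: fraud loss
                cost += fraud_cost
        if best is None or cost < best[1]:
            best = (t, cost)
    return best

scores    = [0.1, 0.4, 0.5, 0.7, 0.9, 0.95]          # model risk scores per card
was_fraud = [False, False, True, False, True, True]  # observed outcomes
print(tune_threshold(scores, was_fraud))  # (0.5, 5.0): one reissue, no missed fraud
```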
