PaymentsJournal
Microsoft’s AI Assistant Can Be Exploited by Cybercriminals

By Wesley Grant
August 9, 2024
in Analysts Coverage, Artificial Intelligence, Fraud & Security

Microsoft’s Copilot has been touted as a productivity enabler, but the AI assistant’s widespread use also exposes vulnerabilities that criminals can exploit.

At the Black Hat security conference, researcher Michael Bargury demonstrated five ways that Copilot, which has become an integral part of Microsoft 365 apps like Word and Outlook, can be manipulated by bad actors.

For instance, after a hacker gains access to a work email, they can use Copilot to mimic the user’s writing style, including emojis, and send convincing email blasts containing malicious links or malware.

“AI’s ability to assist criminals in writing code to scrape information from social media, paired with its ability to match the speech patterns, tone, and style of an impersonated party’s written communication—whether professional or personal—is an insidious combination,” said Kevin Libby, Fraud & Security Analyst at Javelin Strategy & Research. “When used conjointly, these abilities considerably increase the probability of success for a phishing or smishing operation. AI can even help to scale phishing attacks through automation.”

Poisoning Databases

Bargury demonstrated how a hacker with access to an email account can exploit Copilot to access sensitive information, like salary data, without triggering Microsoft’s security protections.

In other scenarios, he showed how an attacker can poison Copilot’s database by sending a malicious email and then steering Copilot into providing banking details. The AI assistant could also be maneuvered into furnishing critical company data, such as upcoming earnings call forecasts.

During the demonstration, Bargury largely used Copilot for its intended purpose, but he also introduced misinformation and gave Copilot misleading instructions to illustrate how easily the AI could be manipulated.

A Glaring Weakness

The demonstration highlighted a glaring weakness in AI: the mingling of secure corporate data with unverified external information. Copilot’s flaws raise concerns about AI’s rapid adoption across nearly every industry, especially in large organizations where employees frequently interact with the technology.

AI can also be one of the strongest tools in fraud detection, as it can help companies discover breaches much faster. Even so, the technology is clearly still maturing, which opens up opportunities for criminals.

“While AI tools promise innumerable benefits, they also pose significant risks,” Libby said. “Criminals can use AI tools to help them with everything from malicious coding of malware, to scraping social media accounts for PII and other information about potential targets to fortify social engineering attacks, to creating deepfakes of CEOs to scam organizations out of tens of millions of dollars per video or audio call.”

According to Wired, after the demonstration, Bargury praised Microsoft and said the tech giant worked hard to make Copilot secure, but he was able to discover the weaknesses by studying the system’s infrastructure. Microsoft’s leadership responded that they appreciated Bargury’s findings and would work with him to analyze them further.

Tags: Automated Fraud, Copilot, digital fraud, hackers, Microsoft