AI Cyber Attacks: What They Are and How to Safeguard Against Them
Written by: East Carolina University® • Aug 26, 2025

Cybersecurity and the threat of cyber attacks are among the top concerns for businesses across virtually every industry, as well as for public sector institutions. According to a 2024 report from the U.S. Chamber of Commerce, 60% of small businesses cited cybersecurity threats as their main concern, outranking theft (42%), natural disasters (39%), and even another pandemic (54%).
Amid this threat landscape, an increasing number of cyber attacks now use artificial intelligence (AI) and machine learning algorithms to target sensitive data and critical systems.
Just how widespread are AI-powered cyber attacks? Nearly 90% of security professionals worldwide say their organization has encountered an AI cyber attack within the last year, according to SoSafe’s 2025 Cybercrime Trends report. Also, more than 90% of security experts anticipate a substantial surge in AI-driven cyber threats over the next few years.
Despite the growing threat these attacks pose, many cybersecurity professionals and organizations say they’re inadequately prepared to defend themselves. In a 2024 survey of nearly 1,800 security experts conducted by Darktrace, a British cybersecurity company, about 60% stated that their organizations weren’t equipped to defend against AI-powered threats.
Fortunately, businesses and security professionals are increasingly aware of the scale of these threats, and the tools and techniques for combating AI cyber attacks—including AI-powered cybersecurity solutions—are becoming more sophisticated.
What Are AI Cyber Attacks?
Cyber attacks are attempts by hackers to infiltrate computer networks or systems to modify, steal, destroy, expose, or hold hostage sensitive data. AI-enabled cyber attacks employ AI and machine learning algorithms and techniques to enhance the speed, scale, customization, and stealth of these digital intrusions. By using AI, hackers can execute attacks more efficiently, often with minimal human involvement and greater precision in targeting vulnerabilities.
AI cyber attacks generally have five main characteristics:
- Attack Automation: In the past, cyber attacks relied heavily on human involvement to execute certain tasks. However, the emergence of AI tools, especially generative AI (GenAI), has allowed cybercriminals to automate several aspects of an attack, including data gathering and analysis.
- Enhanced Reconnaissance: The initial phase of any cyber attack is intelligence gathering, during which hackers search for potential targets, system vulnerabilities, and exploitable assets. Using AI to automate or accelerate this work allows hackers to not only shorten this research phase but also potentially improve the accuracy and comprehensiveness of their analysis.
- Customization: AI can collect and analyze massive amounts of data, particularly from publicly available sources such as social media and company websites, much faster and more thoroughly than humans. In the context of a cyber attack, hackers can use this information to craft highly personalized and timely messages for phishing campaigns and other social engineering attacks.
- Adaptive Tactics: The algorithms that power AI cyber attacks continuously learn and evolve, providing real-time insights that allow hackers to refine their methods, evade detection, and potentially devise attack patterns that security systems cannot recognize.
- Strategic Targeting: Just as AI can customize messages based on public information, it can help identify potentially high-value targets within an organization: individuals with access to sensitive data, those with limited technical expertise, or those who have relationships with key personnel.
AI’s Role in Cybercrime Growth
It’s difficult to pinpoint exactly what percentage of modern cyber attacks leverage AI, but the share is growing dramatically. Some key figures illustrate the scope of the threat:
- Roughly 75% of security professionals reported a year-over-year rise in cyber attacks between 2022 and 2023, with 85% of them attributing this increase to malicious use of GenAI, according to a report from Deep Instinct, a cybersecurity firm.
- AI contributed to a 442% increase in incidents of voice phishing, or vishing (fraudulent phone calls or voice messages designed to trick people into providing sensitive information), in the first half of 2024, according to CrowdStrike’s 2025 Global Threat Report.
- Darktrace reported that social engineering attacks on its email customers grew by 135% in 2023, a trend it attributed to the widespread adoption and use of AI tools such as ChatGPT.
Driven in large part by the growing use of AI to supercharge the scale and sophistication of cyber threats, the cost of cybercrime is expected to climb substantially over the next few years. SoSafe projects that cybercrime will cost the global economy nearly $14 trillion annually by 2028, a 50% increase from 2024.
Types of AI Cyber Attacks
AI can enhance virtually all kinds of cyber attacks, making them faster, more targeted, and harder to detect. Below are some common types of AI-enabled cyber attacks.
Social Engineering Attacks
In the context of cybersecurity, a social engineering attack is a tactic in which cybercriminals attempt to manipulate individuals—corporate employees, government workers, healthcare providers—into divulging confidential information, transferring money, or granting access to a secure system.
AI-driven social engineering attacks use AI’s customization and strategic targeting capabilities to plan and devise personalized, highly convincing messages at a scale and speed far greater than human hackers ever could. Using AI algorithms, cybercriminals can:
- Identify high-value targets within an organization
- Create credible personas and matching online profiles to engage targets
- Develop plausible scenarios to get targets’ attention
- Craft tailored content, such as emails or audio recordings, to communicate with targets
Phishing Attacks
Phishing is a type of social engineering attack in which hackers pretend to be a trusted individual or institution to trick targets into revealing sensitive information, such as passwords, or performing other actions that compromise security, such as clicking on malicious links.
Cybercriminals can use AI to automate real-time communication with targets, enhancing the realism and effectiveness of phishing campaigns. For example, AI chatbots can pose as customer support representatives, communicating with targets in a manner that’s almost indistinguishable from humans. AI can also be used to scale phishing attacks, engaging multiple targets simultaneously to boost the odds of a successful breach.
Deepfakes
Deepfakes, which can be used in social engineering attacks, involve AI-generated forgeries of trusted individuals’ voices or likenesses, presented as images, videos, or audio recordings. Hackers may deploy deepfakes to spread disinformation or smear someone’s reputation, or as part of a broader cyber attack.
The advancement and increasing accessibility of AI (specifically GenAI, which creates new content based on patterns from training data) have greatly contributed to the proliferation of deepfakes, with fraud cases involving deepfakes surging by 1,740% in North America alone between 2022 and 2023, according to a report from the World Economic Forum.
In one notable AI deepfake incident, a finance worker authorized a $25 million payment after a videoconference call that seemed to include the company’s chief financial officer and other staff members, only to discover that it was an AI-generated fabrication.
Adversarial AI
Adversarial AI refers to cyber attacks that seek to undermine AI or machine learning systems using misinformation or manipulation. One common tactic is data poisoning, in which hackers inject false or misleading information into the data that AI algorithms are trained on, compromising the accuracy and objectivity of the AI model’s output.
Another form of adversarial AI cyber attack is “evasion,” in which hackers subtly manipulate a trained AI model’s inputs so that it misclassifies them or makes faulty predictions. This tactic can be used, for example, to fool the image recognition systems in autonomous vehicles into misidentifying a stop sign, leading to potentially dangerous road situations.
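To make the data poisoning idea concrete, here is a minimal sketch using scikit-learn and purely synthetic data (every value and name below is invented for illustration). It trains the same simple classifier twice: once on clean labels and once after a simulated attacker has relabeled much of the “malicious” class as benign, which typically degrades the model’s accuracy on clean test data.

```python
# Toy illustration of label-flipping data poisoning (illustrative only; all data is synthetic).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Build a synthetic binary classification problem standing in for, say, "malicious vs. benign".
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def test_accuracy(train_labels):
    """Train a simple model on the supplied training labels and score it on clean test data."""
    model = LogisticRegression(max_iter=1000).fit(X_train, train_labels)
    return accuracy_score(y_test, model.predict(X_test))

print(f"Accuracy with clean training labels:    {test_accuracy(y_train):.3f}")

# Simulate a poisoning attack: the attacker relabels much of the "malicious" class as benign,
# teaching the model to wave those samples through.
poisoned = y_train.copy()
rng = np.random.default_rng(0)
malicious_idx = np.flatnonzero(poisoned == 1)
flipped = rng.choice(malicious_idx, size=int(0.6 * len(malicious_idx)), replace=False)
poisoned[flipped] = 0
print(f"Accuracy with poisoned training labels: {test_accuracy(poisoned):.3f}")
```

Real-world poisoning is usually far subtler than bulk label flipping, but the underlying principle is the same: corrupted training data quietly degrades the model, which is why validating the provenance and integrity of training data is a core defense against adversarial AI.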
How to Defend Against AI-Powered Cyber Attacks
An effective security strategy empowers organizations to uphold core cybersecurity principles such as data confidentiality and integrity. As the threat of AI-fueled cyber attacks rises, it will become even more imperative to safeguard critical systems and data.
AI is evolving rapidly, and the strategies cybersecurity teams adopt to counter AI-driven attacks will need to evolve with it. However, while the vast majority of IT security teams recognize that AI represents a growing threat, more than half worry their current defenses are insufficient, according to Darktrace.
Still, organizations can implement foundational security practices to establish a more resilient defensive posture and stay ahead of emerging risks.
1. Perform Regular Security Assessments
Routine assessments of network and system security are an essential aspect of effective cybersecurity. A comprehensive security platform that provides continuous monitoring, intrusion detection, and endpoint protection can be a valuable ally for security teams. By establishing baselines of normal system activity and user behavior, organizations can detect anomalies or sudden changes that may signal an attack.
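As a simple illustration of the baselining idea, the sketch below flags an observation that deviates sharply from previously recorded activity. The hourly login counts are hypothetical, and production monitoring platforms use far richer behavioral models, but the underlying logic (compare new activity against an established baseline and alert on large deviations) is similar.

```python
# Minimal sketch of baseline-based anomaly detection (illustrative, not production-ready).
# The counts below are hypothetical hourly login totals for a single user account.
from statistics import mean, stdev

baseline_window = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5, 7, 6]  # "normal" logins observed per hour
new_observation = 42                                      # a sudden burst of logins

mu = mean(baseline_window)
sigma = stdev(baseline_window)

# Flag the observation if it deviates from the baseline by more than 3 standard deviations.
z_score = (new_observation - mu) / sigma
if abs(z_score) > 3:
    print(f"ALERT: {new_observation} logins/hour (z = {z_score:.1f}) deviates from a baseline of ~{mu:.1f}")
else:
    print("Activity within normal baseline.")
```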
2. Develop an Incident Response Plan
An incident response plan outlines an organization’s protocols and assigned roles in the event of a cyber attack, AI-generated or otherwise. Incident response plans are generally structured around four key areas, as defined by the National Institute of Standards and Technology (NIST) and illustrated in a brief code sketch after the list:
- Preparation: Create and regularly update the response plan, clearly define roles and responsibilities, maintain a well-trained staff, and ensure tools and infrastructure remain current.
- Detection and Analysis: Identify and validate potential threats or attacks, including their size and type; prioritize incidents based on business impact; document actions taken; and notify relevant stakeholders.
- Containment and Eradication: Isolate affected systems or compromised assets to minimize the impact of an attack, remove the threat, restore operations, and preserve evidence.
- Post-Incident: Hold cross-functional meetings involving relevant parties to review timelines, performance, and procedural gaps and identify areas for improvement; implement additional security measures; and update policies and procedures accordingly.
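One way teams make such a plan actionable is to capture its phases, owners, and actions in a structured, machine-readable form that runbooks and tooling can consume. The minimal sketch below uses hypothetical role names and actions and loosely follows the NIST areas above; it is one possible representation, not a prescribed format.

```python
# Minimal sketch: representing an incident response plan's phases as data,
# loosely following the NIST areas described above. All names and actions are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Phase:
    name: str
    owner: str                              # role accountable for this phase
    actions: list[str] = field(default_factory=list)

incident_response_plan = [
    Phase("Preparation", owner="Security Manager",
          actions=["Review plan quarterly", "Run tabletop exercises", "Keep tooling patched"]),
    Phase("Detection and Analysis", owner="SOC Analyst",
          actions=["Validate alerts", "Classify severity by business impact", "Notify stakeholders"]),
    Phase("Containment and Eradication", owner="Incident Commander",
          actions=["Isolate affected hosts", "Remove the threat", "Preserve evidence"]),
    Phase("Post-Incident", owner="Security Manager",
          actions=["Hold a retrospective", "Update policies and controls"]),
]

for phase in incident_response_plan:
    print(f"{phase.name} (owner: {phase.owner})")
    for action in phase.actions:
        print(f"  - {action}")
```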
3. Raise Employee Awareness
Human error is a leading cause of cyber attacks, contributing to 95% of data breaches in 2024, according to a report. Misusing credentials, clicking on malicious links, and failing to follow protocols can all lead to serious security incidents.
Staff training and ongoing awareness programs are crucial to effective cybersecurity, particularly in the age of AI. This extends not only to security teams but also to general personnel. Training employees to recognize AI-enabled attacks, such as phishing emails and deepfakes, can help them avoid falling victim to these threats. Security training not only helps strengthen defenses but also helps organizations meet industry and regulatory compliance requirements.
Using AI to Prevent an AI Cyber Attack
Many organizations are deploying AI-powered cybersecurity tools to counter AI-driven cyber attacks. Morgan Stanley recently reported that the market for AI-based cybersecurity products is expected to grow from about $15 billion in 2021 to about $135 billion by 2030.
In the same way that hackers harness AI to augment their attacks, organizations can leverage AI for a range of cybersecurity purposes:
- Automating Routine Tasks: Using AI for certain functions, such as security log audits and vulnerability scans, allows security professionals to focus on more strategic work.
- Enhanced Data Analysis: AI’s ability to analyze massive amounts of data allows security teams to more easily detect patterns and indicators of compromise in real time.
- Improved Threat Detection: AI can analyze attacker tactics and IP addresses (the unique numeric identifiers assigned to devices on a network) to identify and neutralize specific threats. It can also scan email messages and attachments to block phishing attempts and other types of social engineering attacks before they escalate; a simplified sketch of this idea appears after this list.
- Adaptation: Just as hackers exploit AI’s rapid learning capability to refine their techniques and enhance their attacks, organizations can harness adaptive AI to glean more accurate insights and better identify and counter emerging threats.
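To ground the threat detection idea, here is a deliberately tiny sketch of text-based phishing classification built with scikit-learn. The example messages and labels are invented, and real email security products rely on far richer signals (headers, URLs, attachments, sender reputation) and vastly more data; this shows the general shape of the approach rather than a working defense.

```python
# Toy sketch of ML-based phishing detection (illustrative only; real products use
# far richer signals and far more training data). Assumes scikit-learn is installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples: 1 = phishing, 0 = legitimate.
emails = [
    "Urgent: verify your account now or it will be suspended",
    "Your invoice for last month is attached, let me know if you have questions",
    "You have won a prize, click this link to claim your reward",
    "Meeting moved to 3pm tomorrow, same conference room",
]
labels = [1, 0, 1, 0]

# Vectorize the message text and train a simple classifier in one pipeline.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(emails, labels)

# Score a new, unseen message.
incoming = ["Please confirm your password immediately to avoid account closure"]
probability = classifier.predict_proba(incoming)[0][1]
print(f"Estimated phishing probability: {probability:.2f}")
```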
Organizations can also use GenAI tools to create realistic attack simulations, allowing security teams to test their defenses and uncover vulnerabilities in advance of a real attack.
Leveraging AI as part of a broader cybersecurity strategy does carry some risk. Security experts warn that in-house AI solutions can leave organizations exposed to new and innovative attacks, with cybercriminals potentially exploiting a company’s own AI to steal sensitive data or circumvent defenses. For example, many organizations use AI chatbots internally to answer employee questions and provide assistance. Hackers can co-opt these bots to gain access to confidential information.
Organizations must implement robust security measures—routine vulnerability assessments, rigorous access management, and comprehensive incident response plans—to ensure that their AI tools bolster defenses rather than introduce weaknesses.
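Those measures can extend to technical guardrails around internal AI assistants themselves. The sketch below is entirely hypothetical (the patterns and policy are not drawn from any particular product) and shows the general shape of an output filter that withholds responses containing credential-like or personally identifiable strings; in practice this would be one layer alongside access controls, data loss prevention, and monitoring.

```python
# Minimal sketch of an output guardrail for an internal AI chatbot (illustrative only).
# The patterns and policy are hypothetical; real deployments combine access controls,
# data loss prevention tooling, and monitoring rather than a single filter.
import re

# Patterns that suggest a response is about to leak sensitive material.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                   # U.S. Social Security number format
    re.compile(r"(?i)\b(api[_-]?key|password)\b\s*[:=]"),   # credential-style strings
]

def guard_response(model_output: str) -> str:
    """Return the model's answer only if it passes the sensitive-data checks."""
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(model_output):
            return "Response withheld: it appears to contain sensitive information."
    return model_output

# Example usage with hypothetical model responses.
print(guard_response("The VPN password: hunter2 is stored in the shared vault."))
print(guard_response("Our PTO policy allows 20 days per year."))
```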
Staying Ahead of AI-Powered Cyber Threats
As AI continues to amplify the speed, scale, and sophistication of cyber attacks—from automated reconnaissance to highly targeted social engineering campaigns—organizations can’t afford to rely on outdated defenses. By combining heightened vigilance and thorough incident response planning with AI-enhanced monitoring and threat intelligence, security teams and their organizations are better equipped to combat AI-assisted cybercrime.