Hackers and cybersecurity professionals are both leveraging AI technologies to create more effective and sophisticated cyberattacks and defenses, respectively. Hackers use AI for brute force attacks, deepfake scams, and spear phishing, while cybersecurity professionals use AI to detect and respond to threats in real time and to identify malicious emails and network activity.
How Hackers and Cybersecurity Pros Leverage AI to Attack and Defend Businesses
AI-powered technologies are revolutionizing business and industry from new in-demand goods and services to substantially improved operations. But even as businesses leverage AI to compete, hackers leverage these tools to penetrate corporate and government networks more effectively.
Cybersecurity professionals, too, have taken advantage of AI to provide better safeguards from external threats. Both sides have developed tools that afford them impressive (or frightening) efficiency. But even though these tools are still in their infancy, does one side have an edge in this new arms race? Who has the advantage: hackers or cybersecurity professionals? Let’s look at each side’s new tech to answer these questions.
Hackers and scammers have kept abreast of AI and used it to enhance existing tools and develop new ones. Among the most commonly used new tools are:
Brute Force Attacks
Hackers already employ brute force methods to crack passwords. They use apps that build lists of likely passwords and try them automatically, far faster than a human could enter them manually. Some of these tools leverage generative adversarial networks (GANs), a machine learning framework in which one model generates data resembling its training set while a second model learns to tell generated data from real data, pushing the generator to produce ever more convincing output.
GANs have many legitimate applications in design, gaming, medicine, search, and image processing, among other fields. But they can also generate millions of possible passwords per second, allowing hackers to crack open emails, networks, and devices to steal financial and sensitive information with relative ease.
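The core idea behind learned password guessing can be illustrated with a much simpler stand-in than a GAN: a character-level Markov model trained on a corpus of leaked passwords that emits new candidates statistically resembling the training set. This is a toy sketch for illustration only; the corpus, function names, and parameters are invented, and real cracking tools are vastly faster and more sophisticated.

```python
import random
from collections import defaultdict

def train_model(passwords, order=2):
    """Record which character follows each length-`order` prefix
    across a corpus of (hypothetical) leaked passwords."""
    model = defaultdict(list)
    for pw in passwords:
        padded = "^" * order + pw + "$"   # ^ marks start, $ marks end
        for i in range(len(padded) - order):
            model[padded[i:i + order]].append(padded[i + order])
    return model

def generate(model, order=2, max_len=16):
    """Emit one candidate password that resembles the training corpus."""
    out = "^" * order
    while len(out) < max_len + order:
        choices = model.get(out[-order:])
        if not choices:
            break
        nxt = random.choice(choices)
        if nxt == "$":                    # reached a learned end-of-password
            break
        out += nxt
    return out[order:]

# Fictional mini-corpus standing in for a real password leak.
corpus = ["password1", "passw0rd", "letmein", "dragon123", "sunshine1"]
model = train_model(corpus)
random.seed(0)
candidates = {generate(model) for _ in range(50)}
```

A real GAN-based tool learns far richer structure than these bigram counts, but the workflow is the same: train on leaked credentials, then generate millions of plausible guesses to feed into an automated login or hash-cracking loop.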
Deepfake Scams and Fraud
GANs have been used to recreate images of deceased historical figures, develop imaging of body parts and organs for medical use, create photographs and videos of real people, and create hypothetical pictures of people at various ages. However, this technology can also be used to create false images and audio of real people, known as deepfakes, which scammers are already putting to work.
The possibilities are enormous – and terrifying. Think about it. You could receive a video message from a loved one who details an emergency and asks for money to deal with it. Or you could receive a video message of your boss asking you to make a purchase using the company credit card for a supposedly legitimate business expense.
There are already several documented cases of deepfake-driven fraud. Tesla CEO Elon Musk, Binance Chief Communications Officer Patrick Hillman, and others have recently had their likenesses imitated by scammers, who used the deepfakes to solicit payments for bogus cryptocurrency promotions. And at least one company has fallen victim to what’s known as a vishing scam, in which faked audio was used to convince a company executive to transfer corporate funds to scammers.
Spear Phishing
The vishing example above is also a case of spear phishing. Spear phishing is a personalized phishing attack, a fraudulent digital message tailored to dupe a specific recipient into revealing sensitive information, and it is often deployed against high-value targets. Company employees, especially those in financial departments, may receive emails at their work accounts directing them to move company funds to an online vendor or account, one that turns out to be fraudulent.
Scammers often use more elaborate phishing attacks against company executives, high-net-worth individuals, celebrities, and other wealthy people. With a spear phishing attack, a scammer could potentially net more money from a single victim than from multiple working-class ones. But whether these attacks are deployed against company accountants or high-net-worth individuals, they typically require significant research.
GANs help scammers considerably reduce the time it takes to assemble likely email addresses of intended targets. Using this information, scammers can significantly scale up their spear-phishing operations.
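One small, concrete piece of that research step is enumerating likely corporate email addresses from a target's name, something both attackers and pen testers automate. The sketch below is illustrative only; the pattern list and the example name and domain are invented, not drawn from any real operation.

```python
def candidate_addresses(first, last, domain):
    """Enumerate common corporate email-address patterns for one person.
    The patterns here are illustrative guesses, not an exhaustive list."""
    f, l = first.lower(), last.lower()
    patterns = [
        f"{f}.{l}",      # jane.doe
        f"{f}{l}",       # janedoe
        f"{f[0]}{l}",    # jdoe
        f"{f}_{l}",      # jane_doe
        f"{l}.{f}",      # doe.jane
        f"{f[0]}.{l}",   # j.doe
        f,               # jane
    ]
    return [f"{p}@{domain}" for p in patterns]

addrs = candidate_addresses("Jane", "Doe", "example.com")
print(addrs)
```

Generating the list is trivial; the AI-assisted part is doing this at scale across thousands of scraped names and then ranking which patterns a given organization actually uses.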
Hackers and scammers aren’t the only ones using AI. Cybersecurity professionals working for businesses, nonprofits, government organizations, and managed security service providers (MSSPs) also use AI to shore up employer and client defenses. Some of the more notable innovations include:
Spam and Phishing Detection
AI and machine learning can be leveraged to recognize spam emails, phishing attempts, and even deepfake audio and video. Cybersecurity professionals are enhancing existing spam filters and firewalls with these technologies to make them more effective at identifying suspicious messages and less reliant on human detection.
People are typically the weak link in a company’s cybersecurity defenses, even when they have received some cybersecurity awareness training. But with smarter filters, you’re less dependent on employees staying off questionable sites while on work devices, vetting each email, and reporting (rather than ignoring) suspicious messages.
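The statistical core of many spam filters can be sketched in a few dozen lines: a naive Bayes classifier that learns which words appear more often in spam than in legitimate mail. This is a minimal from-scratch sketch on an invented toy corpus; production filters train on enormous datasets and weigh many non-text signals (headers, URLs, sender reputation) as well.

```python
import math
from collections import Counter

class NaiveBayes:
    """Tiny bag-of-words spam classifier for illustration only."""
    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.label_counts = Counter()

    def fit(self, messages, labels):
        for text, label in zip(messages, labels):
            self.label_counts[label] += 1
            self.word_counts[label].update(text.lower().split())

    def predict(self, text):
        vocab = set(self.word_counts["spam"]) | set(self.word_counts["ham"])
        total_msgs = sum(self.label_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values())
            # log prior + Laplace-smoothed log likelihood of each word
            score = math.log(self.label_counts[label] / total_msgs)
            for w in text.lower().split():
                score += math.log(
                    (self.word_counts[label][w] + 1) / (total + len(vocab))
                )
            scores[label] = score
        return max(scores, key=scores.get)

clf = NaiveBayes()
clf.fit(
    ["win a free prize now", "urgent wire transfer required",
     "claim your free gift",
     "meeting moved to 3pm", "quarterly report attached", "lunch on friday?"],
    ["spam", "spam", "spam", "ham", "ham", "ham"],
)
print(clf.predict("free prize waiting"))  # → "spam" on this toy corpus
```

Modern AI-enhanced filters go well beyond word counts, but the principle is the same: score each message against learned models of malicious and benign traffic rather than relying on hand-written rules.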
Real-time Threat Detection and Analysis
AI isn’t just useful for identifying malicious emails. It can be taught to recognize various patterns in network activity that signal a cyberattack. Network traffic monitoring tools are now being enhanced with AI to help identify anomalous activity and assist with threat identification and automatic responses. Some more advanced cybersecurity systems use AI to anticipate and counter an attacker’s moves in real time. And in some cases, these systems will alert staffers that an attack is happening and have already taken multiple steps to counter it and safeguard critical systems by the time the first staffer can log on.
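The simplest version of such anomaly detection is statistical: learn a baseline for a network metric and flag samples that deviate wildly from it. The rolling z-score detector below is a minimal stand-in for the far richer models inside commercial network-monitoring tools; the class name, thresholds, and traffic numbers are all invented for illustration.

```python
import statistics
from collections import deque

class AnomalyDetector:
    """Flag a metric sample that sits far outside its recent baseline."""
    def __init__(self, window=30, threshold=3.0):
        self.history = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold           # alert above this z-score

    def observe(self, value):
        if len(self.history) >= 5:           # need a minimal baseline first
            mean = statistics.mean(self.history)
            stdev = statistics.pstdev(self.history) or 1e-9
            anomalous = abs(value - mean) / stdev > self.threshold
        else:
            anomalous = False
        self.history.append(value)
        return anomalous

# Hypothetical per-minute outbound connection counts for one host.
det = AnomalyDetector()
baseline = [100, 103, 98, 101, 99, 102, 97, 100, 104, 96]
alerts = [det.observe(v) for v in baseline]   # normal traffic: no alerts
spike_alert = det.observe(950)                # sudden surge, e.g. data exfiltration
```

An AI-driven system layers many such signals (per-host, per-protocol, per-time-of-day) and can trigger automated containment, such as isolating the host, the moment the combined score crosses a threshold.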
These systems are also incredibly helpful in today’s environment, given the decentralized nature of organizational computing infrastructure. Many businesses have fully or partially remote employees who may be accessing corporate networks across cities, states, and continents. Further, many businesses have embraced the Internet of Things (IoT) and have more devices connected online than ever before. From commercial fleet vehicles to plant equipment, corporate networks now have so many endpoints that they are much harder to monitor and defend manually. AI-powered technologies are essential in this regard.
Penetration Testing
Prevention is also critical to cybersecurity efforts. Businesses must identify network vulnerabilities and remediate them before hackers can exploit them. Penetration testing, or pen testing, can be extremely time-consuming and labor-intensive, especially given that today’s businesses deal with dozens of technologies, devices, and IP addresses.
However, AI-powered tools can perform much of the heavy lifting. These tools can help gather threat intelligence and publicly available information about potential targets, analyze target vulnerabilities, and develop multiple lines of attack. AI can also help pen testers simulate real hacking attempts by executing these lines of attack simultaneously or in sequence.
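One small, automatable slice of that vulnerability analysis is cross-referencing a discovered service inventory against advisory data. The sketch below uses entirely fictional hosts, versions, and advisories; real tooling would pull live scan results and a maintained vulnerability database rather than hard-coded dictionaries.

```python
# Fictional advisory data: service -> versions with published advisories.
KNOWN_VULNERABLE = {
    "openssh": {"7.4", "8.0"},
    "nginx":   {"1.16.0"},
}

# Fictional inventory, as produced by a network discovery scan.
inventory = [
    {"host": "10.0.0.5", "service": "openssh", "version": "8.0"},
    {"host": "10.0.0.7", "service": "nginx",   "version": "1.25.3"},
    {"host": "10.0.0.9", "service": "openssh", "version": "9.6"},
]

def find_exposures(inventory, advisories):
    """Return inventory entries whose version matches a known advisory."""
    return [
        item for item in inventory
        if item["version"] in advisories.get(item["service"], set())
    ]

for finding in find_exposures(inventory, KNOWN_VULNERABLE):
    print(f"{finding['host']}: {finding['service']} {finding['version']} needs patching")
```

AI-assisted pen-testing platforms extend this basic loop by prioritizing which exposures are actually reachable and chaining them into the simulated lines of attack described above.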
Ironically, hackers and scammers use AI tools to discover network vulnerabilities. So it’s up to cybersecurity professionals to deploy superior tools first and remediate issues quickly.
Who Has the Advantage?
While both sides have found impressive uses for AI, businesses hold some advantages. Some cybersecurity professionals are working alongside the minds developing new AI and machine learning technology, affording them early access to emerging innovations. Hackers often retrofit off-the-shelf AI technologies for their purposes, whereas cybersecurity professionals can help shape them from inception.
Further, cybersecurity professionals have blueprints for their own networks. They can substantially shore up their defenses by strategically deploying AI while proactively seeking and remediating their own weaknesses. By contrast, hackers have to find vulnerabilities without a guide to how a target operates, making it harder for them to penetrate businesses with advanced AI-powered systems that take a proactive approach to cybersecurity.
On the other hand, many businesses may be vulnerable due to simple economics. Advanced security systems can be expensive. Hackers can launch low-grade cyberattacks that can overwhelm and penetrate some defenses or dupe untrained employees. And if a business is considered critical infrastructure, it may find itself the target of nation-state and non-nation-state actors who have access to more sophisticated tools.
Hackers continue to use AI-driven cyberattacks successfully against various businesses and organizations. However, businesses that approach cybersecurity proactively and strategically while taking advantage of the latest AI-powered defense systems can be reasonably safe against the typical cyberattack.