
Is AI Cybersecurity’s Double-Edged Sword?

March 28, 2023

Artificial Intelligence (AI) has certainly become the buzzword du jour with the rollout of popular image generators such as Midjourney and Stable Diffusion, and text-generating interfaces like OpenAI’s ChatGPT, which produces outputs of all types, from poetry to complex blocks of code and even full-length books. AI is increasingly employed in business technologies as a time-saving tool for automating and accelerating complex, time-consuming tasks. It is also becoming more prevalent in Customer Experience, where it boosts self-service, supports contact center agents, saves staff time, and reduces overhead for organizations.

While AI has clear potential for many positive applications, what about the risks of nefarious use by those who would turn it to cybercrime? To close out Fraud Prevention Month, let’s investigate both sides of this intriguing subject.

Some AI Basics

Before we touch on cybersecurity, there are a few basics about AI that are key to understand. While AI represents a revolution in computing technology, it’s important to remember that it is based on algorithms and does not think for itself, at least not in the same critical or intuitive manner that humans do. Its output is based solely on what it has been fed, and unfortunately that includes fabrication as well as credible information. It can, however, absorb an astounding amount of data within a brief period, then process and extrapolate outputs at lightning-fast speed.

Unless we specifically inform an AI entity of the difference between fact and fiction, it simply won’t know, and any responses it delivers may be true, false, or a mixed bag of both. The same holds for the visual creations that AI generators pump out, as they amalgamate and merge existing art images and styles to generate some admittedly astounding results.

AI and Machine Learning have also been put to work in cybersecurity, helping to identify and neutralize threats in real time. But is this the dawn of a new golden age of technology, or a double-edged sword with as many risks as benefits?

Benefits of AI in Cybersecurity

AI stands at the cusp of a new revolution in technology, with numerous positive applications in cybersecurity and fraud prevention. Thanks to its relative autonomy and advances in machine learning, AI can sift vast amounts of data in real time, enabling it to quickly identify and neutralize cyber attacks and thus prevent or minimize damage. SEON reports that computer scientists at the University of Jakarta found that machine learning algorithms were able to achieve up to 96% accuracy in identifying fraud for eCommerce businesses.
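To make that concrete, here is a minimal sketch of the kind of supervised fraud classifier such studies evaluate, written in Python with scikit-learn. The feature names, the labeling rule, and the data are illustrative assumptions for demonstration, not details from the study.

```python
# A toy supervised fraud classifier. The features and the rule used to
# label the synthetic data are assumptions made up for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour_of_day, account_age_days, recent_orders]
X = rng.random((5000, 4)) * np.array([5000, 24, 3650, 50])
# Toy ground truth: large purchases from very new accounts count as "fraud"
y = ((X[:, 0] > 3000) & (X[:, 2] < 200)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {accuracy_score(y_test, model.predict(X_test)):.1%}")
```

Real systems train on labeled transaction histories rather than synthetic data, but the workflow is the same: learn from past examples, then score new transactions as they arrive.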

Automating routine tasks, such as monitoring systems, scanning networks for vulnerabilities, logging, and reporting analytics, greatly reduces manual review, freeing human security teams to focus on areas where human intervention is required, like reviewing alerts and responding to incidents. The sketch below illustrates the pattern.
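As a simple illustration of that scan-log-alert pattern, here is a minimal Python sketch that checks a handful of common ports on a host and logs any that are open. The port list and target host are assumptions for the example; only run it against systems you are authorized to scan.

```python
# A toy scheduled-scan job: probe a watch list of ports and log findings
# so humans only need to look at the exceptions. Illustrative only.
import logging
import socket

logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

COMMON_PORTS = [21, 22, 23, 80, 443, 3389]  # illustrative watch list

def scan_host(host: str) -> list[int]:
    """Return the watched ports that accept a TCP connection on host."""
    open_ports = []
    for port in COMMON_PORTS:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

if __name__ == "__main__":
    for port in scan_host("127.0.0.1"):  # assumed target: your own machine
        logging.warning("Open port found: %d", port)
```

In practice a job like this runs on a schedule and feeds a reporting pipeline; the point is that the routine sweep happens without a human driving it.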

Improved accuracy in data analysis and predictive analytics is another area where AI shines. Because AI systems learn as they process more information, fraud detection produces progressively fewer false positives over time. The same capability feeds predictive analytics: AI can review past and current events, observe cybercrime patterns and methodologies, and make projections that help organizations anticipate and prevent attacks before they take place. A common building block here is anomaly detection, sketched below.
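One hedged sketch of that idea: model a baseline of normal behavior, then flag events that deviate from it. This example uses scikit-learn’s IsolationForest on synthetic login data; the features and numbers are assumptions for illustration.

```python
# Toy anomaly detection: learn "normal" login behavior, flag outliers.
# All data and thresholds here are fabricated for the sketch.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(7)

# Baseline events: [hour_of_day, megabytes_transferred]
normal = np.column_stack([rng.normal(13, 2, 1000),   # mid-day logins
                          rng.normal(20, 5, 1000)])  # modest transfers

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal)

# Two new events: one routine, one off-hours bulk transfer
events = np.array([[14.0, 22.0], [3.0, 400.0]])
for event, verdict in zip(events, detector.predict(events)):
    label = "ANOMALY" if verdict == -1 else "ok"
    print(f"hour={event[0]:>4.1f}  MB={event[1]:>6.1f}  -> {label}")
```

The second event, a 3 a.m. transfer far above the baseline, is exactly the kind of pattern such a model surfaces for human review.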

Cybersecurity Concerns

For all its potential to aid computer programmers in producing, error-checking, and debugging complex code snippets, AI could also be used for nefarious purposes like phishing scams, malware, and ransomware. While safeguards against misuse are in place and constantly tuned, there have been instances where users were able to trick the AI into producing output that contravenes those restrictions. In one case, a user succeeded in convincing ChatGPT that its content moderation features had been disabled, while another convinced it that they themselves were ChatGPT in order to work around these safeguards.

Recently, Check Point Research (CPR) found that Russian hackers have been attempting to bypass the regional restrictions that OpenAI set up for ChatGPT in order to access the technology. In another report, CPR notes evidence that cybercriminals have been endeavoring to use OpenAI to create new malicious tools for phishing, malware, information theft, and encryption (tools that could serve as ransomware by completely locking users out of a computer system without ever requiring direct access). While their findings also highlighted a distinct lack of programming knowledge on the part of some actors, they revealed the damaging potential of AI-enabled cyber threats in the hands of experienced programmers.

Phishing emails, chats, and website or chatbot spoofing are other potential avenues for the misuse of AI systems. The technology could be used to scan a company’s emails, websites, and chats to produce more convincing phishing emails or website spoofs that match the brand’s language and tone, making it more difficult to discern whether the content is genuine. For instance, a spoofed website could be built to contain an accurate version of an organization’s chatbot, or a chatbot on an existing site could be overwritten with a duplicate, endangering the privacy and security of customers. Alternatively, texts and emails could be rendered so realistically that even the most discerning eye for scams could be fooled into getting phished.

Each of these cases ties into the risks that come with a lack of human oversight. As task automation spreads, the perceived need for human oversight will shrink, which could lull some into a false sense of security about the technology’s fallibility as it improves and becomes ever more present in daily life.

The Silver Lining

If you’re feeling a bit alarmed by the doom and gloom of potential threats that cybercriminals pose with AI, take some solace in the fact that these risks are well on the radar of AI developers, who constantly improve safeguards to prevent this kind of misuse. And beyond the potential for low-level cybercriminals to compensate for their limited programming skills when executing scams, Aleksander Essex of Western University’s information security and privacy research lab suggests that AI output would not likely be very effective for high-level attacks. Those types of complex attacks require a level of thinking and intuition that computers still can’t match.

We’re eager to see how this technology will continue to evolve and how we can use it to improve the solutions we deliver to our customers. If you’re curious to learn more about how Connex delivers AI-enabled solutions in Customer Experience, Security, and Networking, reach out, and let’s chat.