The AI-Powered Cybersecurity Arms Race and its Perils


The advancement of artificial intelligence (AI) remains one of the most important technological achievements in recent history. The prominence and prevalence of machine learning and deep learning algorithms of all kinds, able to unearth and infer valuable conclusions about the world around us without being explicitly programmed to do so, have sparked both the imagination and the primordial fears of the general public.

The cybersecurity industry is no exception. It seems that wherever you turn, you can hardly find a cybersecurity vendor that doesn’t rely, to some extent, on Natural Language Processing (NLP), computer vision, neural networks, or other strains of technology that could be broadly categorised or branded as ‘AI’.

The benefits that come with AI-powered technologies, especially in the cybersecurity realm, are clearly visible and undoubtedly meaningful: they range from automating manual assignments, to differentiating between benign and malicious communication streams, to discovering and correlating highly elusive patterns and anomalies that power a plethora of cybersecurity detection and prevention mechanisms.
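
To make that last point more concrete, the sketch below shows one common flavour of such a detection mechanism: unsupervised anomaly detection over network-flow features. It is a minimal illustration, not any vendor’s actual implementation; the feature set, numbers, and contamination threshold are toy assumptions.

```python
# Minimal sketch: unsupervised anomaly detection over toy network-flow
# features (bytes sent, bytes received, duration). All values are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Mostly "benign" flows, plus a handful of exfiltration-like outliers.
benign = rng.normal(loc=[5_000, 20_000, 2.0], scale=[1_000, 4_000, 0.5], size=(500, 3))
suspicious = rng.normal(loc=[90_000, 500, 45.0], scale=[10_000, 200, 5.0], size=(5, 3))
flows = np.vstack([benign, suspicious])

# Fit an Isolation Forest and flag the most isolated (anomalous) flows.
model = IsolationForest(contamination=0.01, random_state=0).fit(flows)
labels = model.predict(flows)  # -1 = anomaly, 1 = normal
print("flagged flow indices:", np.where(labels == -1)[0])
```

Production systems, of course, work over far richer feature sets and combine many such models, but the principle is the same.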

The AI-powered cybersecurity arms race

These AI-powered technologies are able to automate, streamline, enhance, and augment security operations driven by human beings, and at times even replace them altogether. However, unlike human beings, technology possesses no inherent disposition and is neutral in essence. As such, AI algorithms could be exploited or weaponised by malicious actors to pursue their own objectives while offsetting the defenders’ edge. Some pundits even claim that the world is already in the midst of a full-blown AI-powered cybersecurity arms race.

As one ZDNet reporter envisions: “it’s possible that by using machine learning, cyber criminals could develop self-learning automated malware, ransomware, social engineering or phishing attacks. For example, machine learning could be employed to send out phishing emails automatically and learn what sort of language works in the campaigns, what generates clicks and how attacks against different targets should be crafted”. With leapfrog advancements in language models such as the introduction of OpenAI’s GPT-3 NLP neural network, these ominous predictions seem more plausible than ever.
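
The scenario in that quote, and its defensive mirror image, both rest on ordinary supervised text classification: a model learns which wording correlates with an outcome and scores new text accordingly. The toy sketch below points that machinery in the defensive direction, as a rudimentary phishing-language classifier; the subject lines and labels are entirely invented examples.

```python
# Toy sketch: a bag-of-words classifier that learns which wording is
# associated with phishing. Subject lines and labels are invented examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

subjects = [
    "Your invoice is attached", "Urgent: verify your account now",
    "Quarterly newsletter", "Password expires today - act now",
    "Team lunch on Friday", "Action required: payroll update",
]
labels = [0, 1, 0, 1, 0, 1]  # 1 = phishing-style wording

model = make_pipeline(CountVectorizer(), LogisticRegression()).fit(subjects, labels)

# Score unseen messages by the probability that their wording looks phishy.
candidates = ["Reminder: expense report due", "Urgent: confirm your password now"]
for text, p in zip(candidates, model.predict_proba(candidates)[:, 1]):
    print(f"{p:.2f}  {text}")
```

Inverted, the same loop (generate variants, observe which ones work, retrain) is essentially what the quoted attack scenario describes.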

One could imagine that, on the heels of the progress made through the deployment of Generative Adversarial Networks (GANs), the ability to create synthetic data that reliably mimics human-generated content could usher in a new era of deepfake-powered spear-phishing, Business Email Compromise (BEC), and fraud campaigns.
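
For readers unfamiliar with the underlying mechanism, the following is a deliberately tiny sketch of the adversarial training loop behind GANs, fitted to one-dimensional toy data rather than images or text. The architectures and hyperparameters are illustrative assumptions only.

```python
# Minimal GAN sketch: a generator learns to produce samples the discriminator
# cannot distinguish from "real" data. Toy one-dimensional data throughout.
import torch
import torch.nn as nn

real_data = torch.randn(1024, 1) * 0.5 + 3.0  # "real" samples ~ N(3, 0.5)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())  # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: real samples labelled 1, generated samples labelled 0.
    fake = G(torch.randn(64, 8)).detach()
    real = real_data[torch.randint(0, 1024, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    fake = G(torch.randn(64, 8))
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# The generated distribution should drift toward the real one (mean near 3).
print("generated sample mean:", G(torch.randn(512, 8)).mean().item())
```

The same two-player dynamic, scaled up to deep convolutional or transformer-based networks, is what makes convincingly synthetic faces, voices, and prose possible.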

The challenges on the way to AI-powered cybersecurity

As organisations strive to leverage AI models to cope with these challenges and safeguard their systems and business operations, certain obstacles and pitfalls lie ahead that might hinder the progress of these AI-powered cybersecurity efforts. The following challenges should be taken into account, regardless of whether AI-powered threat actors already pose an imminent threat to enterprises around the world:

  • AI Bias – A known adage in the data science community is that your AI model is only as good as the data it is fed. “If relevant datasets are not accumulated, prepared, and sampled in a calculated fashion, the subsequent artificially-generated AI models will be inherently biased and generate prejudiced results. Bias-in, bias-out,” I mentioned in my Forbes article. Thus, in many cases, algorithms actually echo and amplify existing misrepresentations embedded in the training datasets instead of eliminating them (a toy illustration follows below).
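
A hypothetical illustration of that bias-in, bias-out dynamic: train a classifier on a sample in which one class was under-collected, and the resulting model echoes the skew back at prediction time. The data below is synthetic toy data.

```python
# Toy sketch of "bias-in, bias-out": a skewed training sample yields a model
# that under-predicts the under-represented class, even on balanced data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Ground truth: two equally common classes, separable by a single feature.
X = rng.normal(size=(2000, 1))
y = (X[:, 0] > 0).astype(int)

# Biased collection: keep every class-0 record but only ~5% of class 1.
keep = (y == 0) | (rng.random(2000) < 0.05)
model = LogisticRegression().fit(X[keep], y[keep])

print("share of class 1 in reality:    ", y.mean())
print("share of class 1 in predictions:", model.predict(X).mean())
```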

Arguably, this type of systemic bias might be less relevant in the cybersecurity realm, where decision-making processes are based predominantly on the analysis of machine-to-machine communications and code rather than traditional human language. Notwithstanding, as social engineering…



Read More: The AI-Powered Cybersecurity Arms Race and its Perils
