
The Dark Side of AI: Previewing Criminal Uses


Artificial Intelligence & Machine Learning, Next-Generation Technologies & Secure Development

Threats Include Social Engineering, Insider Trading, Face-Seeking Assassin Drones


November 20, 2020    

Advertisement for a real-time voice cloning tool (Source: “Malicious Uses and Abuses of Artificial Intelligence”)

“Has anyone witnessed any examples of criminals abusing artificial intelligence?”


That’s a question security firms have been raising in recent years. But a new public/private report into AI and ML identifies likely ways in which such attacks might occur – and offers examples of threats already emerging.


The most likely criminal use cases will involve “AI as a service” offerings, as well as AI enabled or supported offerings, as part of the wider cybercrime-as-a-service ecosystem. That’s according to the EU’s law enforcement intelligence agency, Europol, the United Nations Interregional Crime and Justice Research Institute – UNICRI – and Tokyo-based security firm Trend Micro, which prepared the joint report: “Malicious Uses and Abuses of Artificial Intelligence”.

AI refers to finding ways to make computers do things that would otherwise require human intelligence – such as speech and facial recognition or language translation. A subfield of AI, called machine learning, involves applying algorithms to help systems continually refine their success rate.

Defined: AI and ML (Source: “Malicious Uses and Abuses of Artificial Intelligence”)
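
To make that definition concrete, here is a minimal, hypothetical sketch (not taken from the report) of the machine-learning loop described above: a small classifier whose success rate improves as the algorithm is fed more labeled examples. It uses the open-source scikit-learn library purely for illustration.

    # Minimal illustration of machine learning: an algorithm refines its
    # success rate as it processes more training examples.
    # (Hypothetical example for illustration only; not from the report.)
    from sklearn.datasets import load_digits
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    X, y = load_digits(return_X_y=True)              # handwritten-digit images
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = SGDClassifier(random_state=0)
    classes = sorted(set(y_train))

    # Feed the data in small batches; accuracy on held-out data tends to
    # climb as the model keeps updating its internal parameters.
    for start in range(0, len(X_train), 200):
        batch_X = X_train[start:start + 200]
        batch_y = y_train[start:start + 200]
        model.partial_fit(batch_X, batch_y, classes=classes)
        print(f"after {start + len(batch_X):4d} examples: "
              f"test accuracy = {model.score(X_test, y_test):.2f}")

The printed accuracy typically climbs as more batches are processed, which is the "continually refine their success rate" behavior described above.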

Criminals’ Top Goal: Profit

If that’s the high level, the applied level is that criminals have never shied away from finding innovative ways to earn an illicit profit, be it through social engineering refinements, new business models or adopting new types of technology (see: Cybercrime: 12 Top Tactics and Trends).

And AI is no exception. “Criminals are likely to make use of AI to facilitate and improve their attacks by maximizing opportunities for profit within a shorter period, exploiting more victims and creating new, innovative criminal business models – all the while reducing their chances of being caught,” according to the report.

Thankfully, all is not doom and gloom. “AI promises the world greater efficiency, automation and autonomy,” says Edvardas Šileris, who heads Europol’s European Cybercrime Center, aka EC3. “At a time where the public is getting increasingly concerned about the possible misuse of AI, we have to be transparent about the threats, but also look into the potential benefits from AI technology.”

Emerging Concerns

The new report describes some emerging law enforcement and cybersecurity concerns about AI and ML, including:

  • AI-supported hacking: Already, Russian-language cybercrime forums are advertising a rentable tool called XEvil 4, which uses neural networks to bypass CAPTCHA security checks. Another tool, Pwnagotchi 1.0.0, uses a neural network model to improve its Wi-Fi hacking performance. “When the system successfully de-authenticates Wi-Fi credentials, it gets rewarded and learns to autonomously improve its operation,” according to Trend Micro.
  • AI-assisted password guessing: For credential stuffing, Trend Micro says it found a GitHub repository earlier this year with an AI-based tool…



Read More: The Dark Side of AI: Previewing Criminal Uses
