Artificial Intelligence and Digital Trust | DigiCert Insights

Artificial intelligence
and Digital Trust

The latest insights into the intersection of AI
and cybersecurity

GenAI: generating new threats. And opportunity.

Artificial Intelligence is already affecting many aspects of our lives—and has been for decades. For better or worse, that’s going to continue. But as AI becomes more powerful and more deeply woven into the structure of our daily reality, it is critical for organizations to realistically assess its full potential as both tool and threat.

Generative AI | DigiCert Insights

At a glance: AI vs AI

  • AI enables both good and bad actors to work faster at scale

  • The prevalence of machine learning in business makes it an appealing tool and target

  • The hype surrounding AI has the potential to obscure the risks

  • The scope of emerging threats is enormous and varied

  • New AI-driven security approaches will be required to combat AI-generated threats.  

AI vs. AI | DigiCert Insights

1952

Arthur Samuel develops the first computer program to learn on its own—a checkers game.

97%

Of business owners believe ChatGPT will benefit their business.   

Source: Forbes Advisor, 2024

75%

Of consumers are concerned about businesses using AI.  

Source: Forbes Advisor, 2024

3/4

Organizations surveyed by CSO online have witnessed an increase in cyberattacks over 12 months, with most attributing the rise to bad actors using generative AI.

Source: CSO Online, 2023

46%

Of organizations believe that generative AI makes them more vulnerable to attack.  

Source: CSO Online, 2023

$407 B (USD)

The predicted market size of AI in 2027, up from $86.9 billion in 2022.

Buzzword bingo

Part of the problem of predicting the real implications of generative AI technology is the massive, buzzy cloud of hype that surrounds it. Even the term itself has become something of a cliché. Want to fill an auditorium at a technology event? Put AI in the title of your presentation. Want to draw attention to a machine learning feature in your software? Market it as “AI.” This has the unfortunate effect of obscuring the reality of the technology—sensationalizing benefits and dangers while simultaneously anesthetizing many to the topic as a whole.

This is compounded by the fact that many—especially the less technical—don’t really understand what, exactly, AI is. 

AI Chip | DigiCert Insights

Artificial intelligence – Machines that think

In simple terms, artificial intelligence is exactly what it sounds like: the use of computer systems to simulate human intelligence processes. 

Examples: language processing, speech recognition, expert systems, and machine vision. 

Machine Learning | DigiCert Insights

Machine learning – Machines that think by themselves

Computer systems governed by algorithms that enable them to learn and adapt automatically after they have been trained on a data set. 

Examples: Content recommendation algorithms, predictive analysis, image recognition

Deep Learning | DigiCert Insights

Deep learning – Machines that think like we do

A technique of machine learning that uses layers of algorithms and computing units to simulate a neural network like the human brain. 

Examples: Large Language Models, Translation, Facial recognition

Intelligent attacks

Content authenticity

Identity manipulation

Phishing with dynamite

Prompt injection

Machine Hallucinations

Attack sophistication

Custom malware

Poisoned data

Privacy leaks

Content authenticity

Generative AI has the ability to create highly realistic copies of original content. Not only does this present potential intellectual property risks for organizations using AI for content generation, but it also allows bad actors to steal and realistically copy all sorts of data to either pass off as an original creation or to facilitate other attacks.

Identity manipulation

Generative AI can create ultra-realistic imagery and video in seconds, and even alter live video as it is generated. This can erode confidence in a variety of vital systems—from facial recognition software to video evidence in the legal system to political misinformation—and undermine trust in virtually all forms of visual identity. 

Phishing with dynamite

Attackers can use generative AI tools to realistically simulate faces, voices, and written tone, as well as emulate corporate or brand identity which can then be leveraged for highly effective and difficult to detect phishing attacks

Prompt injection

Because many organizations are using off-the-shelf generative AI models, they are potentially exposing information used to train or give prompts to their instance to injection attacks refined by attackers to target popular models. Without stringent safeguards in place and frequent updates, an exploit for the base model could expose any organization using that model.  

Machine Hallucinations

While AI can generally produce convincing speech or text at speed, isn’t always accurate. This is particularly problematic for organizations relying on AI to generate informational or support content for users, as well as for organizations using machine learning for threat detection, where an anomalous result could be especially costly. 

Attack sophistication

Because AI is able to write functional code with superhuman speed, it could potentially be used to scale attacks with unprecedented speed and complexity. In addition, AI could be used to detect vulnerabilities in a compromised code base and could expand the scope of attackers by lowering the barrier of entry.  

Custom malware

While popular LLMs have some safeguards against users creating malicious code, sophisticated attackers can find exploits and loopholes. Stolen or copied models can also be stripped of such safeguards, allowing bad actors to rapidly generate nearly undetectable, highly customizable exploits. 

Poisoned data

Attacks don’t necessarily need to exploit the AI itself. Instead, they could target the data used to train a machine learning model in order to false output. This could then be further leveraged to create exploits within the model itself—such as falsifying a DNA sequence in a criminal database—or simply to produce results that could damage the targeted organization.  

Privacy leaks

AI that is trained with or handles sensitive data could potentially expose that data, whether through a bug, as has happened with several of the major commercial models, or through a targeted attack. 

From AI itself

We asked ChatGPT to lay out the top threats posed by generative AI. Here was its response: 

Generative AI, while offering incredible potential for innovation and creativity, also presents unique challenges and threats in the realm of cybersecurity. Here are some key points to consider:

  1. Creation of Sophisticated Fake Content
  2. Automated Cyberattacks
  3. Evasion of Traditional Security Measures
  4. Weaponization of AI
  5. Privacy Concerns
  6. Manipulation of Digital Evidence
  7. Emergence of AI-Powered Cyber Threat Actors
ChatGPT Reply | DigiCert Insights

Intelligent security

The features that make AI a useful tool for bad actors can—and must—be used to harden cybersecurity measures. Not only will this allow organizations to develop more effective and agile cybersecurity technologies, but better address human vulnerabilities as well.

Artificial Intelligence and Digital Trust 

Faster and more accurate detection

AI is capable of pattern recognition that a human mind could miss. By creating a more granular and comprehensive baseline of system and human behaviors, machine learning enables organizations to identify even the most subtle anomalies.

Artificial Intelligence and Digital Trust

Rapid assessment and adaptation

By analyzing external information, like threats detected elsewhere, and adapting security measures faster than a human could be capable, AI can enable organizations to build highly resilient and self-refining security policies in a fraction of the time.

Artificial Intelligence and Digital Trust

Reduced human error

Humans tend to be the weakest link in any cybersecurity program. By automating certain tasks and performing them with greater accuracy and speed, AI can reduce human error while freeing up more resources for human-critical tasks.

Artificial Intelligence and Digital Trust 

Education and efficiency

Organizations can use AI tools to conduct more realistic simulations and training, help teams learn advanced cybersecurity techniques and technologies more quickly, empower experts to work more efficiently, enhance innovation, and accelerate production of new cybersecurity tools.

Artificial Intelligence and Digital Trust

Network security

The instantaneous pattern recognition of AI can be leveraged to automatically react to threats by rerouting traffic from vulnerable servers, scan myriad devices faster and more frequently, isolate attacks before they can spread, and minimize exposure of sensitive data. 

Artificial Intelligence and Digital Trust

Threat response

Because AI can act instantly across a variety of systems and connections, while simultaneously processing a vast amount of data, it enables organizations to mitigate even extremely sophisticated attacks, potentially long before the threat would have been detected by conventional means.

Artificial Intelligence and Digital Trust

Automated management

From certificate expiration to patch management, AI can shoulder the burden of tedious tasks and help organizations stay on top of day-to-day security hygiene. 

Artificial Intelligence and Digital Trust

Scale and speed

AI can roll out and update policies and solutions on a global scale at a speed orders of magnitude greater than human teams could manage. 

Artificial Intelligence and Digital Trust

Phishing, identity, and IP protection

By teaching AI to recognize AI-generated content, malicious, misleading or fraudulent content can quickly be tagged and scrubbed before it has a chance to potentially fool a user. Not only will this allow for robust anti-phishing programs, but it can also prevent other forms of identity spoofing and provide protections for original content and IP.  

Related resources

WEBINAR

Preparing for a post-quantum world

Post Quantum Computing Insights
Blog

Identifying crypto-assets for PQC readiness

Preparing for the Quantum World with Crypto-Agility
Report

2022 Gartner PQC report