The National Institute of Standards and Technology (NIST) is calling attention to the privacy and security challenges posed by the increased deployment of artificial intelligence (AI) systems in recent years.
“These security and privacy challenges include adversarial manipulation of training data, adversarial exploitation of model vulnerabilities that adversely affect the performance of artificial intelligence systems, and even malicious manipulation, modification, or mere interaction with models to steal sensitive information from the people involved. data, on the model itself, or proprietary enterprise data,” NIST said.
As AI systems are rapidly integrated into online services, in part due to the emergence of generative AI systems such as OpenAI ChatGPT and Google Bard, the models that power these technologies face many threats at all stages of machine learning operations.
These include corrupted training materials, security flaws in software components, data model poisoning, supply chain weaknesses, and privacy breaches due to instant injection attacks.
“In most cases, software developers need more people to use their products so that the products can get better through exposure,” said NIST computer scientist Apostol Vassilev. “But there’s no guarantee that the exposure will be good. When prompted with carefully crafted language, chatbots can spew undesirable or toxic information.”
These attacks can have significant impacts on availability, integrity, and privacy and are broadly classified into the following categories:
- Evasion attacks designed to generate adversarial outputs after deploying the model
- Poisoning attack, which targets the training phase of the algorithm by introducing corrupted data
- Privacy attacks designed to collect sensitive information about a system or its training data by asking questions that bypass existing guardrails
- Abuse attacks designed to disrupt legitimate information sources (such as web pages containing incorrect information) in order to repurpose the system for its intended purpose
NIST says such attacks can be carried out by threat actors with complete knowledge (white box), minimal knowledge (black box), or partial knowledge of some aspects of the AI system (grey box).
The agency further noted the lack of strong mitigation measures to address these risks, urging the broader tech community to “come up with better defenses.”
More than a month ago, the UK, US and international partners in 16 other countries released guidelines for the development of secure artificial intelligence (AI) systems.
“Despite significant advances in artificial intelligence and machine learning, these technologies are vulnerable to attacks that can lead to serious failures and dire consequences,” Vasilev said. “The theoretical problem of ensuring the security of artificial intelligence algorithms has not yet been solved .If anyone disagrees, they are selling snake oil.”
3 Comments
Pingback: NIST warns of security and privacy risks posed by rapid deployment of artificial intelligence systems – Tech Empire Solutions
Pingback: NIST warns of security and privacy risks posed by rapid deployment of artificial intelligence systems – Paxton Willson
Pingback: NIST warns of security and privacy risks posed by rapid deployment of artificial intelligence systems – Mary Ashley