
Cybersecurity researchers have found that third-party plug-ins available for OpenAI ChatGPT could serve as a new attack surface for threat actors seeking to gain unauthorized access to sensitive data.
According to new research released by Salt Labs, security vulnerabilities found directly within ChatGPT and its plug-in ecosystem could allow attackers to install malicious plug-ins without user consent and hijack accounts on third-party sites such as GitHub.
ChatGPT plug-ins, as the name suggests, are tools designed to run on top of the large language model (LLM), allowing it to access up-to-date information, run computations, or interact with third-party services.
OpenAI has since launched GPTs, customized versions of ChatGPT tailored for specific use cases that reduce reliance on third-party plug-ins. As of March 19, 2024, ChatGPT users are no longer able to install new plug-ins or create new conversations with existing plug-ins.
One of the flaws discovered by Salt Labs involves exploiting the OAuth workflow to trick users into installing arbitrary plug-ins, taking advantage of the fact that ChatGPT does not verify that the user actually initiated the plug-in installation.
This essentially allows threat actors to intercept and steal any data shared by the victim, which may contain proprietary information.
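To illustrate the class of bug described here, the following is a minimal Python sketch, not OpenAI's actual code; the endpoint, store, and helper names are hypothetical. It contrasts an installation callback that accepts an approval code unconditionally with one that ties the code to a state value created when the user started the flow.

```python
# Illustrative sketch only: a generic OAuth callback handler, not OpenAI's
# implementation. It shows the class of flaw described above: accepting an
# approval code without checking that the current user initiated the flow.
import secrets

# Hypothetical in-memory store mapping user -> pending OAuth "state" value.
PENDING_STATES = {}

def start_plugin_install(user_id: str) -> str:
    """User-initiated step: mint a random state token and bind it to the user."""
    state = secrets.token_urlsafe(32)
    PENDING_STATES[user_id] = state
    return f"https://auth.example-plugin.test/oauth/authorize?state={state}"

def vulnerable_callback(user_id: str, code: str) -> None:
    """Flawed pattern: the code is accepted with no proof the user started the
    flow, so an attacker-supplied link can silently attach a malicious plug-in."""
    install_plugin_for(user_id, code)

def safer_callback(user_id: str, code: str, state: str) -> None:
    """Safer pattern: reject the code unless it matches a state this user created."""
    if PENDING_STATES.pop(user_id, None) != state:
        raise PermissionError("OAuth state mismatch: installation not user-initiated")
    install_plugin_for(user_id, code)

def install_plugin_for(user_id: str, code: str) -> None:
    print(f"(demo) exchanging code {code!r} and enabling plug-in for {user_id}")

if __name__ == "__main__":
    url = start_plugin_install("alice")
    state = url.split("state=")[1]
    safer_callback("alice", code="auth-code-from-provider", state=state)
```

Binding the approval to a per-user, single-use state value is the standard OAuth defense against flows that the user never actually started.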

The cybersecurity firm also discovered issues with PluginLab that could be exploited by threat actors to conduct zero-click account takeover attacks, allowing them to take control of an organization’s accounts on third-party sites such as GitHub and access its source code repositories.
“‘auth.pluginlab[.]ai/oauth/authorized’ does not authenticate the request, which means an attacker can insert another memberId (that is, the victim’s) and obtain a code representing the victim,” explained security researcher Aviad Carmel. “With this code, it is possible to use ChatGPT and access the victim’s GitHub.”
The victim’s memberId can be obtained by querying the endpoint “auth.pluginlab[.]ai/members/requestMagicEmailCode.” There is no evidence that any user data has been compromised as a result of the flaw.
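The underlying pattern is sketched below in heavily simplified Python; the function names and parameters are hypothetical, not PluginLab's actual implementation. The zero-click issue arises when an endpoint mints an authorization code for whatever memberId the caller supplies, rather than for the authenticated caller.

```python
# Illustrative sketch only (hypothetical names, not PluginLab's real code): a
# zero-click takeover becomes possible when an endpoint issues an auth code for
# any caller-supplied memberId instead of the authenticated caller's identity.
import secrets

def issue_code_vulnerable(request_params: dict) -> str:
    # Flawed: trusts a client-controlled memberId, so an attacker can request a
    # code "representing the victim" without any interaction from the victim.
    member_id = request_params["memberId"]
    return mint_code_for(member_id)

def issue_code_safer(session_member_id: str) -> str:
    # Safer: derive the member from the server-side authenticated session only.
    return mint_code_for(session_member_id)

def mint_code_for(member_id: str) -> str:
    code = secrets.token_urlsafe(16)
    print(f"(demo) issued code {code} for member {member_id}")
    return code

if __name__ == "__main__":
    issue_code_vulnerable({"memberId": "victim-1234"})  # attacker-chosen ID
    issue_code_safer("attacker-5678")                   # only their own identity
```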
An OAuth redirect manipulation bug was also discovered in several plug-ins, including Kesem AI, which could allow an attacker to steal account credentials associated with the plug-in itself by sending a specially crafted link to the victim.
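As a rough sketch of the general defense against this kind of redirect manipulation, assuming generic OAuth parameters rather than Kesem AI's actual configuration, the authorization link's redirect_uri can be validated against a strict allow-list before it is honored.

```python
# Illustrative sketch only: generic OAuth redirect_uri validation, not any
# specific plug-in's code. A crafted link swaps the redirect target so the
# authorization code or credential is delivered to an attacker-controlled host.
from urllib.parse import urlparse, parse_qs

ALLOWED_REDIRECTS = {"https://plugin.example.test/oauth/callback"}  # hypothetical

def is_redirect_allowed(authorize_url: str) -> bool:
    """Reject authorization requests whose redirect_uri is not on the allow-list."""
    query = parse_qs(urlparse(authorize_url).query)
    redirect_uri = query.get("redirect_uri", [""])[0]
    return redirect_uri in ALLOWED_REDIRECTS

if __name__ == "__main__":
    legit = ("https://auth.example.test/authorize?client_id=plugin"
             "&redirect_uri=https://plugin.example.test/oauth/callback")
    crafted = ("https://auth.example.test/authorize?client_id=plugin"
               "&redirect_uri=https://attacker.test/steal")
    print(is_redirect_allowed(legit))    # True
    print(is_redirect_allowed(crafted))  # False: the crafted link is refused
```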
A few weeks ago, Imperva detailed two cross-site scripting (XSS) vulnerabilities in ChatGPT that could be chained to seize control of any account.
In December 2023, security researcher Johann Rehberger demonstrated how malicious actors could create custom GPTs, obtain user credentials through phishing and transfer stolen data to external servers.
New remote keylogging attack targets artificial intelligence assistants
The findings also follow new research released this week on LLM side-channel attacks, which use token length as a stealth method to extract encrypted responses from AI assistants over the network.
A team of academics from Ben-Gurion University and the Offensive Artificial Intelligence Research Laboratory said: “An LLM generates and sends responses as a series of tokens (similar to words), with each token transmitted from the server to the user as it is generated.”
“While this process is encrypted, the sequential token transmission exposes a new side channel: the token-length side channel. Despite encryption, the size of the packets can reveal the length of the tokens, potentially allowing attackers on the network to infer sensitive and confidential information shared in private AI assistant conversations.”
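The following toy Python sketch illustrates the premise; the fixed overhead value and the one-token-per-record framing are assumptions for demonstration, not measurements from the paper. When every token travels in its own encrypted record, subtracting a constant overhead from the observed record sizes recovers the token lengths.

```python
# Illustrative sketch only, based on the premise described above: when each
# token is sent in its own (encrypted) record, the record size leaks the
# token's length. The overhead value and framing here are assumptions.
TLS_OVERHEAD = 29  # hypothetical fixed per-record overhead in bytes

def observed_record_sizes(tokens):
    """Simulate what a passive network observer sees: one record per token."""
    return [len(tok.encode()) + TLS_OVERHEAD for tok in tokens]

def infer_token_lengths(record_sizes):
    """Recover token lengths from record sizes by stripping constant overhead."""
    return [size - TLS_OVERHEAD for size in record_sizes]

if __name__ == "__main__":
    response_tokens = ["The", " patient", " has", " a", " rare", " condition"]
    sizes = observed_record_sizes(response_tokens)
    print(infer_token_lengths(sizes))  # [3, 8, 4, 2, 5, 10]
```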

This is accomplished through a token inference attack, which aims to decipher responses in encrypted traffic by training an LLM capable of translating sequences of token lengths into their natural-language sentence counterparts (i.e., plaintext).
In other words, the core idea is to intercept live chat responses with the LLM provider, use network packet headers to infer the length of each token, extract and parse the text fields, and use a custom LLM to infer the response.

Two key prerequisites for launching an attack are an AI chat client running in streaming mode and an adversary capable of capturing network traffic between the client and the AI chatbot.
To blunt the effectiveness of the side-channel attack, companies developing AI assistants are advised to apply random padding to mask the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses all at once instead of in a token-by-token fashion.
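As a rough illustration of the first two recommendations, here is a small Python sketch; the padding range and group size are arbitrary choices for demonstration, not vendor guidance.

```python
# Illustrative sketch of two mitigations mentioned above (random padding and
# batched delivery); parameter choices are arbitrary, not vendor guidance.
import secrets

def pad_token(token: str, max_extra: int = 16) -> bytes:
    """Append a random amount of padding so record size no longer tracks length."""
    padding = b"\x00" * secrets.randbelow(max_extra + 1)
    return token.encode() + padding

def batch_tokens(tokens, group_size: int = 8):
    """Send tokens in groups instead of one record per token."""
    for i in range(0, len(tokens), group_size):
        yield "".join(tokens[i:i + group_size])

if __name__ == "__main__":
    tokens = ["The", " diagnosis", " is", " confidential", "."]
    print([len(pad_token(t)) for t in tokens])      # sizes decorrelated from tokens
    print(list(batch_tokens(tokens, group_size=2))) # coarser-grained transmission
```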
“Balancing security with usability and performance presents a complex challenge that requires careful consideration,” the researchers concluded.