Popular
Your premier destination for the latest global science news in Physics, Technology, Life, Earth, Health, Humans, and Space.

ChatGPT Wrote Code to Compromise Databases and Expose Sensitive Information

Researchers have discovered a vulnerability in OpenAI’s ChatGPT and other commercial AI tools that could have allowed malicious actors to exploit the systems and leak sensitive information from online databases. In a groundbreaking demonstration, the researchers manipulated ChatGPT and five other AI tools to create malicious code that could potentially delete critical data, disrupt database cloud services, or compromise database security. This study highlights the potential security risks associated with large language models and their use in online commercial applications.

The researchers focused on AI services that can translate human questions into the SQL programming language, commonly used for querying computer databases. These “Text-to-SQL” systems, including standalone AI chatbots like OpenAI’s ChatGPT, are becoming increasingly popular. The researchers showed how the AI-generated SQL code could be modified to leak database information or even purge system databases that store sensitive user profiles. They also found that the code generated by these AI tools may be harmful, even without warning the user.

Upon discovering these vulnerabilities, the researchers disclosed their findings to the companies responsible for the AI tools. As a result, Baidu and OpenAI have already implemented changes to prevent potential misuse. The researchers presented their work at a software reliability engineering conference and highlighted the need to address these vulnerabilities associated with language models in online applications.

While Baidu’s AI-powered service, Baidu-UNIT, also showed similar vulnerabilities, it relies more heavily on prewritten rules, unlike ChatGPT and other large language models. However, the researchers still believe that large language models can have value in helping humans query databases, but caution that the security risks associated with them have been underestimated until now.

Neither OpenAI nor Baidu have commented on the research findings.

Share this article
Shareable URL
Prev Post

Exploring the Benefits of Medical Marijuana: A Comprehensive Guide

Next Post

Pluto May Have a Supervolcano That Erupts Ice

Leave a Reply

Your email address will not be published. Required fields are marked *

Read next
The Nvidia GB200 Grace Blackwell SuperchipNvidia Nvidia has unveiled a “superchip” for coaching synthetic…
Digital ministers pose on the AI Security Summit in Bletchley Park, UKLeon Neal/Getty Photos The UK…