Wiz finds serious information leak at DeepSeek

Researchers at cybersecurity company Wiz have revealed a serious security vulnerability in the systems of Chinese company DeepSeek, which they have dubbed DeepLeak. Wiz found that an entire database belonging to the Chinese company, containing users’ chats, secret keys, and sensitive internal information, was exposed to anyone on the Internet.

According to the report by Wiz, the Chinese company, the developer of advanced artificial intelligence systems that overnight became serious competition for OpenAI, left sensitive information completely exposed. Anyone with an Internet connection could access the company’s sensitive information with no need for identification or security checks.

Wiz’s Israeli researchers discovered the security breach surprisingly easily. “As DeepSeek made waves in the AI space, the Wiz Research team set out to assess its external security posture and identify any potential vulnerabilities. Within minutes, we found a publicly accessible ClickHouse database linked to DeepSeek, completely open and unauthenticated, exposing sensitive data,” the company said. According to Wiz, the database allowed full control over database operations, including the ability to access internal data, and the exposure included over a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information. The research team “immediately and responsibly disclosed the issue to DeepSeek, which promptly secured the exposure,” Wiz added.
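For readers less familiar with ClickHouse, the exposure described above amounts to a database HTTP endpoint that answered SQL queries with no credentials at all. The sketch below is an illustration, not Wiz’s actual tooling: the host name is a placeholder, and it only assumes ClickHouse’s default HTTP port 8123 (port 9000 is the native-protocol port and would require clickhouse-client instead). It shows how trivially such an open interface can be detected.

```python
# Minimal sketch of probing a ClickHouse HTTP interface for unauthenticated
# access. Host name is hypothetical; 8123 is ClickHouse's default HTTP port.
import requests

HOST = "db.example.com"  # placeholder endpoint, for illustration only


def is_open_clickhouse(host: str, port: int = 8123) -> bool:
    """Return True if the server answers SQL over HTTP without credentials."""
    url = f"http://{host}:{port}/"
    try:
        # The ClickHouse HTTP interface accepts SQL via the ?query= parameter.
        resp = requests.get(url, params={"query": "SHOW TABLES"}, timeout=5)
    except requests.RequestException:
        return False
    if resp.ok:
        # A successful response means anyone on the Internet can list and
        # read the tables, which is the kind of exposure Wiz describes.
        print(f"{host}:{port} listed tables without credentials:\n{resp.text}")
        return True
    return False


if __name__ == "__main__":
    is_open_clickhouse(HOST)
```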

“While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks, like accidental external exposure of databases. These risks, which are fundamental to security, should remain a top priority for security teams,” Wiz researcher Gal Nagli said.

“As organizations rush to adopt AI tools and services from a growing number of startups and providers, it’s essential to remember that by doing so, we’re entrusting these companies with sensitive data. The rapid pace of adoption often leads to overlooking security, but protecting customer data must remain the top priority. It’s crucial that security teams work closely with AI engineers to ensure visibility into the architecture, tooling, and models being used, so we can safeguard data and prevent exposure,” Nagli concluded.

Published by Globes, Israel business news – en.globes.co.il – on January 30, 2025.

© Copyright of Globes Publisher Itonut (1983) Ltd., 2025.


