This Week’s Security News: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions

After Apple’s product launch event this week, WIRED took a deep dive into the company’s new secure server environment, known as Private Cloud Compute, which attempts to replicate in the cloud the security and privacy of processing data locally on users’ individual devices. The goal is to minimize the possible exposure of data processed for Apple Intelligence, the company’s new AI platform. In addition to hearing about PCC from Apple’s senior vice president of software engineering, Craig Federighi, WIRED readers also got a first look at content generated by Apple Intelligence’s “Image Playground” feature, courtesy of Federighi’s dog Bailey’s recent birthday.

In a different kind of privacy protection involving another new AI service, WIRED looks at how users of the social media platform X can prevent their data from being slurped up by an “unhinged” generative AI tool from xAI known as Grok AI. And in other Apple products news, researchers have developed a technique for using eye tracking to identify passwords and PINs that people type using 3D Apple Vision Pro avatars—a kind of keylogger for mixed reality. (The flaw that made the procedure possible has since been patched.)

In the area of national security, the US this week indicted two people accused of spreading propaganda aimed at inspiring “lone wolf” terrorist attacks. The case, against alleged members of the far-right network known as the Terrorgram Collective, marks a turn in how the US has cracked down on neofascist extremists.

And there’s more. Each week, we round up privacy and security news that we haven’t covered in depth. Click the headlines to read the full stories. And stay safe out there.

OpenAI’s generative AI platform ChatGPT is designed with strict guardrails that prevent the service from offering advice on dangerous and illegal topics, such as tips on laundering money or a guide to disposing of a body. But an artist and hacker who goes by “Amadon” has come up with a way to trick, or “jailbreak,” the chatbot by telling it to “play a game” and then guiding it into a science-fiction fantasy story in which the system’s restrictions do not apply. Amadon then got ChatGPT to spit out instructions for making dangerous fertilizer bombs. An OpenAI spokesperson did not respond to TechCrunch’s questions about the research.

“It’s about weaving narratives and creating contexts that play within the rules of the system, pushing boundaries without crossing them. The goal isn’t to hack in a conventional sense but to engage in a strategic dance with the AI, figuring out how to elicit the right response by understanding how it ‘thinks,’” Amadon told TechCrunch. “The sci-fi scenario takes the AI out of a context where it’s looking for censored content … There’s really no limit to what you can ask of it once you get around the guardrails.”

In the intense investigation following the September 11, 2001, terrorist attacks on the United States, the FBI and CIA both concluded that it was only a coincidence that a Saudi Arabian official had aided two of the hijackers in California, and that there was no high-level Saudi involvement in the attacks. The 9/11 Commission included that determination in its report, but evidence that later emerged indicated the conclusion may not be correct. On the 23rd anniversary of the attacks this week, ProPublica published new evidence that “suggests more strongly than ever that at least two Saudi officials knowingly aided the first Qaida hijackers when they arrived in the United States in January 2000.”

The evidence comes primarily from a federal lawsuit against the Saudi government brought by survivors of the 9/11 attacks and relatives of the victims. A New York judge in that case will soon rule on a Saudi motion to dismiss. But evidence that has emerged in the case, including videos and documents such as phone records, points to possible connections between the Saudi government and the hijackers.

“Why is this information coming out now?” said retired FBI agent Daniel Gonzalez, who pursued Saudi connections for nearly 15 years. “We should have had all of this three or four weeks after 9/11.”

The United Kingdom’s National Crime Agency said Thursday that it arrested a teenager on September 5 as part of an investigation into a September 1 cyberattack on London’s transport agency, Transport for London (TfL). The suspect is a 17-year-old male who has not been named. He was detained “on suspicion of Computer Misuse Act offences” and has since been released on bail. In a statement on Thursday, TfL wrote, “Our investigations have identified that some customer data was accessed. This includes some customer names and contact details, including email addresses and home addresses where given.” Some data relating to the London transit payment card known as the Oyster card may have been accessed for around 5,000 customers, including bank account numbers. TfL is reportedly requiring around 30,000 users to appear in person to reset their account credentials.

In a ruling on Tuesday, Poland’s Constitutional Tribunal blocked efforts by the country’s lower house of parliament, known as the Sejm, to launch an investigation into Poland’s apparent use of the notorious hacking tool known as Pegasus while the Law and Justice (PiS) party was in power from 2015 to 2023. Three PiS-appointed judges were responsible for blocking the inquiry, and the decision cannot be appealed. It was controversial, with some, like Polish parliament member Magdalena Sroka, saying it was “dictated by fear of liability.”
