A Microsoft Build session on best security practices on Tuesday, May 20, 2025, showed internal communication about AI plans for Walmart.
Source: Microsoft
Microsoft AI security chief Neta Haiby showed a confidential Teams chat to a room full of people on Tuesday, revealing details from the company’s artificial intelligence plan for Walmart, according to materials viewed by CNBC.
Protesters interrupted the Microsoft Build session on best security practices and Haiby switched her screen share amid the ruckus, showing that Walmart, one of Microsoft’s most significant customers, was “ready to ROCK AND ROLL with [Microsoft’s] Entra Web and AI Gateway.”
The message, posted by Leigh Samons, a principal cloud solution architect at Microsoft, detailed the process for how Microsoft would go about integrating its technology into Walmart’s processes.
It also said that one of Walmart’s tools needed extra safeguards.
“MyAssistant is one they build that is overly powerful and needs guardrails,” the message said, referencing a tool Walmart built last summer that “leverages a unique build of Walmart proprietary data, technology and large language models in Azure OpenAI Service,” according to a January press release.
The tool helps store associates summarize long documents, create new marketing content and more, per the release.
The internal Teams message also cited a “distinguished” AI engineer at Walmart as saying, “Microsoft is WAY ahead of Google with AI Security. We are excited to go down this path with you.”
The Verge was first to report on the AI plans. CNBC has reached out to Microsoft and Walmart for comment.
The protest singled out Sarah Bird, Microsoft’s head of responsible AI, who was part of the Build panel with Haiby. Haiby herself was formerly a member of the Israel Defense Forces, according to a years-old Tumblr page viewed by CNBC.
Haiby did not immediately respond to a request for comment.
A demonstrator is removed from the audience as they interrupt a presentation by Microsoft Chairman and CEO Satya Nadella at the Microsoft Build 2025 conference in Seattle, Washington on May 19, 2025.
Jason Redmond | AFP | Getty Images
“Sarah Bird, you are whitewashing the crimes of Microsoft in Palestine,” Hossam Nasr, an organizer with the group No Azure for Apartheid, said, continuing, “How dare you talk about…” before the livestream audio was muted.
Nasr was one of the Microsoft employees terminated last year after planning a vigil for Palestinians killed in Gaza.
The protest and the reveal of Walmart’s AI plans followed another disruption earlier that day at Microsoft’s Build developer conference in Seattle, when an unnamed Palestinian tech worker interrupted a speech by Jay Parikh, Microsoft’s head of CoreAI.
“Jay, you are complicit in the genocide in Gaza,” the tech worker, who did not wish to share his name for fear of retaliation, said. “My people are suffering because of you. How dare you. How dare you talk about AI when my people are suffering. Cut ties with Israel.”
He then called to “free Palestine” and said, “No Azure for apartheid,” a nod to the group and its petition.
On Monday, Microsoft software engineer Joe Lopez interrupted CEO Satya Nadella’s keynote speech onstage, saying, “Satya, how about you show them how Microsoft is killing Palestinians? How about you show them how Israeli war crimes are powered by Azure?”
The recent disruptions are part of a mounting string of protests at Microsoft events over the Israeli military’s use of the company’s AI products.
Microsoft Chairman and Chief Executive Officer Satya Nadella speaks during the Microsoft Build 2025, conference in Seattle, Washington on May 19, 2025.
Jason Redmond | AFP | Getty Images
At Microsoft’s 50th anniversary event last month, two Microsoft software engineers publicly protested during executive presentations. Both employees were terminated soon after, according to documents viewed by CNBC.
At the April event, Ibtihal Aboussad, then a software engineer in the company’s AI division, interrupted Microsoft AI CEO Mustafa Suleyman’s speech.
“Mustafa, shame on you,” Aboussad said as she walked toward the stage at the event in Redmond, Washington. “You claim that you care for using AI for good, but Microsoft sells AI weapons to the Israeli military. Fifty thousand people have died, and Microsoft powers this genocide in our region.”
“You have blood on your hands,” she said before being swiftly escorted out. “All of Microsoft has blood on its hands.”
Although the Microsoft protests centered on the Israeli military’s use of its technology, AI companies in recent months have been walking back bans on broader military use of their products and entering into deals with defense industry giants and the Defense Department.
In November, Anthropic and defense contractor Palantir announced a partnership with Amazon Web Services to provide U.S. intelligence and defense agencies access to Anthropic’s Claude AI models. Palantir recently signed a new five-year deal worth up to $100 million to expand U.S. military access to its Maven AI warfare program.
OpenAI and Anduril announced a partnership allowing the defense tech company to deploy advanced AI systems for “national security missions.” And last month, Scale AI forged a deal with the Department of Defense for a multimillion-dollar flagship AI agent program.