This article was produced with support from the Capitol Forum.
The Secret Service spent $50,000 on Microsoft's Azure OpenAI cloud service, according to internal Secret Service documents obtained by 404 Media.
The news shows that U.S. federal law enforcement is actively moving into the world of AI, with the Secret Service saying it won’t disclose the use case because it does not discuss methods used for its “operations.” It also comes after a recent White House policy change that will require federal agencies to, among other things, ensure they have proper safeguards when using AI that could impact Americans’ rights or safety. The Secret Service recently faced a wave of criticism after two assassination attempts against former President Donald Trump, with the director resigning in July.
“The USSS [U.S. Secret Service] has a requirement to procure r [sic] Microsoft Azure-Open AI cloud-based services,” a Secret Service memorandum dated September 2023 reads. 404 Media obtained the document and others through a Freedom of Information Act (FOIA) request with the Secret Service.
The office responsible for the $50,000 purchase was the Secret Service’s Chief Information Office, according to the document. Another document indicates that the work could extend through June of this year.
The documents do not elaborate on why the Secret Service needed such a tool. Microsoft’s website for the Azure OpenAI service says customers can “build your own copilot and generative AI applications.” Users can connect their own data and then use OpenAI models on that information, it adds. Potential use cases include chatbots that develop answers based on the customer’s own data; language translation; and predictive analytics, according to Microsoft’s website.
“Out of concern for operational security, the U.S. Secret Service does not discuss the means or methods used for our operations,” Alexi Worley, from the Office of Communication and Media Relations at the Secret Service, told 404 Media in a statement. “All technology used by the Secret Service must meet the agency's strict security requirements.” The agency did not answer 404 Media’s question on whether the tool is being used to generate material that may later be used in a criminal prosecution.
In March, the White House announced that the Office of Management and Budget (OMB) was issuing its first government-wide policy to mitigate the risks of AI. The policy requires agencies to have proper safeguards in place, release annual inventories of their AI use cases, and report metrics about any AI use cases that are withheld from that public inventory “because of their sensitivity.”
In May, Bloomberg reported that Microsoft created a GPT-4 generative AI model that is geared towards U.S. intelligence agencies.
The first federal agency customer for ChatGPT Enterprise was the U.S. Agency for International Development, FedScoop reported in August.