Spies Also Wanted to Use ChatGPT. Microsoft Created One for Them

  • U.S. intelligence agencies have wanted to use AI to help them analyze sensitive information for years.

  • There is a risk, though: AI has a habit of making stuff up and getting things wrong.

ChatGPT, like other chatbots on the market, has a problem: It doesn’t know how to keep secrets. That’s a concern for many companies, but it’s an even bigger issue for government intelligence agencies.

The U.S. intelligence community wanted to take advantage of generative AI while avoiding those risks. Now, it has what it was looking for: a sort of “ChatGPT for spies.”

Microsoft is behind this new AI for spies. The company developed a generative AI model that’s disconnected from the Internet, which allows it to analyze highly confidential information.

William Chappell, Microsoft’s chief technology officer for strategic missions and technology, told Bloomberg that this is the first time the company has used a large language model (LLM) in this way. Most LLMs rely on Internet-connected cloud services to learn and infer patterns from the data they analyze.

To develop the chatbot, Microsoft relied on OpenAI’s GPT-4. It also used a “private cloud” that’s isolated from the Internet, known as an “air-gapped” cloud. The CIA had already created a chatbot a few months earlier to analyze unclassified information, but that tool wasn’t suited to working with more sensitive data.
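
For a rough sense of what “air-gapped” isolation means in practice, here’s a minimal sketch in Python. It’s purely illustrative, not a description of Microsoft’s setup: it simulates the same no-Internet policy inside a single process by making every attempt to open a network socket fail.

```python
# Illustrative sketch only: a real air-gapped cloud enforces isolation at the
# network and hardware level. This merely simulates the policy inside one
# Python process by making every attempt to open a network socket raise an error.
import socket

def _blocked_socket(*args, **kwargs):
    raise RuntimeError("network access is disabled in this environment")

socket.socket = _blocked_socket  # any code that tries to connect now fails

try:
    import urllib.request
    urllib.request.urlopen("https://example.com")
except RuntimeError as err:
    print(f"Blocked as expected: {err}")
```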

Sheetal Patel, who is on the team in charge of this initiative at the CIA, explained last month that “there’s a race to get generative AI onto intelligence data.” She added that the first country to successfully use this technology will win. “And I want it to be us,” Patel said.

Microsoft worked on this project for 18 months and relied on an AI supercomputer in Iowa to develop it. The isolated environment is accessible solely to the U.S. government, and only about 10,000 people have access to the system, Chappell said, according to the outlet.

The GPT-4-based model is also “static,” which means that it can read files but not learn from them or gather information from the Internet. This feature allows its developers to guarantee that the model is “clean” and will not leak sensitive information to the outside world.
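
To illustrate what such an inference-only deployment looks like, here’s a minimal sketch using the open-source Hugging Face Transformers library. This is an assumption for illustration only: the article doesn’t say what tooling Microsoft uses, and `./local-model` is a hypothetical path. The weights are frozen, so nothing the model reads can alter it, and offline mode blocks any attempt to reach the network.

```python
# Minimal sketch of a "static", inference-only model: it can read input but
# never updates its weights or touches the network. Assumes the model files
# were copied to ./local-model ahead of time (hypothetical path).
import os
os.environ["HF_HUB_OFFLINE"] = "1"  # set before importing transformers: no downloads

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./local-model", local_files_only=True)
model = AutoModelForCausalLM.from_pretrained("./local-model", local_files_only=True)
model.eval()  # inference mode: disables training-time behavior such as dropout

prompt = "Summarize the key points of the attached report."
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():  # no gradients, so nothing the model reads can change it
    output = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The broader point is that a model only “learns” during training. Once deployed inference-only, the documents it processes are never absorbed into its weights.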

The service started operating this week and will now be tested and evaluated by U.S. intelligence officials. That doesn’t mean the risks are gone: AI models make mistakes and invent things all the time. It’ll be interesting to see whether this chatbot for spies delivers the results intelligence agencies are looking for.

Image | Universal Pictures

Related | How to Use ChatGPT to Create Excel Formulas
