Big Tech Is Realizing Something: Their AI Can’t Keep Messing Up on This Scale

  • Errors in artificial intelligence models are causing serious damage to the reputations of the companies behind them.

  • Apple has launched Apple Intelligence with limited and carefully controlled functions to prevent issues.

  • Other companies like Microsoft appear to be following a similar cautious approach.


Javier Pastor

Senior Writer

Computer scientist turned tech journalist. I've written about almost everything related to technology, but I specialize in hardware, operating systems and cryptocurrencies. I like writing about tech so much that I do it both for Xataka and Incognitosis, my personal blog.

Initially, it didn’t matter if ChatGPT made mistakes. It didn’t matter if it altered the ending of Game of Thrones upon request or if it struggled with basic mathematics. After all, it was only a few months old. It didn’t matter.

But now, it does matter.

That’s what Big Tech companies have started to recognize as they find themselves in a race where being first seemed more important than being accurate. They all rushed to launch, and that’s when the real problems began. Microsoft’s Bing chatbot started acting downright weird, and then Google’s Bard and Gemini made significant errors (not just once, but twice).

In AI, it seemed more important to arrive first than to arrive in your best version.

And, of course, this led to a change in user attitudes. What was once considered humorous is now less so, especially because we’re beginning to rely on these generative AI models for more serious purposes.

Even researchers are conducting studies using ChatGPT. In addition, these assistants have become an essential tool for programmers, who use them regularly. The challenge, however, is that according to a recent study, 52% of ChatGPT’s programming answers contain inaccurate information.

Recent events have made clear to many what some knew all along: the ChatGPTs of the world often don’t fully understand what they’re saying. They blurt out responses that sound plausible, in well-constructed sentences. Their tone is confident, natural, and reasonable, giving users the impression that their answers are definitive and correct.

In many cases, however, it’s not—which is becoming a significant problem for the reputation of the companies behind them. Google seems to be the most affected by this issue. For twenty-five years, we’ve trusted its search engine to show us exactly what we need, albeit with a lot of advertising.


We trusted Google, but we don’t trust Gemini as much. And not just Gemini: we don’t trust ChatGPT or Copilot, either. That’s a good call. It’s important to check their answers because it’s not uncommon for them to be partially or totally incorrect, and those errors cause people real problems.

Faced with this situation, companies are starting to realize how important it is to ensure that generative AI models make fewer mistakes or behave in a way that makes users trust them a little more. There are several approaches to this.

The most striking approach is also the most recent. Apple, which presented Apple Intelligence (the term “AI” seems to be forbidden at the company) on Monday, left us wanting more. The AI features coming to its operating systems are just more of the same. In fact, they’re more of the same in a deliberately limited form, because many of them ship with the brakes firmly on.

The Apple Intelligence image generator almost looks like a toy. It’s precisely what Apple was looking for.

The best example is Apple’s AI image generator, which it calls Image Playground. The tool can be used to create emojis and images with finishes that are anything but photorealistic.

Image Playground won’t create oil portraits of Tim Cook or imagined fights between Elon Musk and Mark Zuckerberg. It won’t let you dress the Pope in Balenciaga, and it certainly won’t let you create explicit deepfakes of Taylor Swift.

The limitations of Image Playground may be disappointing because the tool almost looks like a toy, but they help prevent problems for Apple. You won’t be able to do much, but what you do will probably turn out pretty well. This not only prevents misuse, but it also helps Apple avoid disasters like the one recently experienced by Stable Diffusion 3. That image generator, one of the most reputable in the world, has been producing grotesquely deformed human bodies. It’s safe to say this isn’t likely to happen with Apple’s model.

Apple's AI is more of the same but in a more limited way. However, this helps prevent problems for the company.

Similarly, Microsoft seems to have reconsidered its approach. The recent presentation of Recall, its photographic memory feature for Windows, grabbed attention, but the feature drew criticism over its privacy and cybersecurity implications. Microsoft’s response? It decided to delay Recall’s rollout.

Recall was initially set to ship with the new Copilot+ PCs launching on June 18. Instead, the feature will first be made available to Windows Insiders before reaching the broader public at a later date. The company appears to be taking the time to address the criticisms and complaints before proceeding with the launch. It wasn’t worth the risk.

For its part, Google seems to be realizing that rushing its decisions may not be the best approach, especially in a rapidly evolving industry that could have a significant impact on its business. Of the three, Google is in the most delicate position. Apple already has an AI platform for its ecosystem, including the iPhone, while Microsoft is quickly building its own into Windows.

Google can’t risk users turning to ChatGPT over its search engine, or the possibility that we might end up searching more with OpenAI’s chatbot than with Google Search. But it also can’t risk its AI making absurd recommendations (like suggesting glue to keep the cheese on your pizza). Sundar Pichai’s position isn’t an enviable one, but his company needs to act swiftly to stay ahead. Striking the right balance between speed and quality is a major challenge for these companies.

This might be the start of a new mini-era in artificial intelligence, one where chatbots are more reliable.

Image | Irridesce using Midjourney

Related | We Thought ChatGPT Was Great for Programming. A New Study Finds That Half of Its Answers Are Wrong
