Microsoft justifies AI’s “usefully wrong” responses

Microsoft CEO Satya Nadella speaks at the company’s Ignite Spotlight event on November 15, 2022 in Seoul.

SeongJoon Cho | Bloomberg | Getty Images

Thanks to recent advances in artificial intelligence, new tools like ChatGPT are wowing consumers with their ability to create persuasive texts based on people’s questions and requests.

Although these AI-powered tools have gotten much better at providing creative and sometimes humorous answers, their responses often contain inaccurate information.

For example, in February, when Microsoft introduced its Bing chat tool, built using GPT-4 technology from Microsoft-backed OpenAI, people noticed that the tool gave wrong answers during a demo involving financial earnings reports. Like other AI language tools, including similar software from Google, the Bing chat feature can occasionally present false facts that users may mistake for the truth, a phenomenon researchers call “hallucination.”

These factual issues haven’t slowed the AI race between the two tech giants.

On Tuesday, Google announced that it will bring AI-powered chat technology to Gmail and Google Docs to help users compose emails and documents. On Thursday, Microsoft said its popular business apps like Word and Excel would soon come bundled with a ChatGPT-like technology called Copilot.

But this time, Microsoft is pitching the technology as “usefully wrong.”

In an online presentation about the new Copilot capabilities, Microsoft executives brought up the software’s tendency to return inaccurate answers, but pitched this as something that could be useful. As long as people realize that Copilot’s answers might be sloppy with the facts, they can correct the inaccuracies and send their emails faster or finish their presentation slides.

For example, if a person wants to compose an email to wish a family member a happy birthday, Copilot can be helpful even if the birth date is incorrect. From Microsoft’s point of view, the mere fact that the tool generates text saves a person time and is therefore useful. People just have to be extra careful and make sure there are no mistakes in the text.

Researchers might disagree.

Indeed, some technologists, like Noah Giansiracusa and Gary Marcus, have raised concerns that people may be putting too much trust in modern AI, taking advice from tools like ChatGPT to heart when they ask questions about health, finance, and other important topics.

“ChatGPT’s toxicity guardrails are easily circumvented by those who wish to use it for evil, and as we saw earlier this week, all new search engines continue to hallucinate,” the two wrote in a recent Time op-ed. “But once we’re past the opening-day jitters, it’s going to really come down to whether one of the big players can build artificial intelligence that we can really trust.”

It is unclear how reliable Copilot will be in practice.

Jaime Teevan, Microsoft’s chief scientist and technical fellow, said that Microsoft has mitigations in place for when Copilot “does something wrong, is biased, or is misused.” In addition, Microsoft will initially test the software with just 20 corporate customers to learn how it works in the real world, she explained.

“We will make mistakes, but if we make them, we will address them quickly,” Teevan said.

The business stakes are too high for Microsoft to ignore the enthusiasm for generative AI technologies like ChatGPT. The challenge for the company will be to integrate this technology in a way that doesn’t inspire public suspicion of the software or lead to major public relations disasters.

“I’ve studied AI for decades and I feel this great sense of responsibility with this powerful new tool,” said Teevan. “We have a responsibility to get it into people’s hands, and in the right way.”

Watch: Plenty of room for growth at Microsoft and Google, says Oppenheimer analyst Tim Horan
