OpenAI says new GPT-4 model is more creative and less likely to invent facts

Artificial intelligence research lab OpenAI has released GPT-4, the latest version of the groundbreaking AI system powering ChatGPT, which it says is more creative, less likely to make up facts, and less biased than its predecessor.

Calling it “our most capable and aligned model yet”, OpenAI co-founder Sam Altman said the new system is a “multimodal” model, meaning it can take both images and text as inputs, allowing users to ask questions about pictures. The new version can also handle massive text inputs, remembering and responding to more than 20,000 words at once, which allows an entire novella to be used as a prompt.
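For developers, multimodality means a single prompt can mix images and text. As a rough sketch, not OpenAI’s published example, a request through the company’s Python client might look like the following, with the model name and image URL standing in as placeholder assumptions:

```python
# Hedged sketch: assumes the OpenAI Python client (openai >= 1.0) and a
# vision-capable GPT-4 variant; model name and image URL are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-turbo",  # assumption: any vision-enabled GPT-4 model
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is unusual about this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/photo.jpg"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

The same content-array shape extends to several images interleaved with text in one prompt.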


The new model is available from today for users of ChatGPT Plus, the paid-for version of the ChatGPT chatbot, which provided some of the training data for the latest version.

OpenAI has also worked with commercial partners to offer GPT-4-based services. A new subscription tier for the language learning app Duolingo, Duolingo Max, now offers English-speaking users AI-powered conversations in French or Spanish, and can use GPT-4 to explain the mistakes language learners have made. At the other end of the spectrum, payment processing company Stripe uses GPT-4 to answer support questions from corporate users and to flag potential scammers on the company’s support forums.
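Stripe has not published its implementation, but the triage pattern it describes can be sketched in a few lines. Everything below, the prompt, the labels and the model name, is an illustrative assumption rather than Stripe’s actual code:

```python
# Hedged sketch of using GPT-4 to triage support-forum posts.
# Prompt, labels and model name are illustrative assumptions only.
from openai import OpenAI

client = OpenAI()

def triage_post(post: str) -> str:
    """Label a forum post as LEGITIMATE or POTENTIAL_SCAM."""
    response = client.chat.completions.create(
        model="gpt-4",  # assumption: any GPT-4 chat model
        messages=[
            {
                "role": "system",
                "content": (
                    "You review posts on a payments support forum. "
                    "Reply with exactly one label: LEGITIMATE or POTENTIAL_SCAM."
                ),
            },
            {"role": "user", "content": post},
        ],
        temperature=0,  # keep labels as deterministic as possible
    )
    return response.choices[0].message.content.strip()

print(triage_post("How do I issue a partial refund from the dashboard?"))
```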

“Artificial intelligence has always been a big part of our strategy,” says Duolingo’s principal product manager Edwin Bodge. “We used it for personalizing lessons and running Duolingo English tests. But there were gaps in a learner’s journey that we wanted to fill: conversational practice and contextual feedback on mistakes.” The company’s experiments with GPT-4 convinced it that the technology was capable of providing these features, with “95%” of the prototype created within a day.

During a demo of GPT-4 on Tuesday, Greg Brockman, president and co-founder of OpenAI, also gave users a sneak peek at the image-recognition capabilities of the system’s latest version, which is not yet publicly available and is being tested by only one company, Be My Eyes. The feature allows GPT-4 to analyze and respond to images submitted alongside prompts, answering questions or performing tasks based on those images. “GPT-4 is not just a language model, it’s also a vision model,” Brockman said. “It can flexibly accept input that arbitrarily intersperses images and text, much like a document.”

At one point in the demo, GPT-4 was asked to describe why a picture of a squirrel with a camera was funny. (Because “we don’t expect them to use a camera or act like a human.”) At another point, Brockman submitted a photo of a rudimentary hand-drawn sketch of a website to GPT-4, and the system created a working website based on the drawing.

OpenAI claims that GPT-4 fixes or improves on many of the criticisms users had of the previous version of its system. As a “large language model”, GPT is trained on huge amounts of data from the internet and tries to give responses to sentences and questions that are statistically similar to those that already exist in the real world. But that can mean making up information when it doesn’t know the exact answer – a problem known as “hallucination” – or providing disturbing or offensive responses when given the wrong prompts.
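That “statistically similar” behaviour is easiest to see in miniature. The toy word-level bigram sampler below bears no resemblance to GPT-4’s actual architecture, but it illustrates the underlying failure mode: the model continues a prompt with whatever tended to follow in its training text, producing fluent output whether or not the result is true:

```python
# Toy next-word predictor: a word-level bigram model. Purely illustrative;
# it shares only one trait with GPT-4: it emits statistically plausible
# continuations regardless of whether they are factually correct.
import random
from collections import defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
)

# Record which word follows which in the training text.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def continue_prompt(prompt: str, length: int = 8) -> str:
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:  # nothing ever followed this word
            break
        out.append(random.choice(candidates))  # sample a plausible next word
    return " ".join(out)

print(continue_prompt("the dog"))
# e.g. "the dog sat on the mat . the cat" -- fluent, not necessarily true
```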

Building on the conversations users have had with ChatGPT, OpenAI has managed to improve – but not eliminate – these weaknesses in GPT-4: the new model is “29% more likely” to respond sensitively to requests for content such as medical or self-harm advice, and is 82% less likely to respond to requests for objectionable content.

However, GPT-4 will still “hallucinate” facts, and OpenAI warns users: “Great care should be taken when using language model outputs, particularly in high-stakes contexts, with the exact protocol (such as human review, grounding with additional context, or avoiding high-stakes uses altogether) matching the needs of a specific use case.” But it performs “40% better” on tests measuring hallucination, says OpenAI.
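The mitigations OpenAI names, human review and grounding with additional context, amount to wrapping the model call in a gate. A minimal sketch of the human-review half, with every name and category invented for illustration:

```python
# Hedged sketch of a human-in-the-loop gate for high-stakes model output.
# Topic names, function names and the review queue are all illustrative
# assumptions, not an OpenAI-prescribed protocol.
HIGH_STAKES_TOPICS = {"medical", "legal", "financial"}

def deliver_answer(topic: str, model_output: str, review_queue: list) -> str | None:
    """Release a model answer directly, or hold it for human verification."""
    if topic in HIGH_STAKES_TOPICS:
        review_queue.append(model_output)  # a human checks before release
        return None  # nothing shown to the user yet
    return model_output

queue: list[str] = []
answer = deliver_answer("medical", "Take 400mg every four hours.", queue)
assert answer is None and len(queue) == 1  # held for review, not delivered
```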

The system is particularly good at avoiding clichés: older versions of GPT would happily insist that the claim “you can’t teach an old dog new tricks” is factually correct, but the newer GPT-4 will correctly tell a user who asks whether you can teach an old dog new tricks: “yes, you can.”
