What is GPT-4 and how is it different from ChatGPT?

GPT-4, the latest release from OpenAI, the company behind ChatGPT and the Dall-E AI artist, is its most powerful and impressive AI model to date. The system can pass the bar exam, solve logic puzzles, and even suggest a recipe from a photo of the contents of your fridge – but its developers warn that it can also spread misinformation, embed dangerous ideologies, and even trick people into doing tasks on its behalf. Here’s what you need to know about our latest AI overlord.

What is GPT-4?

At its core, GPT-4 is a text-generation engine. But it’s a very good one, and being very good at generating text turns out to be practically similar to being very good at understanding and reasoning about the world.

So if you ask GPT-4 a question from a US bar exam, it will write an essay that demonstrates legal knowledge; if you give it a medicinal molecule and ask for variations, it will appear to apply biochemical expertise; and if you ask it to tell you a joke about a fish, it will seem to have a sense of humour – or at least a good memory for bad jokes (“What do you get when you cross a fish and an elephant? Swimming trunks!”).

Is it the same as ChatGPT?

Not quite. If ChatGPT is the car, then GPT-4 is the engine: a powerful general-purpose technology that can be shaped to a number of different uses. You may have experienced it already, as it has been powering Microsoft’s Bing Chat – the one that went a bit mad and threatened to destroy people – for the past five weeks.

But GPT-4 can power more than just chatbots. Duolingo has built a version of it into its language-learning app that can explain where learners went wrong, rather than simply telling them the correct answer; Stripe is using the tool to monitor its chatroom for scammers; and assistive-technology company Be My Eyes is using a new feature, image input, to build a tool that can describe the world to a blind person and answer follow-up questions about it.
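Apps like these talk to GPT-4 through the same kind of chat-completion request. As a rough illustration, here is a minimal Python sketch of how a tutoring app might frame such a request; the prompt wording and helper name are our own invention for illustration, not Duolingo’s actual code, and the payload mirrors the shape of OpenAI’s chat-completions API.

```python
# Sketch: assembling a chat-completion request targeting GPT-4.
# The prompts and function name are illustrative, not from any real product.

def build_feedback_request(learner_sentence: str, correct_sentence: str) -> dict:
    """Build the JSON payload an app might send to a chat-completions
    endpoint to get an *explanation* of a learner's mistake, rather
    than just the right answer."""
    return {
        "model": "gpt-4",
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are a language tutor. Explain why the learner's "
                    "sentence is wrong; do not just give the correct answer."
                ),
            },
            {
                "role": "user",
                "content": (
                    f"Learner wrote: {learner_sentence!r}\n"
                    f"Correct form: {correct_sentence!r}"
                ),
            },
        ],
    }

payload = build_feedback_request("Je suis froid", "J'ai froid")
print(payload["model"])          # the engine the request targets
print(len(payload["messages"]))  # system instruction plus one user turn
```

The point of the system message is the customization the article describes: the same engine behaves as a tutor, a fraud monitor, or a visual assistant depending on the instructions wrapped around it.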

What makes GPT-4 better than the old version?

GPT-4 outperforms its older siblings on a number of technical challenges. It is better at answering maths questions, less likely to be tricked into giving false answers, can score fairly well on standardised tests – though not those in English literature, where it sits comfortably in the bottom half of the rankings – and so on.

It also has a sense of ethics built into the system more firmly than the old version did: ChatGPT took its original engine, GPT-3.5, and added filters on top to try to prevent it from giving answers to malicious or harmful questions. Now those filters are built directly into GPT-4, meaning the system will politely decline to perform tasks such as ranking races by attractiveness, telling sexist jokes, or providing guidelines for synthesising sarin.


So GPT-4 can’t do any damage?

OpenAI has certainly tried to make it harmless. The company has published a lengthy paper of examples of harms that GPT-3 could cause and that GPT-4 has defences against. It even gave an early version of the system to third-party researchers at the Alignment Research Center, who tried to see whether they could get GPT-4 to play the part of an evil AI from the movies.

It failed at most of those tasks: it was unable to describe how it would replicate itself, acquire more computing resources, or carry out a phishing attack. But the researchers did manage to simulate it using Taskrabbit to convince a human worker to pass an “are you human” test for it, with the AI system even working out that it should lie to the worker and say it was a blind person who couldn’t see the images. (It is unclear whether a real Taskrabbit worker was involved in the experiment.)

But some worry that the better you teach an AI system the rules, the better you teach it how to break them. Dubbed the “Waluigi effect”, this seems to result from the fact that while understanding the full details of what constitutes ethical action is hard and complex, the answer to the question “should I be ethical?” is a much simpler yes-or-no question. Trick the system into deciding not to be ethical, and it will happily do whatever is asked of it.
