
OpenAI’s new text generator writes sad poems and corrects lousy grammar

The notorious text bot has had a major upgrade


OpenAI has quietly unveiled the latest incarnation of its headline-grabbing text generator: GPT-3.

The research lab initially said its predecessor’s potential to spread disinformation made it too dangerous to share. The decision led terrified journalists to warn of impending robot apocalypses — generating a lot of helpful hype for GPT-2.

Now, OpenAI has unveiled its big brother. And it’s enormous. The language model has 175 billion parameters, more than 100 times the 1.5 billion in GPT-2, which was itself considered gigantic on its release last year.

[Read: Remember that scary AI text-generator that was too dangerous to release? It’s out now]

The research paper also dwarfs GPT-2’s, growing from 25 to 72 pages. We haven’t got through the whole thing yet, but a quick flick through has already turned up some striking stuff.

Bigger and better?

GPT-3 can perform an impressive range of natural language processing tasks — without needing to be fine-tuned for each specific job.

It’s now capable of translation, question-answering, reading comprehension tasks, writing poetry — and even basic math:

https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2020/05/Screenshot-2020-05-29-at-13.20.18.png
The model can perform three-digit addition and subtraction. Credit: OpenAI

It’s also pretty good at correcting poor English grammar:

https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2020/05/Screenshot-2020-05-29-at-12.55.22.png
Nothing task-specific was provided to GPT-3 apart from a few examples as conditioning and basic framing. Credit: OpenAI
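To give a rough sense of what that “few examples as conditioning and basic framing” looks like in practice, here is a minimal Python sketch of how a few-shot grammar-correction prompt could be assembled. The `build_prompt` helper, the example sentences, and the exact wording are hypothetical illustrations of the prompt format described in the paper, not OpenAI’s actual prompts or tooling.

```python
# Hypothetical sketch of a few-shot prompt for grammar correction.
# The task framing, example pairs, and helper function are illustrative only.

def build_prompt(examples, query, instruction="Correct the grammar of the sentence."):
    """Assemble a few-shot prompt: a short task description, a handful of
    solved examples as conditioning, and a new input for the model to complete."""
    lines = [instruction, ""]
    for bad, good in examples:
        lines.append(f"Poor English input: {bad}")
        lines.append(f"Good English output: {good}")
        lines.append("")
    lines.append(f"Poor English input: {query}")
    lines.append("Good English output:")  # the model is expected to continue from here
    return "\n".join(lines)


examples = [
    ("I eated the apple yesterday.", "I ate the apple yesterday."),
    ("She don't like going to school.", "She doesn't like going to school."),
]

prompt = build_prompt(examples, "He go to the store every days.")
print(prompt)
```

The point the paper makes is that the same base model handles this and the other tasks above purely from prompts along these lines, with no task-specific fine-tuning.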

GPT-3 also seems to have improved upon the vaunted writing ability of its predecessor.

The research team tested its skills by asking human evaluators to distinguish its output from articles written by people.

The one they found most convincing was a thorough report on a historic split of the United Methodist Church:

https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2020/05/Screenshot-2020-05-29-at-11.48.53.png
The evaluators were most impressed by this article on a church split. Credit: OpenAI

However, my favorite example of its writing was the one that humans found the easiest to recognize as made by a machine:

https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2020/05/Screenshot-2020-05-29-at-12.00.57.png
Joaquin’s shape-shifting claims didn’t convince the critics. Credit: OpenAI

That report may not have convinced the reviewers, but it certainly showed some flair and a capacity for the surreal. By comparison, here’s an example of a GPT-2-penned article that OpenAI previously published:

https://cdn0.tnwcdn.com/wp-content/blogs.dir/1/files/2020/05/Screenshot-2020-05-29-at-12.29.05.png
GPT-2 did a decent job reporting on the discovery of talking unicorns. Credit: OpenAI

GPT-3’s reporting skills led the researchers to issue another warning about its potential for misuse:

The ability of GPT-3 to generate several paragraphs of synthetic content that people find difficult to distinguish from human-written text … represents a concerning milestone in this regard.

However, the system is unlikely to take the jobs of two-bit hacks for now, thank God. Not because it lacks the skill — it’s just too damn expensive.

That’s because the system needs enormous computation power, as Elliot Turner, the CEO of AI communications firm Hyperia, pointed out in a tweet.

That should also limit its potential to be used for evil, as presumably the only people who could afford to run it are, er, nation-states and multinational corporations.

For now, we’ll have to wait and see what happens when the model’s released to the public.