OpenAI unleashes scarily believable Fake News AI

…And releases it into the wild on GitHub. What could possibly go wrong? OpenAI, a research organization that is supposed to come up with ways to protect mankind from rogue AIs (think Terminator foot on human skull, I do), actually created this mind-blowing AI back in February 2019. It’s called GPT-2, because apparently super-smart people are terrible at coming up with cool names for their AIs.

But at the time, OpenAI decided the potential for abuse was too great, so they kept the full model under wraps and released only smaller versions. Yesterday, someone there decided that since there has been no strong evidence of misuse so far, it’s safe to release the full model to the public.

That might not have been a great idea…

That’s like saying, “Hey, our nuclear weapons haven’t been abused by the handful of world superpowers that have them, so it’s safe to give ’em to everyone else.” Checks and balances, strict rules and regulations, fear of mutual destruction, as well as some very lucky breaks and even a few real-life superheroes who said “nyet!” to war (thank you, Soviet naval officer Vasili Arkhipov) are the reason humankind is even around today. And OpenAI’s argument that “someone’s gonna make something like this anyway” is frankly silly. Yes, someone probably will. But OpenAI, considering its mission, shouldn’t be the one.

Example of OpenAI’s GPT-2 Fake News

Anyway, get ready to have your mind blown when you read an example story that OpenAI’s GPT-2 created. First, a human starts the story: a couple of sentences about English-speaking unicorns being found in the Andes. Yes, really. That human-written prompt is all the model gets.

The Artificial Intelligence then takes over and writes the rest of the story, and it’s effing good! It makes up names of scientists supposedly working on this unicorn discovery and bundles in real-world information such as dates, places, and complex timelines. The AI’s story reads like something a human writer would come up with. If the story weren’t about unicorns, you’d believe the whole thing.

Here’s a short excerpt of GPT-2’s story, completely written by the AI:

The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science.

Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.

Dr. Jorge Pérez, an evolutionary biologist from the University of La Paz, and several companions, were exploring the Andes Mountains when they found a small valley, with no other animals or humans. Pérez noticed that the valley had what appeared to be a natural fountain, surrounded by two peaks of rock and silver snow.

-excerpt from fake news article generated by GPT-2. Source.
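
And here’s the scary part: now that the model is public, producing this stuff takes almost no effort. Below is a minimal sketch (mine, not OpenAI’s code) that feeds a unicorn-style opener to the released GPT-2 weights through the Hugging Face transformers library and lets the model write the rest. The prompt wording, the “gpt2” checkpoint size, and the sampling settings are my own assumptions for illustration; OpenAI’s demo used its largest model and its own sampling setup.

```python
# Minimal sketch (not OpenAI's own tooling): prompt the publicly released
# GPT-2 model via the Hugging Face "transformers" library and let it
# continue a human-written opener, much like the unicorn story above.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")  # smallest public checkpoint, for illustration
model = GPT2LMHeadModel.from_pretrained("gpt2")

# The human-written prompt; GPT-2 writes everything that follows it.
prompt = (
    "In a shocking finding, scientists discovered a herd of unicorns living in a "
    "remote, previously unexplored valley in the Andes Mountains. Even more "
    "surprising to the researchers was the fact that the unicorns spoke perfect English."
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_length=300,                        # total tokens: prompt plus continuation
    do_sample=True,                        # sample instead of greedy decoding, for varied stories
    top_k=40,                              # only consider the 40 most likely next tokens
    temperature=0.8,                       # soften the probability distribution a little
    pad_token_id=tokenizer.eos_token_id,   # silence the missing-pad-token warning
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Run it a few times and you’ll get a different, equally confident story on every run; that’s the sampling at work.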

Are we all doomed to Fake News hell?

So this is where we are at the butt end of 2019. Imagine what this kind of Fake News AI can do in 2, 5, or 10 years. Imagine what will happen in 20 or 30 years, when news and research can be gathered and authored faster and better by Artificial Intelligence and news outlets decide to employ those machines instead of human journalists.

At first, every bit of news will likely be curated and evaluated for errors or spin by humans. But after a few years of nearly flawless performance, those news outlets might just set this stuff on auto-pilot and let AI-generated articles go out to the public without prior review. I’m not saying news-writing AIs will then suddenly become sentient and evil. I’m saying a rogue employee or hacker may well reconfigure those systems to serve their own malicious ends. That kind of scenario is something we need to seriously consider. Ironically, guarding against exactly that kind of scenario is what OpenAI’s mission is supposed to be about.

