Reporters, fact-checkers, engineers, scientists and more have all concerned themselves with artificial intelligence (AI) for some time now, but Adam King, a machine learning/AI engineer, decided to lighten things up a bit with his recent experiment called Talk to Transformer. The new web app lets anyone enter a text prompt, which the AI software then completes automatically.
While King created the site, the underlying technology comes from OpenAI’s algorithm named GPT-2, which was released earlier this year. According to King, OpenAI chose to hold back the full language model but released a smaller version, which he found was still interesting and entertaining.
“It quickly occurred to me that my appreciation of it didn’t require any understanding of how it worked, and so someone should make a website to enable the general public to play with it,” King said.
Modifying the open source implementation of OpenAI’s model (called 345M for the 345 million parameters it uses), King put the website together in only about two weeks. Talk to Transformer launched in early May.
King said he faced two challenges: making the website computationally efficient to accept many queries at the same time and enabling the app to generate the text incrementally so users weren’t sitting around waiting for output.
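King doesn’t describe his implementation, but incremental output of this kind is typically achieved by streaming tokens to the user as the model produces them, rather than blocking until the full completion is ready. A minimal, hypothetical sketch (the `fake_model` function below merely stands in for a real language model):

```python
def fake_model(prompt, n_tokens=5):
    """Stand-in for a real language model: yields one 'token' at a time."""
    for i in range(n_tokens):
        yield f"token{i}"

def stream_completion(prompt):
    """Collect each token as soon as it is generated.

    In a real web app, each piece would be flushed to the client
    immediately so the user sees text appear incrementally.
    """
    pieces = []
    for token in fake_model(prompt):
        pieces.append(token)
    return " ".join(pieces)

print(stream_completion("Talk to Transformer"))
```

The same pattern also helps with the first challenge King mentions: serving partial results keeps connections short-lived, so the server can interleave many queries.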
“OpenAI themselves seem to like the site so I’m happy about that,” King said. “A lot of people are (also) reaching out thanking me for making the site. They seem to enjoy the model as much as I did.”
The young engineer shared that the website is likely to remain a “very simple toy.” But as more intelligent language models come out, he will upgrade the site. He reminds E&P that OpenAI’s full GPT-2 model has 4.5 times more parameters than what we see on Talk to Transformer and is much more coherent.
In fact, King thought it was important to note that “GPT-2 was trained simply to predict what comes next on a web page, but in order to do so it incidentally learned how to translate between languages,” because many pages online contain text in more than one language. This is a massive example of “unsupervised learning,” King explained.
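The training objective King describes — predict the next token given the preceding text, with no human-written labels — can be illustrated with a toy bigram model. This is a drastic simplification of GPT-2, for intuition only:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """'Unsupervised' training: count which word follows which,
    using nothing but raw text as its own supervision."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Predict the most likely next word, as a (crude) language model would."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" more often than "mat"
```

GPT-2 replaces these raw counts with a large neural network, which is why it can pick up side skills like translation purely from the statistics of web text.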
This language-model technology is already well beyond what we are seeing in King’s creation, potentially posing a threat to journalism.
“GPT-2 is a bit of a problem for journalism in that it can mass produce false stories that are hardly detectable as being computer-generated,” King said. “Finding what’s true and what’s not will become more difficult.”