AI is getting better and better at writing convincing material, and that's leading its creators to wonder whether they should release the technology in the first place. Elon Musk's OpenAI has developed an algorithm that can generate plausible-looking fake news stories on any topic using just a handful of words as a starting point. It was originally designed as a generalized language AI that could answer questions, summarize stories and translate text, but researchers soon realized that it could be used for far more sinister purposes, like pumping out disinformation in large volumes. As a result, the team only plans to make a "simplified version" of its AI available to the public, according to MIT Technology Review.
The technology thankfully has some rough edges at the moment. It frequently writes stories that are either plagiarized or only cohesive on the surface, and it only occasionally hits the jackpot. However, OpenAI's Jack Clark warned that it might take just "one or two years" before there's a system capable of reliably producing fake news that needs a thorough fact check to disprove.
And that's the core problem. While OpenAI is focused on ethical implementations and won't knowingly enable fake news, it's just one organization. There's a larger concern that an unscrupulous (or unwitting) company or a hostile government might develop a powerful AI that disseminates falsehoods on a large scale. Social networks have enjoyed some success in fighting fake news, but they might struggle if there's a flood of machine-generated misinformation.
Source: MIT Technology Review