Disclosure: This commentary was written by me. It is not the product of a generative artificial intelligence programme. Any intelligence you may find here is from my own, admittedly limited, resources.
There is, however, the worrying prospect that it could have been produced by ChatGPT, a programme with alarmingly human-like text-generating capabilities. In fact, some commentators have used it to produce parts of their columns to show how good it is at creating content virtually indistinguishable from their own words of wisdom.
Generative AI is good, but it isn’t that good. Last month the U.S. tech website CNET admitted that it had used the technology to create at least 75 stories, many of which were attributed to “CNET Money Staff”. Retrospective fact-checking found the stories riddled with errors that human reporters would have been unlikely to make.
That revelation has not halted media use of AI in its tracks. Sports Illustrated last week told the Wall Street Journal it was publishing AI-generated stories on men’s fitness tips, drawing on 17 years of archived stories in its own library. The caveat is that all of the stories are reviewed and fact-checked by flesh-and-blood journalists.
This sort of AI may not be perfect, although it is good enough to create alarm among university staff over student essay assignments. However, it is about to get better.