The irony in the lead story of last Friday’s New Zealand Herald was plain: one rich-lister was wrongly pilloried because another rich man tried to hide his wrongdoing.
Businessman and philanthropist Wayne Wright was the victim of a chatbot that proved artificial intelligence is not always very intelligent.
Grok, the chatbot owned by Elon Musk’s X (formerly Twitter), named Wright as the man found with 11,775 objectionable files, including extreme child sexual abuse involving bestiality, pre-pubescent children and toddlers. The defendant was sentenced to two years and five months’ imprisonment. The court permanently suppressed the man’s name, his family’s name, and the name of his business. Grok had been asked to find his name and did so by scouring speculation on social media.
Wayne Wright was named, but he was not that man.
Understandably, Wright has now called on the offender to apply to the court to have suppression lifted. Customs is also considering an appeal against the permanent suppression. The Herald has stated categorically that Wright is not the offender but, of course, is prevented from naming the guilty man.
The episode is yet another example of the damage that may be wrought by the use of imperfect AI by unaccountable platforms, and of name suppression tarnishing the public’s perception of the courts.
