Digital Bargaining Bill should be consigned to the flames

The Fair Digital News Bargaining Bill should be placed on a figurative Viking funeral ship, pushed out into the water, and set on fire.

It was reported back to the House last week by a select committee that was unable to agree on amendments which, in the main, were bolted on to take account of generative AI. The impact of artificial intelligence had been entirely absent from the original bill.

The inability of the Economic Development, Science and Innovation Select Committee to agree on amendments probably owes more to the genesis of the proposed legislation – it was introduced by the Labour-led coalition government shortly before the last election – than to the substance of the changes. ACT, for example, is opposed to the bill as a whole, arguing “the risks may outweigh the benefits”. Labour hints that the current Government’s members on the committee failed to give it the necessary support.

The way in which the bill was reported back to the House means it may have been fatally wounded, but it is not dead yet. It was reported back without amendment and with the admission that the committee could not agree. However, a version with the amendments that had been considered was appended, and the committee said that, if Parliament decided to proceed, it should consider them.

There are several reasons why the House should simply let the poor thing die in peace.

EU framework for AI laws: First steps to taming a beast

The European Union has agreed to pass the world’s first laws governing the use of artificial intelligence. It is one step on a long and winding road.

It is unsurprising that this initiative came out of the EU. It has been the only governing body to consistently put its people ahead of the wishes of the companies that control the search and social media platforms that intrude into virtually every nation on the planet.

The historic agreement came after 36 hours of solid negotiation among the EU member states and it sets out the parameters on which the laws will be based.

The move is hugely significant, but it should not be seen as a full solution to curbing negative impacts while allowing the positive aspects of AI to flourish.

It aims to ensure that AI systems used inside the EU are safe and respect fundamental rights. In other words, it is based on a harm principle. That means it will target high-impact AI systems that pose potential risks and strictly limit the use of potential AI tools for state surveillance.

Generative AI: Be afraid, be very afraid

We humans have always had a bit of a penchant for futile exercises.

The ancient Greeks had a death-defying king rolling a large boulder up a hill for eternity. Much later, the Japanese invented an infuriating game called Whac-A-Mole.

Now the media are trying to stay a step ahead of generative artificial intelligence.

Media companies around the world are grappling with editorial guidelines to deal with a digital phenomenon that can be both a tool to enhance their productivity, and an insidious weapon that can be used against them.

Some see it as an existential threat that should be banned outright but, really, artificial intelligence is like firearms and opioids – useful in the right hands but extraordinarily dangerous in the wrong ones. And, like drugs, its legitimate use needs to be carefully prescribed.

Text generators must not become killer robots

Disclosure: This commentary was written by me. It is not the product of a generative artificial intelligence programme. Any intelligence you may find here is from my own, admittedly limited, resources.

There is, however, the worrying prospect that it could have been produced by ChatGPT, a programme with alarmingly human-like text generating capabilities. In fact, some commentators have used it to produce parts of their columns to show how good it is at creating content virtually indistinguishable from their own words of wisdom.

Generative AI is good, but it isn’t that good. Last month the U.S. tech website CNET admitted that it had used it to create at least 75 stories, many of which were attributed to “CNET Money Staff”. Retrospective fact-checking found the stories riddled with errors that human reporters were unlikely to make.

That revelation has not halted media use of AI in its tracks. Sports Illustrated last week told the Wall Street Journal it was publishing AI-generated stories on men’s fitness tips, drawing on 17 years of archived stories in its own library. The caveat is that all of the stories are reviewed and fact-checked by flesh-and-blood journalists.

This sort of AI may not be perfect, although it is good enough to create alarm among university staff over student essay assignments. However, it is about to get better.