First, it was an image of the Pope.
The head of the Catholic Church striding around Vatican City in a $5,000 Balenciaga puffer jacket.
Shortly followed by a photo of former US President Donald Trump physically resisting arrest by half a dozen police officers.
Two examples of images that went viral around the world of events that never happened.
Sanjana Hattotuwa is one of New Zealand’s leading experts on disinformation online and is part of the Disinformation Project research group.
He and his colleagues are increasingly concerned about the lack of checks and balances around this ever-evolving technology, which gives millions of people around the world free and open access to tools that can create hyper-realistic images from a few lines of text.
“This family of artificial intelligence is evolving at a pace that really hasn’t been recorded previously in computing.”
“Why are these products and platforms and tools being developed at the pace they are without any kind of ethical guardrail within the companies that are developing them?
“Some of the leading companies – including Microsoft – are divesting the teams within (their own) companies that are looking at the ethical implications of the tools they are developing at pace.”
He said the public, as a result, are the ones at risk.
“We become the guinea pigs, the lab rats as it were, globally and domestically, for a suite of tools within the family of GAI that have the potential for harm that really boggles the mind.”
It’s like giving a medical student a scalpel on their first day, he said, and leading them into open heart surgery to learn their craft.
“It defies any explanation as to why these tools are put out there when the creators and the makers, and the CEOs themselves say they don’t quite have a handle on the misuse, abuse [of their tools] and even how their own tools work internally.”
Back to Donald Trump.
While the photos were quickly debunked, for some people it will inevitably have been too late to ‘un-see’ them.
With the former President facing criminal charges, these pictures matched what many people expected to see, and so tricked some into believing the scene they depicted was real.
And that is where the power of disinformation, combined with these latest advances in GAI, is a potent mix.
“Anybody today can create a video, a cartoon, a meme, manipulate a photo, create synthetic media, clone an individual’s face or clone his or her or their voice, to create content that is believable enough to motivate people to do something, to believe in something, to subscribe to something, and then act upon those beliefs. That’s a huge power,” said Hattotuwa.
But the purpose of disinformation isn’t only to pass off false information as true; it is just as much to cast doubt on what is.
“When nothing is quite certain or true, anything can be projected as authentic and true. And that is a net benefit for disinformation. Because it creates volatility, what we call truth decay, or information disorders, which essentially help conspiratorialism, misinformation and in particular disinformation, to take aim at democracy writ-large.”
He said the real-world risks associated with this technology ploughing ahead unchecked and unregulated are varied and significantly concerning.
There’s potential for artificially made images and videos to undermine our judiciary and even lead to national security concerns.
“It’s a hydra-headed beast. It’s not just the judiciary that would be impacted; it’s also policing, it’s the chain of custody, it’s the admissibility of evidence in courts, it’s what we hold and believe to be true.
“It may create stock-market implications, supermarket runs, it may create the belief that a politician has said something that they did not.
“These are real-world implications that we have already seen the embryonic constructions of in disinformation narratives; they’re going to impact everybody, no matter whether you use it or not.”
The United Kingdom is currently an outlier on the international stage in having published guidelines around artificial intelligence.
In them, it champions the huge potential of the technology across a variety of sectors while also acknowledging risks to physical and mental health, privacy infringements, and potential human rights breaches.
Hattotuwa is among those calling on New Zealand policymakers to act early and swiftly to put regulatory checks and balances in place before the technology takes another leap forward.
“(Disinformation Project) wrote a ten-point analysis of what GAI may mean for this country,” said Hattotuwa, “which embraces national security threats and risks as well. This is serious. This is a growing problem, at pace, it’s not going to go away, and we need to start talking about it.”