In today's technologically driven world, distinguishing between a real person and an AI-generated stand-in, whether an AI proxy such as Nokia's ProxyTwin or a deepfake, has become increasingly difficult. With AI now capable of mimicking human behaviors, speech patterns, and even facial expressions, the line between reality and artificiality is blurrier than ever.
The real question is: how do we know if we're interacting with a real expert or merely an AI construct? And more importantly, should we be concerned?
The Rise of AI Proxies: Enter ProxyTwin
Nokia's ProxyTwin is a groundbreaking innovation designed to allow a "digital twin" to represent someone during real-time meetings, presentations, or even public speeches. These proxies are so advanced that they can simulate the individual's communication style and decision-making abilities, all while the real person might be on the other side of the world, enjoying their coffee in peace.
Now, let's tie this concept to an example you may remember. During the COVID-19 pandemic, Dr. Nick Coatsworth, Australia's former Deputy Chief Medical Officer, played a key role in advocating for mandatory vaccinations. He presented at workplaces (including my own), urging the public to accept vaccine mandates as a necessary public health measure. At the time, this was all part of the global effort to curb the spread of the virus.
But here's where it gets interesting: new revelations suggest that some of these presentations might have been delivered not by Coatsworth himself, but potentially by a ProxyTwin or similar AI proxy. If that's true, were those key messages shaped by AI systems? Did the public receive information from an artificial source, rather than from the human expert we believed we were listening to?
See my Discussion on the Backflips and Use of a ProxyTwin Here
Deepfakes, AI Proxies, and Who's Really Talking to You
Enter deepfakes: AI-generated video or audio that makes it appear as if someone is saying or doing something they never actually did. With deepfakes already being used in malicious ways (from fake celebrity videos to manipulated political speeches), it's not far-fetched to think that they could also be used in more "official" settings.
The idea of using AI proxies or deepfakes in public health messaging, political speeches, or other high-stakes scenarios is unsettling. Imagine thinking you're getting advice from a trusted doctor, only to find out later that it was an AI-driven version of them, pushing a carefully crafted narrative. You might not be able to tell the difference, and that's exactly the problem.
So, how do we begin to identify whether we're hearing from a real expert or just an AI stand-in?
Critical Thinking: The Key to Sorting Fact from Fiction
This is where critical thinking comes into play. In an age where AI can replicate human speech and even emotions, it's more important than ever to question the information we're receiving. Some red flags that may suggest AI involvement include:
- Messages that seem overly scripted or too perfect.
- Speakers who handle complex questions with suspiciously simple or vague answers.
- Lack of personal anecdotes, emotion, or spontaneous reactions, which are typically present in human communication.
It's essential to cross-check information, seek second opinions, and verify the facts. If something feels too rehearsed or too smooth, it might be worth questioning whether you're talking to a real person or a well-trained AI proxy.
The Dr. Nick Coatsworth Example
Returning to the example of Dr. Nick Coatsworth, what if the presentations he gave about vaccine mandates were shaped by AI or ProxyTwin technology? The possibility that such high-stakes public health messaging may have been influenced by automated systems raises significant ethical concerns. If Coatsworth's voice was used in a proxy format, how much of the information came from him, and how much was pre-scripted by external actors or even algorithms?
This potential blending of AI-driven messaging and real human expertise could leave the public misinformed or, worse, manipulated into making decisions without fully understanding the nuances of the information they're receiving. If AI systems like ProxyTwin or deepfakes were involved, who is ultimately accountable for the messaging?
Enter Digital IDs and Misinformation Laws!
But don't worry, there's a "perfect" solution on the horizon (#sarcasm). To save us all from the terror of being misled by AI proxies or deepfakes, we can simply turn to digital IDs and sweeping misinformation/disinformation laws! Yes, the government will graciously issue us shiny new digital IDs, which will allow us to verify that every interaction we have online is with a real human. Nothing to worry about, right?
Of course, while we're at it, let's introduce robust misinformation laws that will dictate exactly what counts as "truth" and ensure we never stray from the path of approved information. That way, we won't need to worry about those pesky ProxyTwin or deepfake presentations, because the government-approved AI will tell us exactly what to believe!
See my submission to Australia’s Proposed Misinformation Bill Here
What could possibly go wrong with that? After all, who wouldn't trust the idea of a centralized body regulating the truth in an age of AI proxies and misinformation? It's not like such systems could ever be used to silence dissenting voices or manipulate public opinion. No, not at all.
The rise of technologies like ProxyTwin, deepfakes, and AI proxies presents fascinating opportunities but also significant risks. As these technologies become more integrated into everyday life, we must remain vigilant in how we consume and interpret information. While digital IDs and misinformation laws may be proposed solutions, they come with their own set of ethical dilemmas and concerns about privacy, freedom, and who gets to define the “truth.”
In the end, critical thinking, skepticism, and a commitment to independent verification remain our best defenses. Whether it's Dr. Nick Coatsworth or an AI proxy, we need to stay alert and ensure that the information shaping our lives and decisions comes from legitimate, accountable sources, not just a well-crafted AI algorithm.
References
- ProxyTwin Fact Sheet – Nokia
- Public statements by Dr. Nick Coatsworth during the COVID-19 pandemic
- Discussions on deepfakes in public health and political messaging
- Post-pandemic interviews and commentary by Dr. Nick Coatsworth on vaccination and public health policies