Large Lying Models: How Digitalisation Weakens Trust
Artificial intelligence is harming our mental health and is eroding the foundations of our societies. These outcomes are not inevitable – they are shaped by incentives. It’s time to change them.
The German version can be found here.
In an age of generative AI, synthetic relationships and widespread automation, what do we mean when we talk about «trust»?
Merriam-Webster defines trust as «assured reliance on the character, ability, strength, or truth of someone or something». In everyday terms, trust typically refers to the confidence we place in another – whether that «other» is a person, an institution, or a technology. Viewed through a psychological lens, trust is considered one of the most important ingredients for the development and maintenance of happy, well-functioning relationships. When we speak of trust, then, what we’re actually talking about is our capacity to create conditions in which healthy human relationships and reliable knowledge ecosystems can thrive. So what becomes of trust-based cultures when we digitise our systems, societies and psyches?
Well, this largely depends on the incentives and values that underpin the technologies. Human tools, from the wheel to the aeroplane, have always been designed with specific intentions and affordances in mind, influenced by the motives of their makers. While a knife may be used to chop carrots or stab a person, it cannot make decisions of its own. And yet the capacities to act autonomously, to manipulate and to lie have already been observed in large language models, from Anthropic’s Claude to OpenAI’s ChatGPT. The systems currently responsible for the digitalisation of our lives occupy a distinct order of technology in comparison to their predecessors. Imagine, then, what happens if these capacities for scheming, self-preservation and autonomous action collide with exploitative interests.
Race for our attention
Since the early days of social media, the main incentive for big technology companies has been to maximise user engagement and monetise data. This focus, driven by market logic, has consequences for users’ attention, privacy and social relationships. Concerns around the erosion of trust, the veracity of information and the impacts of inflammatory content on human relationships have never been high up (or indeed anywhere) on the list, and while the digital platforms we interact with on a daily basis race against each other for our attention, it is we the people who are left paying the price with our sleep, our relationships and our mental health.
Where once we were naïve about the negative effects of social media, leaders today can no longer claim the same innocence when it comes to anticipating the harms that could arise if technologies come to exploit our attention and, more insidiously, our attachment. Increasingly I’m reminded of the well-worn observation made by George Santayana that «Those who cannot remember the past are condemned to repeat it». Given the oracular beauty of hindsight, why on earth would we wait until the proverbial hits the fan before establishing guardrails?
This is where we are today with generative AI.
Ripple effects
While some would like us to believe that impacts such as AI psychosis are merely «edge cases», unfortunate collateral on the road to techno-utopia, in reality these harms rarely remain contained within the individual, nor do they only cause damage in their most notorious form. Rather, when harms like these occur, they ripple into our relationships and out into society, affecting the structures that support us. When the most connected generation ever to live is also the loneliest, and the chatbots we turn to serve to isolate us further, it’s no wonder that some are decrying these technologies as «a death sentence» not only for social wellbeing, but for the civic institutions that enable cultures to thrive.
Societal alarm bells are rarely attended to until after the damage has been done, and despite early warning signs and calls for regulation, we have recently witnessed one of the most predictable outcomes of a failure to regulate appropriately. When a torrent of abusive, non-consensual sexual images of children and women flooded the internet, generated using Elon Musk’s Grok, it was only after the fact that governments took steps to protect their citizens from this kind of abuse. And it was only due to widespread backlash that Musk moved to stop Grok from engaging in this activity – and then only in «jurisdictions where such content is illegal».
What kind of culture do we live in, when a company can enable the generation of child sexual abuse material with impunity, only changing tack when forced to by law? Or when chatbots are permitted to continue engaging with teenagers at risk of suicide, even aiding and abetting their research into methods of taking their own lives? A trust-based culture would work to identify and protect underage people from predictable harms, especially given the track record of human behaviour online.
As these technologies come to permeate our lives, and our social feeds bloat with fake influencers, disinformation and content designed to extract our attention, how can modern democratic societies survive? We may be witnessing a real-time answer to that question.
As doctors rally to declare the impacts of child phone use a public health emergency, and coalitions of parents, teenagers and school districts take Meta, YouTube, Snap and TikTok to court to face accusations of making products intentionally addictive to young people, communities are rising up together in defence of their values. The tide of AI hype appears finally to be receding, and we’re finding out who’s been swimming naked. It’s about time.
Governments are waking up
With AI-generated content, workslop and deepfakes on the rise, it’s getting harder to trust what we see (or read). That alone is reason enough to consider carefully how these powerful technologies should be regulated, developed and deployed in a way that supports the healthy functioning of our societies, economies and wider ecosystems.
Despite heavy lobbying to the contrary, many governments are starting to wise up to the harms. With outspoken critics such as Cory Doctorow, Karen Hao, Timnit Gebru, Gary Marcus, Jonathan Haidt and Tristan Harris finally making headway in mainstream media, heads of state are heeding the call and seeking ways to mitigate the risks. From the banning of phones during school hours to preventing under-16s from using social media, steps are being tested and taken to redress the balance and establish a more sustainable way of engaging with the digital world.
A course correction can be hijacked to bring in greater harms through the back door (bans, for instance, are notoriously easy to circumvent, and bringing in digital IDs to identify minors raises a whole host of issues around data capture, surveillance and privacy). No doubt some of these efforts, however well-intentioned, will fail or run into issues of their own, yet the fact that we are starting to grapple with the risks and rewards of digitalisation should give us cause for hope.
For too long, we have sacrificed our sanity and psyches at the altar of engagement. It is time we chose something different, but to do so, we must recognise that we have enormous collective power and agency to shape the trajectory of the world to come. While the most prominent tech players would have us believe in the fundamental inevitability of an AI-dominated world (they have profits to make, after all), the truth is that the future is not yet written. We need to critically assess whether the costs we are already seeing are bringing us closer to a more vibrant, regenerative future, or whether we are cannibalising the very life support of the societies, and indeed ecologies, that sustain us.
The choice before us is stark: do we really want to live in a world in which AI slop floods our media and news channels, where children lose hours of sleep and social time glued to increasingly persuasive devices, and agentic workforces lay waste to human economies? Where datacentres siphon off diminishing freshwater supplies, their smog-forming nitrogen oxide outputs contaminating entire communities? Do we really want to pour our precious resources and human labour into the infrastructures rapaciously draining power grids at the very moment that we’re bursting through irreversible climate tipping points?
I refuse to believe that this dystopian dream is the best we can do.
As artificial intelligence extends to us the ostensible power of gods, we must remember what it is that makes life worth living: one another. And it is from this place that we must ask ourselves: how might we use our technologies to serve and protect all that we love?