ChatGPT
The Problem is Here, Now
As the glow from our screens has become the primary source of social interaction in an increasingly digitised age, the crisis of loneliness among the young in Western societies has spiralled.
Loneliness
At the heart of this unexpected irony is the rise of an AI technology, designed with the intention of fostering communication and interaction, that has inadvertently played its part in deepening the crisis.
Research has shown that prolonged use of social media fosters feelings of loneliness and depression, particularly among the young, who are already facing an epidemic of mental health problems. The online world often presents idealised images of other people's lives, leaving those who feel they cannot measure up with a sense of inadequacy and isolation.
AI chatbots are increasingly being used for companionship. They simulate conversation brilliantly, but they neither experience human emotions nor understand the depth of human experience. In the long run this limitation exacerbates loneliness, because the companionship desired and promised lacks the touch and feel of real friends.
"Bad actors"
Artificial intelligence's much-touted promise, exemplified by OpenAI's GPT, has proved double-edged. A tool intended to aid and enhance human interaction has been harnessed by the less scrupulous to create a chasm of isolation and misunderstanding, to amplify social media feeds and to manipulate individual and collective viewpoints.
Echo chambers and filter bubbles, sculpted by AI algorithms, have become insidious artefacts of our digital discourse. By encasing users within their existing beliefs, they proliferate misinformation and fuel polarisation. The arrival of GPT, which can generate content that intensifies these echo chambers, added another disturbing twist to the tale.
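To see how such a loop can arise, consider a deliberately minimal sketch in Python. This is not any platform's actual ranking code; the rank_feed function, the topic labels and the click behaviour are all invented for illustration. It simply shows that a ranker which rewards past engagement narrows what a user sees.

```python
# Toy model of a filter-bubble feedback loop: rank items by how often the
# user has engaged with their topic, then record a click on the top item.
from collections import Counter

def rank_feed(items, engagement_history):
    """Score each item by the user's past engagement with its topic."""
    topic_counts = Counter(engagement_history)
    return sorted(items, key=lambda item: topic_counts[item["topic"]], reverse=True)

items = [
    {"id": 1, "topic": "politics_a"},
    {"id": 2, "topic": "politics_b"},
    {"id": 3, "topic": "sport"},
]

history = ["politics_a"]              # a single early click on one viewpoint
for _ in range(5):
    feed = rank_feed(items, history)
    history.append(feed[0]["topic"])  # the user clicks the top-ranked item

print(history)  # one topic dominates after only a few rounds
```

Even this crude loop converges on a single topic; real systems, with millions of items and models trained directly on engagement, can do the same far more subtly.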
Disinformation, riding the wave of fake news, has reshaped public discourse. Fabricated news articles penned by GPT have swayed opinions on everything from politics to public health. The spread of these falsehoods has not only fomented confusion and mistrust but also eroded faith in legitimate news sources.
Scamming
ChatGPT's ability to write fluent, polished prose has made criminals' scamming and phishing attacks far more convincing than ever before. Historically, one telltale sign of such fraudulent communications was imperfect language: misspelled words, grammatically awkward sentences, or phrasing that just seemed 'off'.
That tell is vanishing. Instead of clumsily constructed pleas or threats, intended victims now receive professionally crafted emails, text messages and social media posts that mirror the style of legitimate institutions or trusted individuals.
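A toy sketch makes the shift concrete. Assume, purely for illustration, that an old-style filter flags a message when too many of its words fall outside a small dictionary, a crude stand-in for spotting misspellings. None of this is real anti-fraud code; the dictionary and the sample messages are invented. The point is that fluent, machine-written text gives such a filter nothing to catch.

```python
# Toy misspelling-based filter: flag messages with a high ratio of words
# that fall outside a known-word dictionary (invented for illustration).
DICTIONARY = {
    "dear", "customer", "your", "account", "has", "been", "suspended",
    "please", "verify", "details", "to", "restore", "access",
}

def misspelling_ratio(message):
    """Fraction of words not found in the dictionary."""
    words = [w.strip(".,!").lower() for w in message.split()]
    misspelled = [w for w in words if w not in DICTIONARY]
    return len(misspelled) / max(len(words), 1)

clumsy = "Dear custmer, you acount has ben suspnded, plese verifey detials."
fluent = ("Dear customer, your account has been suspended. "
          "Please verify your details to restore access.")

print(f"clumsy: {misspelling_ratio(clumsy):.2f}")  # high ratio: the old tell
print(f"fluent: {misspelling_ratio(fluent):.2f}")  # zero: nothing to catch
```

The clumsy message lights up the filter; the fluent one, of the kind a language model produces effortlessly, sails straight past it.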
Advertising and propaganda
Advertising and propaganda have gained an insidious new tool in GPT. Its capability for crafting persuasive, targeted narratives has been exploited to subtly influence perceptions of products, ideas, or issues, thereby blurring the boundary between information and manipulation.
Meanwhile, the floodgates have opened for extremist content. GPT's capacity to generate high volumes of apparently well-reasoned material, whether hate speech, conspiracy theories or violent ideologies, has amplified these damaging views and, worryingly, expanded their reach.
Emotional manipulation
Invasive and emotionally manipulative tactics have also come to the fore. GPT, armed with an individual's writings, can generate messages that appear to know reassuringly private details and so seem trustworthy. Its potential for emotional manipulation is starkly apparent as it is used to craft messages that exploit fears, insecurities and desires.
Deepfakes, perched at the uncanny valley of realism, have started casting long shadows. Future iterations of GPT, combined with other technologies, could become instrumental in creating deepfakes that manipulate public opinion or defame individuals.
Perhaps the most disquieting revelation has been the sheer range of issues that have been manipulated. From anti-immigration to pro-gun stances, from climate change denial to anti-free healthcare views, from pro-privacy to anti-criminal justice reform sentiments, the spectrum has been staggeringly broad.
While acknowledging these challenges, we should also recognise the positive applications and potential of AI. However, in our rush to embrace the future, we must not overlook the urgency of the present. Developers, policymakers, and the public need to address immediate concerns surrounding AI use.
The problem isn't merely on the horizon; it is here, now.
The urgency of our response must match the immediacy of the issue. We must not let fears of a distant future divert our attention from the pressing issues of today, while AI companies continue to profit in an unregulated arena.