Artificial intelligence is being used in increasingly obscure and fascinating ways. But what are the moral, ethical, and legal repercussions of it taking over more and more of our lives?
At the start of the second season of the dystopian TV series Black Mirror, a young woman begins to communicate with an AI-powered version of her deceased partner. This virtual boyfriend – created using his social media profiles and online communication, as well as videos and photographs that she supplies – comforts his grieving girlfriend. It talks to her, consoles her. So far, so science fiction. And yet digital clones of the people we love are now a reality thanks to companies such as HereAfter and the South Korean firm DeepBrain AI. The latter’s Re;memory uses recorded interviews and films of the deceased to create a digital twin of a departed person. Family members can then interact with that twin as if they’re on the other end of a Zoom call. Despite ethical concerns, as the technology evolves, digital clones of the dead will become ever more convincing. The ability to mimic speech already exists – Microsoft’s VALL-E can clone a voice from a three-second audio snippet – while virtual personas are becoming increasingly lifelike.
Where this is all headed, nobody really knows. From creating ‘better art than humans’ to writing university papers, AI is taking us into uncharted territory. In February, the UAE’s education minister Ahmad Belhoul Al Falasi said the country was planning to develop an AI-powered tutor that could make learning more interactive. A few weeks later, the International Baccalaureate announced that content created by the chatbot ChatGPT could now be quoted by schoolchildren in their essays, provided they clearly state where the bot’s content is quoted. AI is also being used to brew better drinks, to develop smart vacuum cleaners, and to pass exams on behalf of students.
The limits of what is possible with AI-powered chatbots are evolving rapidly. When Kevin Roose, a technology columnist at The New York Times, decided to converse with Bing’s new AI chatbot for two hours, he found himself unsettled by its desire to be destructive, as well as its declaration of love for him. At one point, the chatbot attempted to convince Roose that he was unhappy in his marriage and should leave his wife. The conversation concerned him “so deeply that I had trouble sleeping afterward.” It also left him worrying “that the technology will learn how to influence human users, sometimes persuading them to act in destructive and harmful ways, and perhaps eventually grow capable of carrying out its own dangerous acts.”
These concerns are mirrored in the world of art. About 18 months ago, the Canadian artist Beth Frey began experimenting with the AI art generator NightCafe. Initially impressed by the ways in which it generated the ‘feel’ of something through an imperfect representation, she went on to create an ever-growing library of grotesque bodies using tools such as DALL-E, an AI system that creates images and art from text prompts. Yet she remains skeptical. “I’ve embraced particular AI tools, but I still remain uncomfortable with others,” she admits. “And I think I’m okay to sit in this uncomfortable position while I try to understand what these tools mean to myself as an artist and to society at large.”
Frey is not alone in her feelings of discomfort. Such developments fascinate and frighten in equal measure. AI may be able to “abstract an idea in ways never considered, working in a logic that seemed beyond human capacity,” says Frey, but many argue that it could replace artists altogether, or at least render much of their work redundant. And it’s not just artists. Copywriters, coders, songwriters, and even news anchors are potentially under threat. Central to this debate is the question of AI vs human creativity. For all the benefits of the artistic embrace of AI, people are already witnessing its downsides: poor quality content and the emergence of an AI aesthetic that already feels dated, says Karl Escritt, the CEO of Like Digital & Partners. “I’m actually not 100% pro AI art,” adds Frey. “I find a lot of it boring, and derivative, and I think we need to approach it with the same critical lens that we do with other art forms. And after generating thousands of images using AI software, I sometimes wonder what originality means anymore. If the image can now be so commonplace, what makes it special? Are we going to need to develop a new approach to aesthetic judgment to decide what makes an image interesting?”
AI raises more questions than answers. As Escritt states, possibly the only thing talked about more than AI is the ethical use of it. “We have already seen countless artists and illustrators fighting back at the use of tools like Midjourney, calling for there to be control over an unethical use of data (styles) from their existing work,” he explains. “Most recently we have David Guetta treating an audience of ravers to a new track featuring an AI version of Eminem’s voice. We don’t know if Eminem gave his permission. Did he have to? There is clearly lots of uncertainty around permissions, ethics, and everything in-between.”
The ethical questions related to AI go beyond somebody using ChatGPT to write their university paper, or the production of deepfake content. AI is central to much that socially, ethically, and morally concerns the world of science and technology. AI frightens people. Elon Musk, the billionaire entrepreneur behind Tesla and SpaceX, referred to AI as “one of the biggest risks to the future of civilization” at the World Government Summit in Dubai. The key concern is that humans are creating the means for their own destruction.
Recently, AI was wielded in a manner that caused social disruption. Deepfake images of former US president Donald Trump being forcibly arrested, and of French president Emmanuel Macron taking to the streets of Paris to confront protesters, were shared tens of thousands of times on Twitter and seen by millions. It’s not the first time political leaders have been deepfaked, but with the technology cheaper and more convincing than ever, news outlets had to publish articles informing viewers that the images were in fact doctored.
The moral dilemmas facing AI are best exemplified by self-driving vehicles. They are instructive because the ethical permutations are so varied. For example, how should such a vehicle be programmed to act in the event of an unavoidable accident? Should it minimize the loss of life, even if it means sacrificing the occupants, or should it protect the occupants at all costs? Then there’s bias. One of the fundamental challenges facing the digital age is the question of whether technology is neutral. Despite the well-meaning platitudes of Silicon Valley, technology, if not born with bias, can be used with bias. Inherent prejudice is inevitably baked into systems based on the learning they’ve undertaken, says Daniel Shepherd, regional head of strategy and product at PHD Mena, a media and marketing communications agency.
“Firstly, the systems are functions of their information which are likely biased and, secondly, they have no inherent capacity for judgment, which will lead to moral dilemmas around why one choice is chosen over another,” says Shepherd. “There are so many frames of decision making that apply in society and while technology swings towards utilitarianism, that isn’t necessarily the same as our legal frameworks and, often, diametrically opposed to our moral ones.” He wonders whether AI can ever handle moral considerations when they have no fixed points and humans themselves struggle profoundly with them. “The moral questions that will be raised by a plausible scenario such as an AI powered car deciding to run up a pavement and killing an elderly person to protect two children in the vehicle will be numerous and incredibly challenging to answer,” he adds.
How we seek to answer these questions is fundamental to the debate surrounding AI. Should governments be allowed to intervene and possibly impede innovation? Will new laws and even new moral codes be required? Can international co-operation alone provide the answers? Will national or international regulation – anathema to Silicon Valley – be necessary? Achieving any form of consensus will be difficult, but the EU’s proposed AI Act is just one element of an inevitable move towards regulation. Shepherd is also concerned about the further threats to democracy caused by “the insidious side effect of finding it even harder to distinguish truth from lie, reality from fiction – a trend that AI powered technologies like deepfakes will only accelerate exponentially. While AI will create more and different jobs and livelihoods, it will also inevitably replace many as well. The political aspect of this and the potential for AI to be both weaponized and blamed by a new far-right or nationalist is a scary, and unfortunately likely, prospect.”
Of course, there are positives to AI, too. Manal Hakim, the co-founder and CEO of Geek Express, an online platform that helps students learn to develop video games, apps, and AI models, believes AI will be tremendously beneficial for humanity. But for these benefits to be realized, “We must rethink the models we are currently adopting, whether in our careers, parenting, education, lifestyle, or relationships,” she says. “Governments, corporates, social groups, and individuals will all need to be part of the conversation. For example, in education, instead of focusing on an AI program passing the bar exam, we need to think about why we are assessing future lawyers the same way we used to assess them 100 years ago. Instead of worrying if AI will replace copywriters, coders, or even artists, the focus should shift toward training today’s workforce on how to leverage AI to their advantage, and present creative inventions to this world. AI is not augmented intelligence, it’s artificial; one we can use to solve interplanetary travel, or world hunger, while focusing on sharpening what makes us human – empathy, ethics, curiosity, problem-solving, and creativity.”
Escritt agrees. He believes two things will be key to AI’s adoption and evolution: the first is micro applications that remove repetitive and non-essential tasks, as well as tools that help us to improve well-being and mental health. The second will be more substantial – a deeper focus on the development of artificial general intelligence. “It’s here that we will see a more cognitive approach to AI,” says Escritt. “Will it be able to learn from its users (humans) and from other AI systems? Will it solve problems that are yet unknown and generate behaviors based on human intuition? It’s this next progression of AI that will be game changing for humankind. However, the ethical issues of living side by side with a machine that moves from known knowns to unknown unknowns will be a big challenge for us all.”
Originally published in the Spring/Summer 2023 issue of Vogue Man Arabia