Moral Implications in Generative AI: The Importance of Consent and Legal Considerations

Introduction  

2023 was certainly the year of artificial intelligence. While the technology has been around for a while, it truly took center stage as chatbots went viral and governments began taking the risks associated with AI seriously. Multimodal models such as OpenAI's GPT-4 demonstrated the ability to process not just text but images and audio as well, and this revolution is only going to snowball in the coming years. That raises a valid concern: will AI ever align with human values? We will have to wait and see. For now, there are few limits on what the technology can do.

Research has shown that 35% of today's customers want to see more businesses use AI-powered chatbots to improve communication and offer a better customer experience. The businesses that obliged were rewarded: after training their website chatbots, they saw a 39% improvement in chat sessions, and satisfaction rates skyrocketed to 90%. (Source: Forbes)

The Dark Side of Generative AI

However, we can't ignore the dark side of this transformative technology. Biases in AI are common knowledge, so it is no surprise that generative AI has come under scrutiny in the last few years, raising broader questions about the ethics and regulations that govern it. In 2023, the tech industry alone laid off around 260,000 employees (roughly half the population of Wyoming), a staggering 60% increase over the previous year. Even though the pandemic and recent global inflation played a role, a major cause remains the fact that AI can do much of what a human can, and often more efficiently. Unfortunately, but not surprisingly, more than 5,500 employees were laid off in just the first two weeks of 2024. (Source: CNN)

Generative AI is extremely powerful: it enables faster content creation in large volumes, producing everything from images and video to audio and 3D models. For professions such as graphic design and marketing, it is both a threat of job loss and an augmentation of human creativity. Yet an even bigger threat appears in the form of deepfakes.

A deepfake is false content, created with AI techniques, that depicts a real person or entity saying or doing something they never did, usually without their consent. With new and improved deepfake techniques emerging every day, it is becoming almost impossible to distinguish a deepfake from genuine content. And the barrier to entry is low: a video deepfake reportedly costs about $3 to generate and requires only around 250 images of the target. Since people post on social media regularly, and platforms remain vulnerable to hacking and data leaks despite privacy settings and policies, such images are easy to obtain. Similarly, fake audio can be produced for about $10 per 50 words. Convenience and affordability go hand in hand here, which should prompt regulators to decide how far they will allow the technology to go.

Victims of Deepfakes

Case in Point: In 2019, a group of cybercriminals used a deepfake audio tool to con the CEO of a UK-based energy firm into authorizing a fraudulent fund transfer of a whopping $245,000. They did this by imitating the voice of the CEO of the German parent company, demanding an urgent wire transfer to a Hungarian supplier and assuring the UK-based CEO of reimbursement. After the CEO agreed, the money was wired to an account in Mexico and then on to other locations, making it nearly impossible to trace and identify the fraudsters. Reports on the matter are alarming, with figures suggesting that cybercriminals attempt to steal US$301 million per month via BEC (business email compromise) scams. (Source: Trend Micro)

In another case, a renowned industrialist and former chairman of the largest conglomerate in India was targeted. Deepfake videos of the industrialist were shared on Instagram, showing him giving investment advice and seemingly luring viewers into risk-free investment opportunities. In the video, the chairman even appeared to refer to the Instagram user who posted it as his manager. This type of non-consensual, false content not only damages the reputation of the person being imitated but also leads impressionable audiences to make hasty decisions and lose large amounts of money.

Just recently, AI-generated images of an A-list celebrity went viral on social media. Given the provocative nature of these images, many spoke out against the lack of legal protection for victims in a space where AI can generate images of nearly anything and anyone without the targets' consent. The celebrity is reportedly looking to sue the companies that proliferated the tool which created these images. Unfortunately, the law remains ambiguous here, as the United States has no federal law on the matter.

What Does the Law Say on the Matter?

This does lead one to wonder: in a country as prominent as the United States, what does the law look like? Of the 50 states, only 10 have enacted laws addressing provocative and otherwise damaging deepfakes. Some other states have existing laws on the non-consensual distribution of images that may also cover AI-generated content. However, many of these laws require that the content depict aspects of the victim's genuine physicality. In other words, they provide no protection against AI-generated false content: a perpetrator who uses only the victim's face on an otherwise synthetic body technically stays within the law. The states offering specific legal remedies include California, Florida, Georgia, Virginia, Hawaii, Illinois, Minnesota, New York, and South Dakota. Even so, this patchwork comes with issues of its own.

The Gap in the Legal System

The celebrity targeted by the AI-generated images resides in Tennessee, where no law explicitly prohibits the distribution of deepfakes. However, the celebrity also spends much time in New York, which, as mentioned above, does have laws on the issue. Yet even if the celebrity chooses to sue in New York, enforcement becomes problematic if the perpetrators are not in the United States. Considering how vast the internet is, they could very well be on the other side of the globe, with little practical way of bringing them to justice.

The law is therefore fragmented, and that only ends up hurting the victim. Where should they file suit? Should they sue the perpetrator or the distributing company? Should they pursue criminal charges or a civil claim? With the law so vague, is it even worth it? Without a federal law, all these questions arise and leave victims of deepfakes with limited options, causing distress and turmoil. Where do they go, and what can they do?

The Way Forward

Protecting someone against false content is a matter of human dignity, which should not carry more weight in one state than in another. Protection should be equal across all states, regardless of where the victim happens to be, and a proper remedy should be available whenever non-consensual deepfake content is generated. There is a need for a federal law that protects everyone from this harm.

A more radical and constructive approach would be to change the current digital ecosystem itself, given the problems it generates. AI systems within this ecosystem should be adapted and trained with fundamental human rights in mind. Currently, the ecosystem is designed so that platforms increase advertising revenue by maximizing user attention and engagement, and the social costs of that design are disproportionately higher than its benefits. There is an urgent need to rethink and remould the system so that human dignity is not compromised.

Moreover, the very technology causing the problem can be part of the solution. Combined with existing verification and forensic techniques, AI-based approaches can be used to detect deepfakes. Generative AI is constantly evolving, so it is essential that detection methods adapt accordingly. Think of it as a cat-and-mouse game, with deepfake techniques as the mouse: the cat (AI-based detection) must keep up, because failing to do so will continue to damage reputations and cause losses.
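To make this concrete, here is a minimal sketch in Python of how an AI-based detector might score a video frame by frame. It assumes a binary real-vs-fake image classifier has already been fine-tuned; the checkpoint name deepfake_detector.pt, the input clip.mp4, and the frame-sampling interval are illustrative placeholders, not a reference implementation.

```python
# Minimal sketch of frame-level deepfake detection.
# Assumptions (not from this article): a binary real-vs-fake image
# classifier has been fine-tuned and saved as "deepfake_detector.pt".
import cv2
import torch
import torch.nn as nn
from torchvision import models, transforms

def load_detector(weights_path: str) -> nn.Module:
    # A ResNet-18 backbone with a single-logit head: higher means "fake".
    model = models.resnet18(weights=None)
    model.fc = nn.Linear(model.fc.in_features, 1)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

def score_video(path: str, model: nn.Module, every_nth: int = 30) -> float:
    # Sample every Nth frame, score each, and average the probabilities.
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Resize((224, 224)),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # OpenCV loads BGR
            batch = preprocess(rgb).unsqueeze(0)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        idx += 1
    cap.release()
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    detector = load_detector("deepfake_detector.pt")  # hypothetical checkpoint
    print(f"Probability of deepfake: {score_video('clip.mp4', detector):.2f}")
```

In practice, production detectors combine frame-level cues like these with temporal, audio, and provenance signals, since any single signal can be fooled as generation techniques improve.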

By Mohaimin Rana
