
Is Generative AI Harmful for Humans?

We will examine threadbare three issues troubling academics, celebrities and powerful politicians across the world, including the US Senate and the European Union.


 Is Generative AI harmful for mankind?
 Have OpenAI and Microsoft violated copyright law wilfully and illegally?
 Should the growth of Generative AI be curbed?


All three questions have different answers, but let us not rush to conclusions; instead, let us dive into the genesis of the core issues and the origin of the problem. Artificial Intelligence, including the kind of Generative AI now found in chatbots, is not new. It emerged around seventy years ago, just as the first computers arrived. Academics and intellectuals, however, did not take to Artificial Intelligence kindly. They feared it was out to compete with and dominate human intelligence. A lot of scary storytelling and fear-mongering against AI has happened since then. Yet AI systems have done nothing fearful to justify the allegations.
So why fear them now?


Let us look at what happened in the last decade. Around 2014, a machine learning technique was formulated called generative adversarial networks (GANs) that could produce convincing fake audio and video of real people. The first high-profile victim of this activity, later known as deepfakes, was none other than Republican Presidential candidate Donald Trump. A few words in many of his video speeches were manipulated using GANs to make him sound like a buffoon and a political imbecile. The fake versions had over 90% original content, with bits of manipulated speech that were ridiculously incorrect yet difficult to detect as fake. The Republicans hit back with deepfakes of the opposition as Trump rode to power. Politicians and celebrities remain prime targets of deepfake technology, which has grown more sophisticated and harder to detect by the day.
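
For readers curious about the mechanics, the sketch below shows the adversarial idea in its simplest form: a generator learns to produce samples that a discriminator cannot tell apart from real data. This is a toy one-dimensional example in PyTorch, written only for illustration; real deepfake systems use far larger image and audio models, and the network sizes and data here are assumptions made for brevity.

```python
# Minimal toy GAN: the generator learns to mimic a 1-D Gaussian.
# Illustrative only; not the face/voice models used for deepfakes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Generator: maps random noise to a fake "sample"
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
# Discriminator: scores how "real" a sample looks
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0   # "real" data drawn from N(3, 0.5)
    fake = G(torch.randn(64, 8))             # the generator's forgeries

    # 1) Train the discriminator to tell real from fake
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Train the generator to fool the discriminator
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# After training, generated samples cluster around the "real" mean of 3.0
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```

The same tug-of-war between a forger and a detector, scaled up to faces and voices, is what makes deepfakes so hard to spot.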

By 2020 OpenAI had tested the skills of Generative AI in multiple formats, and by 2022 ChatGPT, backed by Microsoft funding that would grow to $10 billion, started stoking fears of Generative AI taking over the world. Apart from deepfakes, generative AI's ability to replicate original text, voice, graphics and images has enabled large-scale academic plagiarism that is extremely difficult to detect. Replication and plagiarism have never been so easy. The Authors Guild of the US has launched a class action suit against OpenAI and Microsoft, accusing them of violating the copyright of thousands of books that Generative AI uses for training its ChatGPT tool and even GitHub's Copilot coding tool.


Meanwhile, Microsoft-backed OpenAI started co-opting developers to produce impressive text and images close to those generated by human intelligence and years of learning. Generative AI powered by machine learning (ML), along with the publicity hype around ChatGPT, made its use widespread. And as many developers started actively using ChatGPT, which gave real, scalable benefits, PR firms and news agencies were fed the myth that Microsoft had discovered a 'Google killer' in ChatGPT. So is ChatGPT a Google killer?


Not quite, says Yann LeCun, VP and Chief AI Scientist at Meta, in an interview with AIM. "I don't think any company out there is significantly ahead of others," he said. "But they [OpenAI] have been able to deploy their systems in a way that they have a data flywheel. So the more feedback they have, those systems help them to generate more feedback and later adjust it to provide better outputs," he explains. "I do not think those systems in their current form can be fixed to be intelligent in ways that we expect them to be," said LeCun. He explains that these systems are entertaining and impressive but not really useful. "To be useful, they have to make sense of real problems for people, help them in their daily lives as if they were traditional assistants, completely out of reach," he added, painting the real picture.


And since Generative AI in its current form is, like all other Artificial Intelligence tools, a great data gatherer but a poor data user when it comes to solving real-time human problems, it is far from being a Google killer, or even mildly harmful to mankind. So our answer to the first question, going by Yann LeCun, Chief AI Scientist at Meta, is no: Generative AI is not harmful to mankind.
Why?
Because though it can collate data and churn out new text, it cannot put that to use intelligently to solve human problems; human intelligence still has to manage that. Generative AI, or AI in any form, cannot independently harm or help humans without human intervention.

Have OpenAI and Microsoft used copyrighted material for free?

Is Generative AI a lawbreaker?


Are OpenAI and Microsoft cheating millions of authors and publishers?
By scraping copyrighted data from the net, are they testing and building a tool that will render jobless the very authors from whom they took that data?

On 20th September 2023, the Authors Guild of the US, along with a dozen authors including John Grisham and David Baldacci, launched a class action suit against OpenAI and Microsoft, accusing them of violating the copyright of thousands of books that Generative AI uses for training its ChatGPT tool, and of GitHub's Copilot, which Microsoft is using to help develop an AI-powered coding assistant.

It was a big fight out there.


By December 2023, OpenAI had started approaching major publishing houses seeking collaboration on developing AI tools, and tied up with a few, including Axel Springer.


"We are in the middle of many negotiations and discussions with many publishers. They are active. They are very positive. They're progressing well," OpenAI's Tom Rubin told Bloomberg.


A week later, The New York Times became the first large media organisation to join the fight, suing OpenAI and Microsoft for copyright infringement and opening a new front in the increasingly intense legal battle over the unauthorised use of published work to train artificial intelligence technologies.

Microsoft is not only backing GitHub Copilot but is also shipping its own Copilot, which you will get as a feature with your next Windows update, along with a new Copilot key joining the control keys on your keyboard. The new AI-powered keyboards, announced from Las Vegas, are expected to launch in February 2024. Copilot will help you write code in minutes, and that may make many of those who write code for a living redundant. Software developers will start using the Copilot assistant with great admiration initially, but may soon find that it can outthink the developer and render them unnecessary.
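
To make concrete what such an assistant looks like from a developer's side, here is a minimal, hedged sketch of asking a hosted language model to draft code. It is not GitHub Copilot itself, which runs inside the editor; it uses the openai Python SDK (v1.x), and the model name and prompt are assumptions chosen purely for illustration.

```python
# Hedged sketch of an AI coding-assistant workflow: send a plain-English
# request to a hosted LLM and receive draft code back. Requires the
# openai package (v1.x) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name; any chat model would do
    messages=[
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Write a Python function that "
                                    "deduplicates a list while keeping order."},
    ],
)

# The draft still needs a human to review, test and take responsibility
# for it, which is the article's point about human intervention.
print(response.choices[0].message.content)
```

Whether such drafts make developers redundant or simply faster is, of course, exactly what the lawsuits and the hype are arguing about.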

So have OpenAI and Microsoft used copyrighted material for training, for free?

There is absolutely no doubt.

OpenAI and Microsoft have used copyrighted material available on the internet for training their AI tools, both in ChatGPT and in the developer Copilot. There is no other practical way to train such AI, because text free of copyright would largely predate the 1920s and be too dated to be useful. So it is the latest copyrighted material from the internet that is used for training Generative AI, without permission from its creators. But that does not mean they can be forced into paying decent compensation to the original creators of the content.

Microsoft and OpenAI have the resources to overpower any resistance from small groups of artists, coders or authors trying to stop them from pursuing those goals free of cost. Even if a few media giants and publishing companies join hands, it is unlikely that they can win a major compensation amount from OpenAI and Microsoft.

Software majors have so far been unrelenting, scraping the internet for data and paying nothing for it. Alphabet has done so previously despite class action suits against it. Meta has been accused of the same, as has Stability AI. Nothing suggests things will change in the near future. One must understand that Microsoft pumped $10 billion into OpenAI partly to use it as a pilot for the experimentation. If any charges are proved and an out-of-court settlement has to be made, it will be made by OpenAI, which will clear the path for Microsoft to operate the business end without hassles.

But the AI industry will perhaps not go scot-free this time around.

The twist in the tale, however, could be the legal action that Getty Images has taken against Stability AI for infringement of copyrighted images. Getty could sue OpenAI and others too. It is likely to prove that AI clones of its images harm its regular business, and thereby stop AI companies from using archived images for training, cloning or any other purpose as an unfair practice. If anyone can prove that a copyright violation damages a company's business, the authorities will almost certainly stop it.