
Ethical Grey Areas Emerging with Generative AI

By Kaushik Bharati | May 11, 2026

Generative AI has moved from research labs into everyday life faster than almost any technology before it. Tools that generate written text, images, songs, code, and even human voices are now widely available. On one side of the coin, there are definite benefits; on the flip side, there are harmful effects as well. Benefits include lower barriers to creativity and dramatic productivity gains. Harmful effects include deepfakes, political propaganda, digital forgeries, disinformation, and scams and frauds, to name a few. But between the unambiguously good and the unambiguously bad lies an expanding territory of grey areas, which are discussed below.

Training Data and Copyright
Most large generative models are trained on huge datasets scraped from the open web, much of which is copyrighted. Whether this counts as fair use, infringement, or something altogether new is one of the central disputes of the moment. AI-powered image generators, such as Midjourney, Stable Diffusion, and DALL-E, are deeply embroiled in this controversy. Notably, Stability AI has been sued by Getty Images over alleged use of its photos, and several other lawsuits are moving through the courts. A similar picture emerges for textual material. OpenAI and Anthropic have both been sued by news organizations, including The New York Times. Moreover, GitHub Copilot has been challenged over its tendency to reproduce open-source code without the required licenses.

Style Mimicry
Even when a model does not reproduce a specific image, it can convincingly imitate a living artist’s style. Greg Rutkowski, a Polish digital painter, became one of the most prominent names in early Stable Diffusion prompts because his style was distinctive and his work was abundant in the training data. Style itself is not protected by copyright, which is part of what makes this a grey area. An art student imitating a master’s style is a commonly accepted practice for handing down art to the next generation. However, when an AI model allows millions of people to imitate the same style, it arguably crosses an ethical line. The same scenario exists for audio: the music generators Suno and Udio have both been sued by major record companies.

Voice Cloning and Consent
Tools like ElevenLabs and Resemble AI can produce a convincing replica of a person’s voice from a few seconds of audio. These can be used legitimately for dubbing and voice recovery. However, they may also be misused. Examples include cloned-voice scams, fraudulent celebrity endorsements, and fabricated political robocalls, among others.

The grey area sits between these poles and may encompass cloning a deceased relative’s voice for a tribute or recreating the voice of an actor from a bygone era. While these may be permissible without consent since the person is dead, recreating the voices of living celebrities ought to require consent. An illustrative example is Scarlett Johansson’s public objection to OpenAI’s GPT-4o demo, which featured a voice that sounded much like her own. This raises the question of how close an imitation has to be before consent matters.

Synthetic Media and the Erosion of Trust

Video tools such as Sora, Runway, Google Veo, and Kling have reached a level of realism at which casual viewers cannot reliably tell synthetic footage from genuine footage. This can erode public trust in visual media. Consequently, even real footage is liable to be dismissed as fake, which can have a profoundly negative impact at the societal level.

Authorship and Academic Integrity
When a student writes an essay with ChatGPT, who is the author? When a programmer codes using GitHub Copilot, who is responsible for the resulting bugs and license violations? Educational institutions have struggled to set consistent policies. While some treat any AI use as plagiarism, others permit it with disclosure. Detection tools are often unreliable; as a result, some students are falsely accused while genuine misuse goes unflagged.

Another cause for concern is the deluge of AI-generated content on platforms intended exclusively for human contributions. Self-publishing platforms are flooded with AI-written books, often mimicking the writing styles of celebrated authors. Notably, Stack Overflow banned ChatGPT-generated answers because, although they sounded confident, they were often wrong.

Parasocial Relationships
Companion apps such as Replika and Character.AI let users converse with AI personas designed to feel like friends or partners. For some users, the experience may be uplifting; for others, it may be troubling. In rare cases, a user may develop a genuine emotional attachment to the AI persona, leading to social dysfunction. The grey area concerns the app designers themselves, who bear a social and moral responsibility they should not lose sight of. It is high time to ask some hard questions: How much emotional attachment is too much? Should excessive use be limited? Should parental controls be in place to protect minors?

Looking Ahead
It is tempting to want clean rules, but generative AI has produced a plethora of problems for which clean rules may not be available anytime soon. The technology is evolving much faster than legislation, industry standards, and cultural norms. Therefore, the most useful posture is the one good professionals have always taken in unsettled domains: treating consent as the default, taking attribution seriously, preferring disclosure to concealment, and making choices prudently.

Keywords

Generative AI, Ethical grey areas, Copyright infringement, AI training data, Style mimicry, Voice cloning, Deepfakes, Synthetic media, Academic integrity, AI authorship, Scholarly publishing, Digital ethics

Kaushik Bharati

Dr. Kaushik Bharati is a Health Policy Consultant at UNESCO, New Delhi and former Consultant at WHO. He holds a PhD from the Calcutta School of Tropical Medicine, India and post-doctoral fellowship from the Liverpool School of Tropical Medicine, UK. He has held important positions in India and abroad, including the US, UK, France, and Australia. He has expertise in Infectious Diseases, Immunology, and Public Health. His scientific career spans almost three decades, with 123 publications to his credit. He has 29 years of editorial experience and is currently Editor-in-Chief of the SAAP Journal of Integrative Physiology (Colombo, Sri Lanka), the official journal of the South Asian Association of Physiologists (SAAP). He is also Editor-in-Chief of the Journal of Clinical Genetics and Genomics (Windsor, UK). He is Vice President of the Physiological Society of India and member of three societies, including the Infectious Diseases Society of America (Arlington, Virginia), American Society of Clinical Oncology (Alexandria, Virginia), and Royal Society of Tropical Medicine and Hygiene (London). He is also a Fellow of the Royal Society for Public Health (London). He has received 20 awards and distinctions for his research work from India, New Zealand, UK, and USA.


Disclaimer

The views and opinions expressed in this article are those of the author(s) and do not necessarily reflect the official policy or position of their affiliated institutions, the Asian Council of Science Editors (ACSE), or the Editor’s Café editorial team.
