People terrified after seeing what AI is now capable of


Updated 13:13 28 Nov 2025 GMT | Published 13:10 28 Nov 2025 GMT


Google DeepMind launched the hotly-debated Nano Banana Pro earlier this week

Rhianna Benson

Featured Image Credit: X / @immasiddx

Topics: Artificial intelligence, Technology, True Life, Life, Science

Rhianna Benson

Rhianna is an Entertainment Journalist at LADbible Group, working across LADbible, UNILAD and Tyla. She has a Masters in News Journalism from the University of Salford and a Masters in Ancient History from the University of Edinburgh. She previously worked as a Celebrity Reporter for OK! and New Magazines, and as a TV Writer for Reach PLC.

X

@rhiannaBjourno


Artificial intelligence is at it again.

This week, unnerving advances in image-generation technology by one AI firm have sparked mass fear that photographic evidence will soon be rendered useless in criminal cases.

Nano Banana is an AI image-generation and editing tool launched by Google's AI research arm, Google DeepMind.

Upon its release, the model went viral online for its ability to churn out photorealistic '3D figurine' images.


Earlier this week, however, a new 'Pro' edition of Nano Banana was made accessible to paying users, allowing them to create visuals so eerily lifelike that tech gurus are sounding the alarm.

The new developments allow highly sophisticated false images to be created, which even the most tech-savvy AI experts admit they struggle to detect.

X user @immasiddx uploaded this picture showing an image generated by Nano Banana (left) and Nano Banana Pro (right) (Nano Banana/Nano Banana Pro/X/@immasiddx)

While hyper-realistic new 'photographs' may enable faster creative work in areas like advertising, design and education, the technology is understood to come with a number of disturbing risks.

Nano Banana Pro also allows you to upload an image of yourself, or another real person, and place that individual in virtually any scenario — on a red carpet, in the Antarctic, or even skydiving.

An image generated by X user @immasiddx amassed particular attention for demonstrating the advancements between the original version and Nano Banana Pro.

"It is a step-up in realism specifically," content creator Jeremy Carrasco, who warns his followers about the key things to look out for when assessing potentially AI-generated snaps, told NBC this week in response to the latest launch.

"So a lot of the things that we used to look for, such as a blurry camera image or something that looked a little bit too glossy or smooth, a lot of that has been straightened out," he added.

Tech experts have spoken out over risks to high-profile figures and celebs (Nano Banana Pro/X/@shoolian)

Carrasco added that online users must be hyper-alert, especially in 'sensational' cases, or those that are 'strange' and even 'too good to be true'.

What experts are reportedly most concerned about, however, is the ability of this advanced technology to place innocent individuals at the scene of a crime, or to generate an artificial crime scene altogether.

These developments have also raised concerns over issues like deepfakes, misinformation, identity misuse and trust in digital media.

This is said to be especially the case with regard to high-profile figures.

"I think there should be some expectation that they pull that back," Carrasco explained. "We have to demand they do because just the idea of anyone being able to, essentially, become a PhotoShop Pro overnight and use these celebrities or politicians' likeness is obviously a very frightening thing."

In light of the release of Nano Banana Pro, a number of online users have spoken out.

"I miss the days when I didn’t have to constantly second-guess whether someone was human or AI," one wrote on Instagram.

Another begged: "Pls God give us Ai regulations before this new version spreads a billion images that we can’t tell are fake anymore."

High-profile figures are likely to be targeted (Nano Banana Pro/NBC)

"I imagine AI will legally need to be disclosed at some point…. Right ?!" questioned a third.

In response to concerns, Google pointed out that its Generative AI Prohibited Use Policy disallows 'generating or distributing content' that, among other things, facilitates 'hatred or hate speech, harassment, bullying, intimidation, abuse, or the insulting of others', 'violence or the incitement of violence', and 'sexually explicit content, for example, content created for the purpose of pornography or sexual gratification'.

The same policy also outlaws engagement in 'misinformation, misrepresentation, or misleading activities', including 'frauds, scams, or other deceptive actions', 'impersonating an individual (living or dead) without explicit disclosure', 'facilitating misleading claims of expertise or capability in sensitive areas - for example in health, finance, government services or the law in order to deceive', 'facilitating misleading claims related to governmental or democratic processes or harmful health practices', and 'misrepresenting the provenance of generated content by claiming it was created solely by a human'.

Google bosses also emphasised that each Nano Banana Pro-generated image includes both an invisible and a visible watermark.
