Is It Safe To Upload Your Photos On ChatGPT For A Studio Ghibli Makeover? Find Out Here
OpenAI's Ghibli-style AI image generator has gained massive popularity, but concerns over privacy and data security have also emerged. Since its launch last week, the tool has taken social media by storm, with everyone from politicians and celebrities to everyday users sharing AI-generated portraits in the distinctive style of legendary animator Hayao Miyazaki. The latest version allows users to transform their own photos, or even viral internet memes, into stunning Ghibli-style artwork.
However, not everyone is embracing the trend. Digital privacy advocates on the social media platform X have raised alarms, suggesting that OpenAI could be using this viral craze to collect vast amounts of personal images for AI training. While users enjoy experimenting with the feature, critics warn that they may unknowingly be providing fresh facial data to OpenAI, raising serious concerns about privacy.

Ethical Concerns and AI's Use of Copyrighted Art
This trend has also reignited ethical debates surrounding AI tools trained on copyrighted creative works, sparking questions about the future of human artists. Miyazaki, 84, known for his hand-drawn animations and whimsical storytelling, has been openly skeptical about AI's role in animation.
Activists argue that OpenAI's data collection strategy extends beyond AI copyright issues. They claim that by encouraging users to voluntarily upload their photos, the company bypasses legal restrictions that typically apply to web-scraped data. Under the EU's General Data Protection Regulation (GDPR), OpenAI must justify scraping images from the internet under the legal basis of "legitimate interest," which requires strict safeguards to protect user privacy and ensure compliance. This includes proving that data collection is necessary, does not infringe on individuals' rights, and follows strict transparency and accountability standards.
Privacy Expert Raises Concerns Over AI Data Collection
In a detailed post on X, Luiza Jarovsky, co-founder of the AI, Tech & Privacy Academy, highlighted the legal loophole OpenAI benefits from. She explained that when users voluntarily upload their images, they provide explicit consent under Article 6(1)(a) of the GDPR, shifting the legal framework in OpenAI's favor. Unlike web-scraped data, these voluntarily submitted images are not subject to the same "legitimate interest" balancing test.
"OpenAI's privacy policy clearly states that the company collects personal data input by users to train its AI models unless they have opted out," Jarovsky wrote.
She further warned that OpenAI is gaining unrestricted access to private and personal images, images that only OpenAI retains in their original form. Other AI companies and social media platforms will only have access to the transformed, "Ghiblified" versions. "The trend is ongoing, and people are realizing that instead of using third-party providers, they can simply upload their pictures to ChatGPT whenever they want a fun avatar," she added.
Cybersecurity Experts Urge Caution
Cybersecurity experts have also joined the debate, cautioning users about potential risks. Himachal Cyber Warriors, a group that describes itself as cybersecurity professionals, issued a warning:
"Think before you #Ghibli. That cute Ghibli-style selfie? It might cost more than you think. Your photo could be misused or manipulated. AI may train on it without your consent. Data brokers might sell it for targeted ads. Stay cyber smart. Your privacy matters."
What ChatGPT Said About Uploading Personal Photos
OpenAI has yet to release an official statement regarding the privacy and security of user data in the Ghibli-style AI generator. However, when HT asked ChatGPT directly whether it was safe to upload personal photos, the AI responded:
"No, it's not safe to upload personal photos to any AI tool unless you're certain about its privacy policies and data handling practices. OpenAI does not retain or use uploaded images beyond the immediate session, but it's always best to avoid sharing sensitive or personal images with AI services. If privacy is a concern, consider using offline tools or apps specifically designed for secure image processing."
Facial Data Breach: Risks of Uploading Photos for AI Transformations
The Outabox data breach in Australia illustrates these risks, raising serious concerns about privacy and AI-driven surveillance in public spaces. Initially deployed during the COVID-19 pandemic for temperature monitoring, Outabox's facial recognition kiosks were later repurposed to identify individuals in gambling self-exclusion programs. This shift in usage has sparked ethical debates over the handling of sensitive biometric data.
A website called "Have I Been Outaboxed," allegedly created by former employees in the Philippines, claims that over a million personal records were mishandled. The leaked data reportedly includes facial biometrics, driver's licenses, addresses, and other sensitive information. The site also alleges that membership data from gaming supplier IGT was compromised, though IGT has denied this claim.
Outabox has acknowledged the breach and is working with affected clients to address the issue. In response, New South Wales police and federal agencies launched an investigation, which resulted in the arrest of a Sydney man on blackmail charges. Authorities confirmed that the leaked database contained over one million records, heightening concerns about data security and privacy.
The breach highlights the growing risks of facial recognition technology, not just in security applications but also in AI-powered photo transformations. Many people use these tools to create digital avatars or stylized portraits, often without considering the potential privacy risks. If platforms lack strong encryption and data protection measures, uploaded photos can be exploited, leading to identity theft or other cybercrimes.
Facial recognition systems, even those used for entertainment, can be prime targets for cybercriminals. In previous incidents, poorly secured databases containing facial recognition data have been leaked, exposing millions of people to fraud and impersonation. Without proper safeguards, personal biometric data can be misused in ways that threaten privacy and security.
This incident has reignited debates over the ethical implications of facial recognition technology, particularly its accuracy and the risks of misidentification. Privacy advocates stress the need for stricter regulations to protect biometric data and prevent future breaches. Before uploading any photos to online transformation services, users should ensure the platform has clear privacy policies, strong encryption, and robust data security measures. While AI-generated avatars and facial modifications can be fun, protecting personal data from unauthorized access and misuse is essential.
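One practical precaution, beyond checking a platform's privacy policy, is to strip identifying metadata from a photo before uploading it anywhere: JPEG files often carry EXIF tags that can include GPS coordinates, device identifiers, and timestamps. The sketch below is a minimal, stdlib-only illustration (not a tool the article itself recommends) that detects and removes the EXIF APP1 segment from a JPEG byte stream:

```python
import struct


def has_exif(data: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF APP1 segment."""
    if not data.startswith(b"\xff\xd8"):  # SOI marker: not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:  # SOS: compressed image data follows, stop scanning
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        # An EXIF APP1 segment's payload starts with "Exif\x00\x00"
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length  # skip marker bytes plus segment body
    return False


def strip_exif(data: bytes) -> bytes:
    """Return a copy of the JPEG with any EXIF APP1 segments removed."""
    if not data.startswith(b"\xff\xd8"):
        return data
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        if marker == 0xDA:
            out += data[i:]  # copy SOS and the compressed image data untouched
            return bytes(out)
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i:i + 2 + length]
        if not (marker == 0xE1 and segment[4:10] == b"Exif\x00\x00"):
            out += segment  # keep every segment that is not EXIF
        i += 2 + length
    out += data[i:]
    return bytes(out)
```

Dedicated tools such as exiftool handle the many JPEG/TIFF edge cases far more robustly; this sketch only shows that metadata removal is a simple byte-level operation, and that stylized "avatar" outputs are not the only thing a platform receives when a raw photo is uploaded.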