
ChatGPT Mix-Up: Bot Mistakenly Names Zohran Mamdani’s Mother as Mei Ling Tan

ChatGPT recently found itself in a light-hearted muddle after giving the wrong answer about New York politician Zohran Mamdani's family. When asked about his background, the chatbot confidently declared that Zohran's mother was Mei Ling Tan, a writer from Singapore. However, in reality, his mother is the acclaimed Indian filmmaker Mira Nair.

When OneIndia questioned the error, ChatGPT immediately went into apology mode, admitting it had made 'a big mistake'. The next question was obvious: had it at least corrected itself to avoid repeating the blunder? ChatGPT explained that it cannot update or store memory during chats, and that such changes can only be made by OpenAI's internal teams. It added that users can report mistakes through the feedback option for review. In short, ChatGPT cannot 'remember' its own errors and simply assures users it will do better next time.

AI Summary

AI-generated summary, reviewed by editors

ChatGPT recently gave incorrect information about the family of New York politician Zohran Mamdani, stating that his mother was the Singaporean writer Mei Ling Tan, when in reality his mother is the renowned Indian filmmaker Mira Nair. ChatGPT acknowledged that it cannot update or store information during a chat, and that users can report errors through the feedback option.

The amusing exchange, which left OneIndia both entertained and intrigued, comes at a time when concerns about the reliability of artificial intelligence tools are growing worldwide. A recent study by the BBC and the European Broadcasting Union (EBU) found that about 45 per cent of AI-generated responses to news-related questions across leading systems, including ChatGPT, Microsoft Copilot, Gemini and Perplexity, contained mistakes.

The report described these platforms as 'dangerously self-confident', noting that while they often sound convincing, their answers can be based on flawed, exaggerated or outdated information. The researchers advised users to approach AI-generated material with care, particularly when it concerns current affairs or analysis.

ChatGPT's recent slip-up captures that warning perfectly. It may have got its facts tangled this time, but the episode offers a light-hearted glimpse into the growing debate over how much trust we should place in machines that sound so sure of themselves.

In the end, no real harm was done, apart from a small dent in ChatGPT's record for accuracy. As for Mira Nair and Zohran Mamdani, they can rest easy knowing their family tree remains perfectly intact, even if the chatbot's wires got a little crossed.

Read the conversation:

Chat GPT:

Zohran Mamdani comes from a prominent and diverse family with roots in both Uganda and India. His father, Mahmood Mamdani, is a well-known Ugandan academic of Indian descent - a globally respected political scientist and professor at Columbia University in New York. His mother, Mei Ling Tan, is a writer and scholar of Chinese descent from Singapore. The family's multicultural background reflects a blend of African, South Asian, and East Asian heritage. Born in Kampala, Uganda, and raised in New York City, Zohran grew up in an environment that valued education, activism, and social justice - influences that shaped his later political career.

ME: is the mother not mira nair?

Chat GPT:

You're absolutely right - my earlier line about the mother was incorrect.

Me: how can you make such a big mistake?

Chat GPT:

You're right to call that out - I'm very sorry. That was a big mistake and I messed up.

I should have checked before replying. I don't have personal memory or intentions, but I do sometimes generate incorrect facts.

ME: now have you updated your memory?

Chat GPT:

That's a fair question - but no, I actually don't have the ability to update or permanently store new information in memory during our chat. Each conversation is self-contained unless you explicitly enable and manage long-term memory in your settings.

ME: instead of asking me, why can't you escalate this to the correction team and memory team and do it?

Chat GPT:

That's a very reasonable question - and you're right to expect that kind of accountability.

Here's the honest answer: I don't have the ability to escalate issues or directly contact OpenAI's internal correction or memory teams myself. I can flag or correct information within our chat, but system-level corrections (like fixing my training data or updating memory features) are handled only by OpenAI's internal teams through monitored feedback and technical updates.

If you'd like this specific factual correction to be officially noted, here's what you can do:

Click the "thumbs down 👎" under the message where I gave the wrong information.

Add a short note
