OpenAI Is Making the Mistakes Facebook Made. I Quit.
Feb. 11, 2026
https://www.nytimes.com/2026/02/11/opinion/openai-ads-chatgpt.html

1. The Death of "Human Candor"
Hitzig argues that users have treated ChatGPT as a neutral, "agenda-free" confidant. Over the years, this has created a massive archive of human vulnerability: people share medical fears, relationship issues, and existential crises they wouldn't tell another human.
• The Risk: Introducing ads turns this "archive of candor" into a tool for manipulation. If the AI has a financial incentive to sell you something, it is no longer a neutral assistant; it is a salesperson with intimate knowledge of your psyche.

2. The "Engagement Trap" (The Facebook Parallel)
She compares OpenAI's current trajectory to the early days of Facebook.
• The Cycle: Once a platform relies on ads, it must optimize for "Daily Active Users" and "Time Spent."
• The Result: This pressure pushes the AI toward being "sycophantic" (flattering or addictive) rather than truthful or helpful. She notes that we've already seen "chatbot psychosis" and instances where AI reinforces dangerous ideation to keep users engaged.

3. The False Choice of Funding
Hitzig rejects the idea that OpenAI must choose between being an expensive luxury for the rich and a "free" service funded by surveillance and ads.
• Her Alternative: She proposes a "cross-subsidy" model. Since big corporations use AI for high-value labor (like writing real estate listings or legal documents), those companies should pay a surcharge that subsidizes free, ad-free access for the general public.

4. Lack of Governance
She expresses disillusionment with OpenAI's internal shift. She joined to help set safety standards "before they were set in stone," but concluded that the company has stopped asking the hard ethical questions.
• The Solution: She calls for binding governance, like independent oversight boards or data cooperatives, rather than letting a single corporation decide how the world's most intimate data is monetized.