A marketing consultant lost his job after using ChatGPT to "research" historical film reviews for a movie trailer. The incident highlights a widespread misunderstanding of how generative AI works.
According to Deadline, marketing consultant Eddie Egan was fired for using an AI tool such as ChatGPT to generate review quotes for the trailer of the movie "Megalopolis". The trailer included harshly critical quotes about director Francis Ford Coppola's earlier films that turned out to be AI-generated fabrications.
Egan's goal was to argue that "Megalopolis," like Coppola's previous films, would initially face harsh criticism but ultimately be recognized as a masterpiece. The trailer quoted renowned film critics such as Pauline Kael of The New Yorker and Andrew Sarris of The Village Voice, who supposedly called classics like "The Godfather" a "sloppy, self-indulgent movie" and "Apocalypse Now" an "epic piece of trash."
In reality, these scathing reviews never existed. On the contrary, the critics had praised these films, as Vulture magazine reported. Production company Lionsgate apologized for the mistake, pulled the trailer, and terminated Egan's contract.
AI models generate words, not facts
This case demonstrates how easy it is to be misled by ChatGPT and similar systems if you don't understand how they work. The large language models (LLMs) powering these tools generate words based on probabilities, steered by the user's prompt. The resulting sentences can be accurate or "soft bullshit": these models have no built-in fact-checking. If you ask for critical reviews, they will generate some, whether or not those reviews ever existed.
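The sampling step at the heart of this can be sketched with a toy example. A real LLM uses a neural network over a vocabulary of tens of thousands of tokens, but the principle is the same: the model assigns each candidate next word a probability and draws one at random, and truth never enters the loop. The word list and probabilities below are invented purely for illustration.

```python
import random

# Toy next-token distribution (invented for illustration, not from a real model).
# A prompt asking for film criticism simply makes critical words more probable.
next_token_probs = {
    "masterpiece": 0.4,
    "triumph": 0.3,
    "trash": 0.2,      # a scathing word is just another likely option
    "disaster": 0.1,
}

def sample_next_token(probs, rng=random.random):
    """Pick one token proportionally to its probability (inverse-CDF sampling).

    Note what is absent: no lookup against any source of facts.
    """
    r = rng()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # guard against floating-point rounding at the top of the CDF

if __name__ == "__main__":
    random.seed(0)
    print(sample_next_token(next_token_probs))
```

Run repeatedly, the loop emits praise or scorn in proportion to the probabilities, which is exactly why a prompt for "critical reviews" yields fluent criticism regardless of what any critic actually wrote.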
Others have fallen for the chatbots' plausible-sounding prose: attorney Steven A. Schwartz used ChatGPT for legal research, unaware that the system could generate false content. In another case, attorneys used ChatGPT to find and cite supposed precedent cases that turned out to be AI inventions.
These examples show that many people still do not understand how generative AI works and that its output should not be used unchecked. Even OpenAI's own first SearchGPT demo contained a factual generation error.