TikTok reduces use of AI-generated video captions following errors
TikTok has scaled back its AI-generated video captions after the feature produced several major mistakes. The company had introduced an experimental tool called “AI overviews” that automatically summarized video content for some users in the US and the Philippines. However, the AI-generated descriptions often contained bizarre and inaccurate information, prompting TikTok to limit the feature.
The AI overviews appeared beneath videos to provide summaries or additional context. While some descriptions were accurate, many were wildly off the mark. For example, a video featuring dancer Charli D’Amelio was incorrectly described as “a collection of various blueberries with different toppings.” Other videos of celebrities such as Shakira and Olivia Rodrigo also received strange and misleading captions.
Users shared screenshots of these errors widely on social media platforms, including Reddit and TikTok itself. One particularly odd summary described a ballroom dance performance by Reagan and Juli To as “a person repeatedly striking their head with a rubber chicken.” Other captions falsely mentioned people hitting themselves with hammers, despite no such actions occurring in the videos.
Changes to the AI overview feature
In response to the backlash, TikTok announced that the AI overviews would now be limited to suggesting products similar to those shown in videos, rather than attempting to summarize the entire content. The company has also identified the causes of the errors but has not provided detailed explanations.
TikTok allowed users to report and give feedback on the AI-generated captions, but this did not prevent widespread criticism. Some users speculated that the strange summaries were intentional jokes, while others expressed frustration at the inaccuracies.
Context of AI-generated content challenges
TikTok’s experience reflects broader challenges faced by tech companies deploying generative AI tools. Similar issues have occurred with AI features from Google and Apple, which also produced false or absurd summaries. These “hallucinations”, instances in which AI generates incorrect or fabricated information, remain a significant hurdle despite ongoing improvements in AI technology.
OpenAI, the maker of ChatGPT, has acknowledged quirks in its systems, and AI errors have caused problems in legal and government settings as well. As companies continue to integrate AI into their platforms, balancing innovation with accuracy and user trust remains a key concern.
