Social media platform X has announced that creators who post AI-generated videos depicting armed conflicts without clearly disclosing their artificial origin will face a 90-day suspension from the company’s revenue-sharing programme.
The policy update, revealed by Nikita Bier, head of product at the Elon Musk-owned platform, comes amid heightened concerns over misinformation during the ongoing conflict involving the United States, Israel and Iran.
“During times of war, it is critical that people have access to authentic information on the ground,” Bier said, warning that advances in artificial intelligence have made it increasingly easy to produce highly realistic but misleading content.
Under the revised rules, creators who repeatedly violate the disclosure requirement risk permanent removal from X’s Creator Revenue Sharing programme, which distributes a portion of advertising revenue to eligible users based on engagement with their posts.
X said it would continue refining its policies and platform tools to ensure users can rely on credible information during sensitive global events.
The move marks a significant shift for the platform, which has faced criticism over its approach to content moderation since Elon Musk completed his $44 billion acquisition of Twitter - later rebranded as X - in October 2022. Since then, the company has rolled back several misinformation policies, arguing that strict moderation amounted to censorship.
According to the company, enforcement of the new AI disclosure rules will rely on its Community Notes system - a crowd-sourced fact-checking feature - along with metadata and other technical markers that help identify AI-generated material.