Meta is stepping up efforts to stop AI ‘nudify’ apps from advertising on its platforms. These apps use artificial intelligence to generate explicit images of people without their consent, targeting celebrities, influencers, and ordinary people alike. Researchers recently uncovered thousands of ads promoting such apps on social media.
One app, Crush AI, has attracted particular attention: since last fall, it has run more than 8,000 ads on Meta’s platforms. The company behind it, Joy Timeline HK Limited, has repeatedly flouted Meta’s rules, continuing to push ads despite multiple removals. Meta has now filed a lawsuit against the Hong Kong-based firm.
In its official statement, Meta emphasized its commitment to stopping these ads. The company has developed new technology to spot AI ‘nudify’ apps quickly, detecting suspicious ads even when they contain no nudity. Meta also expanded its list of flagged terms, emojis, and phrases, helping it catch copycat ads faster.
Furthermore, Meta plans to collaborate with other tech companies, sharing information with app store owners to help block these harmful services. Outside experts and safety teams also advise Meta on improving ad enforcement. The crackdown follows growing concerns raised by researchers and journalists.
Besides AI ‘nudify’ apps, Meta faces challenges with other AI-driven scams. Some advertisers use deepfake videos of public figures to promote fraudulent schemes. Meta’s independent Oversight Board has criticized the company for weak rule enforcement and called for stronger action against these deceptive ads.
In conclusion, Meta’s lawsuit and new ad detection tools mark a clear stand against AI ‘nudify’ apps. These efforts aim to protect individuals from nonconsensual and harmful image manipulation. As Meta enhances its defenses, it hopes to reduce the spread of such invasive content on its platforms.