
@ Tim Bouma
2025-05-18 12:03:03
Criminal Scams Flood Instagram and Facebook
BY JEFF HORWITZ AND ANGEL AU-YEUNG
The Wall Street Journal
May 17, 2025
Meta profits from ads for fake puppies, phony bargains
For almost two years, Edgar Guzman has been getting calls from irate customers who responded to his ads on Instagram and Facebook. Or at least they thought they were his ads.
Guzman’s company, called Half-Off Wholesale, offers home-improvement supplies and garden equipment in bulk out of a suburban Atlanta warehouse. The ads offer steep discounts that are mouthwatering to bargain hunters: $29 for a pallet of power tools, and mixed boxes of returned Amazon.com merchandise with a starting bid of just $1, for example.
But when people send their payments, the products never arrive. When they call Guzman to complain, he has to deliver the bad news: They’ve been swindled.
“What sucks is we have to break it to people that they’ve been scammed—we don’t even do online sales,” he said, noting that customer complaints about the rip-offs generate bad online reviews for his real business. “We keep reporting pages to Meta, but nothing ever happens.”
Guzman’s experience isn’t unusual.
Meta Platforms, the parent company of Facebook and Instagram, is increasingly a cornerstone of the internet fraud economy, according to regulators, banks and internal documents reviewed by The Wall Street Journal.
The company accounted for nearly half of all reported scams on Zelle for JPMorgan Chase between the summers of 2023 and 2024, according to a person familiar with the service. The peer-to-peer payment platform is owned by several banking giants, including JPMorgan, the country’s biggest bank, and Wells Fargo. Other banks that offer Zelle have experienced similarly high fraud claims originating on Meta, according to people familiar with the matter.
British and Australian regulators have found similar levels of fraud originating on Meta’s platforms. An internal analysis from 2022 described in company documents likewise found that 70% of newly active advertisers on the platform are promoting scams, illicit goods or “low quality” products.
Asia crime groups
In Guzman’s case, a search of Meta’s ad library this spring showed that more than 4,400 different ads listing the address of his business have run on Meta’s platforms over the past year. Guzman’s actual business was responsible for 15 of them.
Account information for the scam pages shows they are run out of China, Sri Lanka, Vietnam and the Philippines, but they use stolen pictures of Half-Off’s warehouse and list its address.
With more than three billion daily users on Meta’s platforms, fraud is hardly a new phenomenon for the company. But fed by the rise of cryptocurrencies, generative AI and vast overseas crime networks based out of Southeast Asia, the immensity of Meta’s scam problem is growing and has been regularly flagged by employees over the past several years.
Current and former employees say Meta is reluctant to add impediments for ad-buying clients who drove a 22% increase in its advertising business last year to over $160 billion. Even after users demonstrate a history of scamming, Meta balks at removing them.
One late 2024 document reviewed by the Journal shows that the company will allow advertisers to accrue between eight and 32 automated “strikes” for financial fraud before it bans their accounts. In instances where Meta employees personally escalate the problem, the limit can drop to between four and 16 strikes.
Adding to the problem is Marketplace, its online secondhand market, which in less than a decade since its launch has surpassed Craigslist to become the internet’s most heavily used repository of free classified ads. Its peer-to-peer model has also made Marketplace a popular hunting ground for scammers.
A Meta spokesman said the company is working to address “an epidemic of scams” that has grown in scale and complexity in recent years, driven by cross-border criminal networks.
“As this scam activity has become more persistent and sophisticated, so have our efforts,” he said. He added that Meta is testing the use of facial-recognition technology, adding warnings to users on its platforms and building partnerships with banks and tech companies “since this crime affects many industries and cuts across different parts of society.”
Meta has also argued in U.S. federal court that it bears no legal responsibility to address the issue.
“The alleged underenforcement of Meta’s monitoring policies cannot give rise to liability,” the company wrote last year in a motion to dismiss a lawsuit alleging negligence in removing cryptocurrency impersonation scams, adding that it “does not owe a duty to users” to address fraud on its platforms.
The spokesman said the company nonetheless makes efforts to do so.
“We of course don’t want scammy activity on our platform and nothing about this statement—a nuanced legal argument—should be read to suggest otherwise,” he said.
Fake giveaways
Meta chief Mark Zuckerberg’s recent move to scale back fact checking and policing of hate speech and other problem content has infuriated some and pleased others. But his company’s ongoing ineffectiveness at governing fraud on its platforms has drawn less scrutiny—even though it causes widespread and tangible problems for consumers and businesses.
Meta’s lax approach has helped fuel the professionalization of social-media fraud by international criminal networks in Southeast Asia, says Erin West, a recently retired Santa Clara County prosecutor who helped author a U.S. Institute of Peace report last year on such activity.
The report estimated that organized scamming operations, often called “pig butchering” groups, comprise hundreds of thousands of people, many of them trafficked after falling for fraudulent social-media employment ads. Kept in prisonlike compounds, the workers are forced to work under threat of “extreme forms of torture and abuse.”
West said the growth of this nightmarish industry stems directly from the inaction of Meta and, to a lesser extent, its social-media peers.
“If there’s anybody who could make a huge dent here, it’s Meta,” she said. “But there’s no hammer over their head.”
Losses to scams on Meta can range into the hundreds of thousands of dollars, but there’s seemingly no target too small.
Some of the frauds reflect substantial complexity and effort. In recent months, accounts featuring grandmotherly photos have been running Facebook and Instagram ads for a supposed giveaway of a McCormick & Co. spice rack and a selection of its products. Users are asked to provide only a nominal $9.99 shipping fee via a website featuring McCormick branding, a user survey and a game to win prizes.
Marah Johnson of Orange County, Calif., has regularly encountered and reported scams on Facebook for years. But she fell for this one. After she entered her credit card information on the McCormick-branded website, she was billed for a series of fraudulent purchases totaling hundreds of dollars.
“If their revenue is coming from fraud, what is their incentive to protect people?” asked Johnson, a 58-year-old artist and jewelry-maker. “It feels like Meta is helping the scammers out.”
The scam is sufficiently widespread that McCormick Brands warned its two million followers on Facebook and Instagram of it on Jan. 31, eliciting hundreds of comments from users who said they had been scammed.
“I feel so stupid,” wrote one.
“I got hit for $70 I can’t afford,” responded another.
“Why can’t Facebook police these scammers?” asked a third.
A McCormick spokesperson declined to comment.
Puppies for sale
One of the most common scams seen by banks involves the sale of pets, despite Meta’s rules banning “peer-to-peer sales or trade” of live animals outside of narrow contexts.
A recent search for “puppies” over several days yielded thousands of ads, most of which failed to state an affiliation with a known dog breeder or rescue organization, as Meta’s rules require.
Other red flags abounded. Many of the results displayed common hallmarks of scams, including stolen photos of specific pets and ads from sellers supposedly “near me” who were actually operating out of Cameroon.
JPMorgan has repeatedly raised concerns with Meta about its policing of scams, according to a person familiar with the matter. The person said the volume of scams attributable to Meta has shown some improvement in recent months.
During that period both Meta and Zelle began issuing warnings to their respective users about scam risks arising from peer-to-peer payments.
Documents reviewed by the Journal show that Meta has deprioritized scam enforcement in recent years, emphasizing the avoidance of erroneous ad takedowns over safety concerns. The company has also been cutting costs and shifting resources to other issues in the reshuffling.
The company abandoned plans for advertiser verification requirements similar to what it mandates for political ads, people familiar with the matter say, on the grounds that it worried about losing revenue from marketers unwilling or unable to pass identity checks.
Meta has periodically declared “site events,” company slang for internal emergencies of varying severity, for spikes in scams. But the company has generally treated scams as what documents describe as a “low severity” user experience issue rather than a significant threat to vulnerable people or a vector for organized international crime.
One 2022 document described “a lack of investment” in building automated tools to identify scams. The work’s low priority within Meta was codified in an update that year to its content-moderation processes, which put more resources into addressing human trafficking and content promoting suicide and self-harm, and less into anti-spam measures.
A mid-2022 analysis of the reprioritization found that it resulted in a 46% drop in likely scams reviewed within Facebook Groups.
The Meta spokesman said that the documents seen by the Journal are old and that the company has been ramping up its investments in anti-spam work since the second half of 2022. He said it took down more than two million accounts linked to organized fraud operations last year. Of the advertiser accounts it shut down, he said, nearly 70% were caught within a week of their creation.
Because of a safe harbor in U.S. telecommunications law known as Section 230, platforms are generally shielded from liability for user-created content. Whether those protections apply to Meta’s ads is now being tested by Andrew Forrest, an Australian mining billionaire and philanthropist who became frustrated in 2019 with Meta’s failure to remove fraudulent investment advertisements using his image and AI-cloned voice.
In a motion to dismiss the case last year, Meta argued that it is under no obligation to require investment advertisers to verify their identities or demonstrate that they are licensed to sell such products.
“Because Meta has no duty to protect users from third-party content on its platform, Plaintiff cannot state a negligence claim,” the company said.