Google has been bolstering its efforts to prevent advertisement scams through upgrades to its AI tools.
Google revealed that last year it removed over 5.1 billion ads, restricted a further 9.1 billion, and suspended 39.2 million advertiser accounts, aided by the upgrades it has applied to its large language models.
The Big Tech firm made 50 upgrades to its large language models, aimed at identifying bad actors who use advertisements as a vehicle to target and scam consumers.
Developments in abusive tactics across both AI and fraud, as well as global events, have shaped the rise in ad scams, according to Google, which says it has had to adapt with constant agility to this new environment.
Of the 9.1 billion ads Google restricted last year, 268.3 million related to financial services. Other leading restriction categories included legal requirements (428.8 million), the most prevalent; copyright (115.1 million); and gambling and games (108.9 million).
Whilst these results pertain only to the US, scam ads often serve as a gateway to authorised push payment (APP) fraud, which has taken on much more significance in the UK, where it has also risen over the past several years.
According to Google’s data, 193.7 million financial services ads were blocked or removed last year, the fifth-highest total amongst the categories blocked or removed by Google’s AI solutions.
The tech giant credits much of this prevention to the upgrades to its large language models, which uncover ad scams in real time and act at a much faster and more efficient rate than manual processes.
Google revealed the importance of AI as an anti-scam tool in its 2024 Ads Safety Report, detailing how advancements in large language models are fighting “fraud at scale”.
The report said: “Large language models have not only advanced policy enforcement, but they have also improved our ability to be more proactive than ever in preventing abuse. These AI-powered tools accelerate complex investigations, enhancing our ability to uncover and prevent networks of bad actors and repeat offenders.
“These preventative efforts kept billions of policy-violating ads from ever showing to a consumer, while simultaneously ensuring that legitimate businesses can show ads to potential customers quickly. One way we do this is by fighting advertiser fraud at scale, using signals like business impersonation and illegitimate payment details as early indicators of potential consumer harm.
“Throughout 2024, we continued to invest in stopping this fraud early in the account set up process, enabling us to stop countless harmful ads before they could run. To put this into perspective: we suspended over 39.2 million accounts in total, the vast majority of which were suspended before they ever served an ad.”
Can social media platforms follow suit?
As previously mentioned, APP fraud is becoming one of the fastest-growing forms of fraud in both the US and the UK, with many fraudsters posting ads for fraudulent goods or services on social media platforms.
A recent report by Revolut highlighted that social media platforms have become fraudsters’ preferred breeding ground for fraudulent ads, finding that 54% of all fraud cases reported to the UK digital bank originated on Meta-owned platforms such as Facebook, Instagram and WhatsApp.
This has become a common concern amongst financial institutions like Revolut, which have been calling on UK regulators to place a portion of the liability on the social media platforms where these fraudulent activities take place.
Financial service providers intensified these calls last October, when the Payment Systems Regulator introduced new rules around APP fraud under which the sending and receiving payment firms split reimbursement of the lost amount 50/50, up to a cap of £85,000.
While social media platforms have yet to be brought within the scope of the UK’s APP fraud rules, Google’s 2024 Ads Safety Report highlights the benefits AI can bring to stopping potential scam ads before they reach their targets.