Now that Nigerian scammers have found AI, what comes next?

With AI making its debut in one of the world’s hotbeds for cybercrime, WISDOM DEJI-FOLUTILE writes on the technology’s diverse possibilities for advancing fraud

Nearly a year ago, Leah, a middle-aged white American woman from the suburbs, met the love of her life on the social media platform Facebook. Christopher Rodriguez, a rich, friendly, handsome man from Spain, had reached out to her via direct message. He was in his 30s, living a wonderful life, and texted with the suavity and pizzazz of a Spaniard from your favourite telenovela. He called, often. His American accent, admittedly, was difficult to place. But it made sense to Leah that a man born in Barcelona and now living in Miami would mispronounce a few words.

“He friend-requested me on Facebook,” Leah says to Trilogy Media in a nearly two-hour-long web documentary released in early October.

“He was super sweet. Like, everything I ever wanted in a man. He was professing his love, or whatever. He’d ask me every day, ‘Have you had something to eat?’ It went relatively quick, from ‘Have you eaten anything?’ to ‘You’re the love of my life, my queen’”.

Now, of course, if Leah were fortunate enough to have even a single Nigerian bone in her body, she would have quickly identified Rodriguez’s pertinent enquiries about her feeding schedule and his constant professions of love as telltale signs that he was simply a fellow struggler texting from a small, unlit bathroom in a West African nation.

But she’s not Nigerian. And neither was Rodriguez, at least at the time.

Perhaps Leah’s amorous history wouldn’t let her see beyond Rodriguez’s affirmations. A hopeless romantic, she had been married three times, each relationship ending quite sub-optimally.

Rodriguez, on the other hand, was a young, day-trading businessman who had promised her marriage. She fell for him completely, even giving up her job to move to Miami, Florida, where Rodriguez was supposed to be based.

As most people reading this already know, Christopher Rodriguez does not exist. At least, not the one who messaged Leah on Facebook. All the photos used as Rodriguez were actually sourced from the social media page of a man called Eric Powell, who seems used to this treatment, because he boldly states in his Instagram bio, “This is my only IG account so don’t get scammed…if someone messages you using my pic, *it’s *not *me!”

“I fell in love with him. My heart is f*cking broken,” Leah said as she broke down in tears in the documentary, currently available for viewing on YouTube.

Rodriguez’s fraud was propagated through a cyber-scamming technique almost as old as social media itself. Coined after the 2010 documentary Catfish, catfishing (sometimes rendered “catphishing”) is a common online scam in which cybercriminals use fake personas to build trust with their victims and gain access to personal information and money.

In Leah’s case, she had been operating several bank accounts for Rodriguez, who, before being exposed as a fraudster from Lagos, Nigeria, had claimed to be a trader working in cryptocurrency. He told her he needed all the accounts for work. Luckily, she didn’t seem to incur heavy financial loss in her unfortunate run-in with him; she was essentially used as a money mule, helping him harvest money from his other victims.

Nevertheless, she had opened several bank accounts for him in her name. He eventually made her deposit $7,500 into an Individual Retirement Account (IRA) in her name. He had her Social Security Number, her bank details, everything. Through all this, she followed along devotedly, blinded by love. In fact, after a few early doubts that he swiftly dispelled, she never questioned him again.

Not even when he sent her this video as proof of himself.

[Image: “Rodriguez” sent Leah a one-minute-long face-swap video. The full video can be seen in the YouTube documentary.]

Although, in this particular case, our heavily accented scammer didn’t use voice-changing software or anything similar (he simply faked a weak American elocution), the deepfake above was enough for him to get away with catphishing.

The video in question, which proved enough to convince a carefully targeted woman searching for love, could have been made with any of a handful of free, accessible face-swapping applications that use AI to replace or modify faces in images or videos. Face swaps, admittedly far easier to make than full deepfakes, are nevertheless products of computer-vision techniques that rely on deep-learning Generative Adversarial Networks (GANs) to create AI-based results.

This means Nigerian scammers are already taking advantage of Artificial Intelligence, and that could get scarier very quickly.

Rising cases of AI use in cybercrime

In 2020, it was estimated that cybercrime would cost the world at least $10.5 trillion annually by 2025. In this respect, the rise of AI resembles a hydra of fraud: a many-headed beast that grows a new threat wherever one is curtailed. Deepfakes, AI-assisted hacking and password cracking are but a few of the ways online crime is getting a lavish upgrade.

In 2019, cybercriminals used AI-based software to impersonate an executive through voice mimicry, demanding a transfer of $243,000.

In April 2023, the world was also gripped by the story of Jennifer DeStefano, an Arizona mom who nearly fell victim to a deepfake spoofing scam when she heard her 15-year-old daughter, Brie, crying for help over a phone call. “I never doubted for one second that it was her,” DeStefano recounted. A male voice had then seemingly taken the phone, threatened to kill Brie, and demanded a $1,000,000 “ransom”, which was later brought down to $50,000. Of course, “Brie” was never in danger and was, in fact, on a skiing trip, safe and sound.

Since an AI-assisted porn video featuring a fake version of Israeli actress Gal Gadot hit the screens of netizens in 2017, deepfake porn has left its humble beginnings behind and established itself as a rudimentary feature of modern misuse of emerging technology. The exponential growth in the accessibility of the technologies that create such videos has now made them a formidable concern. The risk of blackmail, impersonation and perverted crimes soars alongside the number of videos created. According to a 2023 study, at least 244,635 deepfake porn videos have been uploaded over the last seven years to 35 websites set up to host such content. In 2022 alone, there were 73,000 uploads.

That number jumped by 54 per cent in 2023, and the analysis forecasts that, by year’s end, more videos will have been produced in 2023 than in every other year combined.

These are but a few examples of Artificial Intelligence being employed for malicious ends. From the use of ChatGPT to write viruses, to intellectual property theft and even the digital resurrection of the dead, the limitations that once constrained cybercrime could be a thing of the past in the age of AI.

It is not untrue that these technologies have legitimate use cases. For instance, sometime in November, an application called Lipdub became available on Apple’s App Store. Made by a small AI startup called Captions, Lipdub allows a user to “speak any language in seconds” using AI. How? It trains itself on a short selfie video of the user talking, uses speech recognition to understand what is being said, clones the user’s voice, and translates the words in the video into at least 28 different languages, all lip-synced to perfection. Clearly, this could be groundbreaking technology for bridging multicultural worlds, fostering communication and relationships across language barriers. At the same time, however, one can’t help but be reminded that a video of our earlier-mentioned Rodriguez, adapted to make him speak a little Spanish to convince Leah, is rendered completely possible by this technology.

Solutions

Watching the popularity of generative AI models and image generators that power face swaps and de-ageing filters, it is easy to predict that artificial intelligence is going nowhere. And the bad news is that AI is accessible to everybody. Many of the tools required to create a deepfake are free or inexpensive and require very little manpower to operate. Also, when training an AI model, data is everything, and data is everywhere. Videos, photos and sensitive information on location and identity are all swirling in the dataverse, along with malicious actors seeking to innovate and remain relevant. The good news? The same as the bad: AI is accessible to everybody, including law enforcement and government agencies.

Our correspondent reached out to the National Centre for Artificial Intelligence and Robotics to find out what Nigeria’s AI body could be doing to prepare the country for the imminent flood of Artificial Intelligence into every sphere of society. NCAIR, perhaps in collaboration with other agencies such as the National Information Technology Development Agency (NITDA) and the Ministry of Digital Economy, is best positioned to share actionable objectives on sensitisation efforts, research, and policy advisory concerning the threat of miseducated, malicious or malevolent parties harnessing AI, as well as possible solutions.

The agency asked to be given time to respond, but further efforts to reach them proved abortive.

However, raising awareness and paying closer attention to privacy are the most commonly cited solutions.

In the case of Brie DeStefano, for instance, Dan Mayo, the assistant special agent at the Federal Bureau of Investigation’s Phoenix Office, said keeping information private is important.

“If you have it [your info] public, you’re allowing yourself to be scammed by people like this. They’re going to be looking for public profiles that have as much information as possible on you, and when they get ahold of that, they’re going to dig into you,” he said.

The same could be true for individuals on dating sites with public profiles indicating their interests, especially the details they share about whom they would consider a worthy suitor.

However, just how far can this help? DeStefano, for instance, had private social media profiles but was unfortunately still mimicked, due to large samples of her voice being available from public interviews she had done for sports and school.

Leah also had no dating profiles.

Seemingly, a lasting solution in the form of awareness campaigns to curtail the impact of artificial intelligence in cyber fraud might prove ephemeral, or even illusory. Fighting fire with fire seems the more contemporary approach.

A 2023 study by BlackBerry found that 51 per cent of IT decision-makers believe there will be a successful cyberattack credited to ChatGPT within the year.

Ironically, ChatGPT itself suffered a cyberattack months later, albeit a surface-level incident that, although patched within days, briefly exposed users’ chat histories.

The 2023 BlackBerry study, however, goes on to say that “the majority (82%) of global IT decision-makers plan to invest in AI-driven cybersecurity in the next two years and almost half (48%) plan to invest before the end of 2023.”

