AI Scams and Threats – How to Stay Protected

by kdizzle
[Image: A robot typing on a laptop with “Congratulations You Won!” written on the screen and a “Claim Prize” button below it.]

Well, it’s official, folks – we are living in an advanced AI world. And it happened so fast … almost at the speed of artificial intelligence!

We say “advanced” because AI has actually been making its way into our lives bit by bit (no pun intended). Chatting it up with Siri, Alexa or OK Google? That’s AI. Taking advantage of the suggested autofill in your Gmail messages? Also AI. But it’s the emergence of recently released AI tools that has made an abrupt and significant change to the way we live and work. We’re looking at you, ChatGPT.

These AI advancements can do incredible things, like generating content for articles, presentations and even movies. AI can also write computer code and create art and music. But like any new technology, AI has a darker side.

Now, before you smash your electronic devices and run off to an underground bunker, rest assured that there are some things you can do to keep the metaphorical “Skynet” from making you a technology victim.

AI-based Voice Fraud

Imagine getting a phone call from a loved one saying they’ve been kidnapped and need you to pay a ransom that’s about as much as your life savings. You panic and agree to do whatever it takes to free your loved one from harm. There’s just one problem … even though the voice on the other end of the line sounds undeniably like your beloved family member, none of this is true. In fact, the voice you’re hearing isn’t even human!

Scammers can now use AI to clone people’s voices and trick unsuspecting victims out of their money. They create what’s known as a “deepfake”: a fabricated video or audio clip that uses a person’s voice or likeness to simulate situations or conversations. With a few online tools and only a few seconds of a person’s actual voice recording, cybercriminals can generate a voice replica and make it “speak” whatever is typed into a computer program. Then they fabricate emotionally stressful situations, hoping to scare you out of your wits so you send them whatever they’re asking for without question.

Some cybercriminals take it a step further with what’s known as “number spoofing.” So not only does the caller’s voice sound like someone you know, but the number calling you also appears to be that person’s number.

What You Can Do:

If you ever receive a call like this, the most important thing is to verify the caller’s identity before providing any information or taking any action. Hang up and call the person back directly. Number spoofing only works on incoming calls, not outgoing ones. If your loved one picks up and has no idea what’s going on, you’ll know you just avoided a terrifying AI scam. Phew!

Another way to protect yourself is by creating a secret family code word or a “challenge question” – something only your loved one would know. If someone claiming to be a loved one calls asking for financial information, ask them to reveal the code word. If the caller can’t, you may be a scam target.

Also, who needs another good reason not to answer phone calls from numbers you don’t recognize or that are identified as potential spam? Let’s not give criminals the opportunity to get a great recording of your voice as you sit in dead air repeating, “Hello? Hello? Is anyone there?”

Phishing for a Scam

Remember back when scams were simple? The son of an unnamed third-world prince would just send a sketchy, misspelled email promising you a fortune in return for your help. Fast forward to 2023, and fraudsters are using AI chatbots to craft sophisticated email phishing scams. And since AI bots don’t make spelling, grammar or vocabulary errors, those dead giveaways are becoming a thing of the past.

These newer email scams also appear more genuine and professional. They’re hyper-personalized, meaning they use data collected about the target (aka you!) to appear even more legitimate. Such an email might look like a notification from your bank telling you there’s an issue with your account, or a message from your favorite online retailer saying a product you ordered couldn’t be delivered and asking you to verify your information.

What You Can Do:

So, what can you do to keep your identity and information safe? Well, here’s the deal: when it comes to these sophisticated AI email phishing scams, you need to be smarter than the bot. Start by giving suspicious emails the side-eye, because if it looks fishy, it probably is. Check out these tips to avoid getting phished, even when the spelling, grammar and vocabulary are perfect!

If you’re unsure whether a specific email is legitimate, a quick internet search can usually tell you if a company is dealing with email phishing scams. For example, if you receive an email that appears to be from Apple asking you to click a link and submit account information, simply search “Apple email scam” in your browser to learn about known scams. You can also contact a company directly to ask if they did, in fact, send the email in question. Just don’t use any links in the suspicious email to find their contact information. Instead, look up the company online and visit their website directly.

AI Romance Scams

Another way scammers are using AI is to enhance their ability to deceive online dating targets. Setting up a dating profile with someone else’s photos to deceive a target is known as catfishing. Scammers pose as potential romantic partners, using charming and persuasive tactics to manipulate their victims’ emotions and trust, and ultimately drain their bank accounts.

With new AI technologies like voice synthesizers and chatbots, scammers can trick you more easily than ever. They can sound like a real person on a phone call, showering victims with flattery. They use AI tools to write misleading messages and send believable voicemails, videos and other content to convince you they’re someone else, using emotional manipulation to create a false sense of trust and even love. But don’t be fooled … love’s got nothing to do with it. Once that trust is established, they start asking for money or personal, sensitive information. They’ll also come up with any excuse in the book to avoid meeting you in person.

What You Can Do:

To protect yourself from these high-tech heartbreakers, stay alert! Be wary of unsolicited messages and think twice before sharing personal information or sending money to someone you’ve never met.

Look out for inconsistencies and impersonal, scripted conversations. And if someone seems too perfect or too eager to establish a relationship, take a step back and question their motives. Then, perform a reverse image search. This easy trick, available on most search engines, lets you track down the original source of an image if you suspect you’re being catfished. On most search engines, you can upload the image or paste its URL and hit “search.” If the picture was taken from a website or online platform, you’ll know right away.

With a little bit of detective work, you can stay one step ahead of these romance scams. Love may be blind, but you shouldn’t be blind to the possibility that you’ve been targeted by an AI-powered scammer!


Remember, prevention is key. Educate yourself and your loved ones about the signs of AI-generated fraud. Be cautious when sharing personal information and always verify the authenticity of requests, even if they seem legitimate. You will never regret being TOO careful.

For even more steps you can take to keep yourself and your family safe, check out Moneytree’s Staying Secure page.


The content contained in this article is for general informational purposes only. It is not intended as a substitute for professional advice, and you should consult with your own qualified professional advisor before making any decisions. All liability with respect to actions taken or not taken in reliance on the contents of this article is hereby expressly disclaimed. For more information, please see our Terms of Use.
