The original version of this article was published by Marcus by Goldman Sachs UK here.
Since the release of highly accessible generative AI tools such as ChatGPT, interest and excitement around artificial intelligence's potential has surged. With the space drawing increased news coverage, our Fraud Team has put together an overview of generative AI and the fraud risks it may pose.
Whereas traditional AI models primarily focus on recognizing patterns in existing data, generative AI can produce new data based on the data it was trained on and the inputs it receives. Its outputs aren't limited to analytical answers: Generative AI can create novel content in the form of text, imagery, video and more.
Interest in generative AI has exploded over the last 12 months, and its potential to change our world is staggering. Recent research published by our Goldman Sachs Research team suggests it could drive a 7% increase in global GDP, amounting to nearly $7 trillion over a decade. The research suggests it could play a significant role in breaking down communication barriers and driving productivity growth, bringing about sweeping changes in the global economy.
But as exciting as this new technology may be, fraudsters could also use generative AI to their advantage. As with the internet, alongside the many positive uses of generative AI there will be some users who act with malicious intent.
Our Fraud Team outlined some potential risks to be aware of and a few tips on staying safe online.
More convincing phishing attempts: Fraudsters could use generative AI to produce more convincing and more persuasive phishing emails and messages. The technology can tailor content and correct the typos and grammatical errors that often give phishing away, which could dramatically reduce our ability to notice when something is off. Generative AI can also learn to mimic the style of a specific brand, making it easier for fraudsters to impersonate a company and trick people into sharing their data. You can read more about phishing scams here.
More frequent attacks: Generative AI can enable fraudsters to automate huge chunks of their work, letting them target more victims, faster. By removing language barriers, the technology can also help cybercriminals extend their global reach.
Deepfakes: Across voice, image and video, AI has become better at mimicking real humans. Scams that impersonate family, friends or colleagues to extort money require fraudsters to invest more time researching and formulating their approach, so they are likely to be low in volume, but they can be highly damaging when they do occur. It is easy to see how deepfake content could cause someone to panic and transfer money into the wrong hands. Generally, it's a good idea not to answer calls from numbers you don't recognize as a way to avoid exposing yourself to scams involving deepfakes.
Many of the scams we are already aware of (e.g., cloned websites, romance scams and charity relief scams) will continue to cause problems. But generative AI can make them more convincing and more prevalent.
This means that many of the commonsense tips for staying safe not only remain relevant but become even more important.
However, as traditional markers of fraud start to disappear (e.g., spelling errors in phishing emails), it becomes even more important to recognize when something is not right: trust your instincts. Strong controls, passwords and security settings on social media also become a bigger priority, as oversharing, or sharing with the wrong people, can provide content that can be analyzed or mimicked.
Find out more about steps you can take to protect yourself from fraud here.
This article is for informational purposes only and is not a substitute for individualized professional advice. Articles on this website were commissioned and approved by Marcus by Goldman Sachs®, but may not reflect the institutional opinions of The Goldman Sachs Group, Inc., Goldman Sachs Bank USA, Goldman Sachs & Co. LLC or any of their affiliates, subsidiaries or divisions.