
Cyberattacks: 2023’s Greatest Risk to Financial Services  

Miguel Traquina, Chief Information Officer at iProov

New year, same big problem. Without doubt, cyberattacks have posed and continue to pose the single biggest threat to the UK’s financial services industry

by Miguel Traquina, Chief Information Officer at iProov 

Three in four industry execs in the UK deem a cyberattack to be their highest risk factor and, as the economy enters choppier waters, this threat is growing: the share expecting a high-impact cyberattack in the next three years rose by 26% in the second half of 2022 versus the first.

2022 has been another year of seismic change in the cybercrime space. Types of attacks are evolving rapidly, and consumer awareness is growing. Now, more than ever, we’re starting to see huge end-user demand for greater online protection from identity theft and other online threats.  

Public and private sector organisations around the world are responding, with the goal of increasing digital trust and enabling their customers to prove they are who they claim to be, securely and easily.

The pace of advancements in digital identity verification will only accelerate in the coming year, especially in a high-value and highly sensitive industry like financial services, with more innovation and regulation on the horizon. As we welcome 2023, here are my top predictions for the year ahead.

Biometrics + device will overtake password + device for 2FA  

Calling out the ineffectiveness of passwords as an authentication method isn’t new, but what will be new next year is that this stubborn, outdated mode of authentication will finally be overtaken by biometrics in two-factor and multi-factor authentication (2FA and MFA) use cases.

Over the course of 2023, password + device will be replaced by biometric + device. 

The uptake of MFA has been steadily rising in recent years, especially since the enactment of PSD2 for electronic payment services in Europe. While passwords are technically compliant as a strong authentication factor, they and other knowledge-based techniques leave a lot to be desired when it comes to security and user-friendliness. Biometrics and other inherence-based security hit the perfect balance between providing the necessary protection to make 2FA and MFA truly secure while also delivering an effortless user experience.  
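To make the distinction concrete, here is a minimal, purely illustrative sketch of what a biometric + device check might look like. The verify_* helpers, score threshold and data shapes below are hypothetical stand-ins, not any specific vendor’s API:

```python
# Minimal sketch of an inherence (biometric) + possession (device) 2FA check.
# The helpers below are hypothetical stand-ins for a real face-verification
# engine and a real device-binding service; values are illustrative only.

from dataclasses import dataclass

@dataclass
class LoginAttempt:
    user_id: str
    device_id: str
    face_match_score: float   # similarity score from a face-verification engine
    liveness_passed: bool     # result of a presentation-attack / liveness check

def verify_device(user_id: str, device_id: str, enrolled_devices: dict) -> bool:
    """Possession factor: is this device enrolled (bound) to the user?"""
    return device_id in enrolled_devices.get(user_id, set())

def verify_biometric(attempt: LoginAttempt, match_threshold: float = 0.9) -> bool:
    """Inherence factor: face match above threshold AND liveness confirmed."""
    return attempt.liveness_passed and attempt.face_match_score >= match_threshold

def authenticate(attempt: LoginAttempt, enrolled_devices: dict) -> bool:
    # Both independent factors must pass for strong customer authentication.
    return (verify_device(attempt.user_id, attempt.device_id, enrolled_devices)
            and verify_biometric(attempt))

if __name__ == "__main__":
    enrolled = {"alice": {"device-123"}}
    attempt = LoginAttempt("alice", "device-123",
                           face_match_score=0.97, liveness_passed=True)
    print(authenticate(attempt, enrolled))  # True
```

The point of the sketch is simply that the second factor is something the user is, verified with liveness, rather than something the user knows.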

Liveness checks become mandatory for online identity verification in financial services 

Speaking of regulation, 2023 will also see the European Banking Authority mandate that all regulated financial service providers in the EU complete biometric liveness checks when remotely enrolling customers. These new guidelines will help reduce new account fraud, identity theft, and money laundering. What we’ll also see is consumers feeling more comfortable with, and demanding more, biometric verification at other points of their user journey.

As this becomes mandatory for financial services in Europe, attackers will turn their attention elsewhere – which will require the UK and other regions to follow suit. 

Synthetic identity fraud will break records 

Synthetic identity fraud exploded in many regions in 2022, even becoming an industry in its own right. That is set to continue in 2023, with Aite Group estimating $2.43bn of losses from synthetic identity fraud this year. Nearly every organisation is at risk of onboarding a fake person and the implications that come with that: financial loss, data theft, regulatory penalties, and more. Organisations throughout the financial services world will need to ramp up their online security to detect synthetic identity attacks.

Deepfakes become ubiquitous as the next generation of digital attacks 

The technology to create convincing deepfakes is now so readily available that even the novice cyberattacker can do serious damage.  

Any financial services organisation that isn’t protecting its systems against deepfakes will need to do so as a matter of urgency. More sophisticated bad actors have already moved on to advanced methods, and in 2023 we’ll see a proliferation of face swaps and 3-D deepfakes being used to find security vulnerabilities and bypass the protocols of organisations around the world. 

Privacy-enhancing government-backed digital identity programs will pick up pace – and they’ll be interoperable

Consumers globally are realising they don’t want to give their addresses and other personal data to every website or car rental firm or door-person outside a bar. As demand for secure identity services grows, more state and federal governments will begin to roll out interoperable digital ID programs that use verifiable credentials to enable citizens to cryptographically confirm details about themselves. 
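The core mechanism behind such verifiable credentials is a cryptographically signed claim: an issuer signs an attribute, and any verifier can check the signature without seeing the holder’s full identity record. Below is a minimal sketch of that idea using Ed25519 keys from the Python cryptography library; real schemes (such as W3C Verifiable Credentials) add schemas, revocation and selective disclosure on top, so treat this as an illustration of the principle only:

```python
# Minimal sketch of a signed claim, the building block of verifiable credentials.
# The "did:example" identifier and the over_18 attribute are illustrative only.

import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Issuer (e.g. a government identity provider) signs a claim about the citizen.
issuer_key = Ed25519PrivateKey.generate()
claim = json.dumps({"subject": "did:example:alice", "over_18": True}).encode()
signature = issuer_key.sign(claim)

# Verifier (e.g. the door-person outside the bar) checks the claim against the
# issuer's public key, without ever seeing the holder's address or date of birth.
try:
    issuer_key.public_key().verify(signature, claim)
    print("Claim verified: holder is over 18")
except InvalidSignature:
    print("Claim rejected")
```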

Device spoofing will grow exponentially  

The increase in reliance on devices as a security factor has attracted the attention of cybercriminals, who are exploiting vulnerabilities for theft and other harm. In 2023, we will see criminals grow more sophisticated at spoofing device metadata to conceal their attacks (for example, a desktop made to appear like a mobile device) in order to circumvent enterprise security protocols. In response, organisations – especially those that rely on the mobile web – will recognise the limitations of once-trusted device data and move verification services to the cloud.


Can ChatGPT help fight cybercrime?

OpenAI’s ChatGPT has taken the world by storm, with its sophisticated large language model offering seemingly endless possibilities. People have put it to work in hugely creative ways, from the harmless scripting of stand-up comedy to less benign use cases: AI-generated essays that pass university-level examinations and copy that assists the spread of misinformation.

Iain Swaine, Head of Cyber Strategy EMEA at BioCatch


GPTs (Generative Pre-trained Transformers) are deep learning models that generate conversational text. While many organisations are exploring how such generative AI can assist in tasks such as marketing communications or customer service chatbots, others are increasingly questioning its appropriateness. For example, JP Morgan has recently restricted its employees’ use of ChatGPT over accuracy concerns and fears it could compromise data protection and security.

As with all new technologies, essential questions are being raised, not least its potential to enable fraud, as well as the power it may have to fight back as a fraud prevention tool. Just as brands may use this next-gen technology to automate human-like communication with customers, cybercriminals can adopt it as a formidable tool for streamlining convincing frauds. Researchers recently discovered hackers are even using ChatGPT to generate malware code.

From malware attacks to phishing scams, chatbots could power a new wave of scams, hacks and identity thefts. Gone are the days of poorly written phishing emails. Now automated conversational technologies can be trained to mimic individual speech patterns and even imitate writing style. As such, criminals can use these algorithms to create conversations that appear to be legitimate but which mask fraud or money laundering activities.

Whether sending convincing phishing emails or seeking to impersonate a user and gain access to their accounts or access sensitive information, fraudsters have been quick to capitalise on conversational AI. A criminal could use a GPT to generate conversations that appear to be discussing legitimate business activities but which are intended to conceal the transfer of funds. As a result, it is more difficult for financial institutions and other entities to detect patterns of money laundering activities when they are hidden in a conversation generated by a GPT.

Using GPT to fight back against fraud

But it is not all bad news. Firstly, ChatGPT is designed to prevent misuse by bad actors through several security measures, including data encryption, authentication, authorisation, and access control. Additionally, ChatGPT uses machine-learning algorithms to detect and block malicious activity. The system also has built-in safeguards against malicious bots, making it much harder for bad actors to use it for nefarious purposes.

In fact, technologies such as ChatGPT can actively help fight back against fraud.

Take business email compromise (BEC) fraud. Here a cybercriminal compromises a legitimate business email account, often through social engineering or phishing, and uses it to conduct unauthorised financial transactions or to gain access to confidential information. It is often used to target companies with large sums of money and can involve the theft of funds, sensitive data, or both. It can also be used to impersonate a trusted business partner and solicit payments or sensitive information.

As a natural language processing (NLP) tool, ChatGPT can analyse emails for suspicious language patterns and identify anomalies that may signal fraud. For example, it can compare email text to past communications sent by the same user to determine if the language used is consistent. While GPT will form an essential part of anti-fraud measures, it will be a small part of a much bigger toolbox.
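As a rough illustration of the “compare against past communications” idea, the sketch below uses a simple TF-IDF similarity baseline rather than a large language model; the sample emails and the flagging threshold are invented purely for demonstration:

```python
# Illustrative sketch: flag an incoming email whose wording deviates sharply
# from a sender's past messages. A TF-IDF / cosine-similarity baseline stands
# in for the richer language analysis an LLM-based tool would perform.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_emails = [
    "Hi team, please find attached the Q3 invoice for review.",
    "Thanks for the update, I'll approve the payment run tomorrow.",
    "Reminder: monthly reconciliation is due on Friday.",
]
incoming = "URGENT: wire 50,000 EUR to this new beneficiary account immediately."

vectorizer = TfidfVectorizer().fit(past_emails + [incoming])
past_vecs = vectorizer.transform(past_emails)
new_vec = vectorizer.transform([incoming])

# Compare the new message against the sender's history; a low maximum
# similarity suggests the wording is out of character and worth reviewing.
max_similarity = cosine_similarity(new_vec, past_vecs).max()
if max_similarity < 0.2:   # threshold chosen for illustration only
    print(f"Flag for review (similarity={max_similarity:.2f})")
```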

New technologies such as GPT mean that financial institutions will have to strengthen fraud detection and prevention systems and utilise biometrics and other advanced authentication methods to verify the identity of customers and reduce the risk of fraud. For example, financial organisations already use powerful behavioural intelligence analysis technologies to analyse digital behaviour to distinguish between genuine users and criminals.

In a post-ChatGPT world, behavioural intelligence will continue to play a vital role in detecting fraud. By analysing user behaviour, such as typing speed, keystrokes, mouse movements, and other digital behaviours, behavioural intelligence can spot anomalies that indicate activity is not generated or controlled by a real human. It is already being used very successfully to spot robotic activity that combines scripted behaviour with human controllers.

For example, a system can detect if a different user is attempting to use the same account or if someone is attempting to use a stolen account. Behavioural intelligence can also be used to detect suspicious activity, such as abnormally high or low usage or sudden changes in a user’s behaviour.
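A toy example of one such behavioural signal, assuming inter-keystroke timing is the only feature available (real behavioural-intelligence products combine many signals, from mouse movement to device handling, and feed them into a broader risk score rather than making a verdict on one measure):

```python
# Illustrative sketch: flag a session whose inter-keystroke timing looks
# nothing like the account's usual typing rhythm. Values are invented.

from statistics import mean, stdev

# Historical inter-keystroke intervals (seconds) observed for this account.
baseline_intervals = [0.18, 0.22, 0.20, 0.25, 0.19, 0.21, 0.23, 0.20]

# Intervals observed in the current session: suspiciously fast and uniform,
# as scripted or bot-driven input often is.
session_intervals = [0.05, 0.05, 0.05, 0.05, 0.05]

mu, sigma = mean(baseline_intervals), stdev(baseline_intervals)
z_score = abs(mean(session_intervals) - mu) / sigma

# A large deviation from the account's normal rhythm is one anomaly signal
# that would feed into a wider risk assessment, not a verdict on its own.
if z_score > 3:
    print(f"Behavioural anomaly detected (z={z_score:.1f})")
```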

As such, using ChatGPT as a weapon against fraud could be seen as an extension of these strategies but not as a replacement. To counter increasingly sophisticated scams, financial service providers such as banks will need to invest in additional controls such as robust analytics to provide insights into user interactions, conversations, and customer preferences, and comprehensive audit and logging systems to track user activity and detect any potential abuse or fraudulent activity.

And it’s not all about fraud prevention. Financial institutions should also consider how they use biometric and conversational AI technologies to enhance customer interactions. Such AI-driven customer service platforms can ensure rapid response times and accurate resolutions, with automated customer support services providing quick resolutions and answers to customer queries.

Few world-changing technologies arrive without controversy, and ChatGPT has undoubtedly followed suit. While it may open some doors to criminal enterprise, it can also be used to thwart them. There’s no putting it back in the box. Instead, financial institutions must embrace the full armoury of defences available to them in the fight against fraud.
