
Banks have the Generative AI advantage, but must overcome challenges to fully utilise its benefits

Despite the many challenges the industry has faced, the banking sector has continued to prioritise digital transformation, and the pace of change is only accelerating. Generative artificial intelligence (AI) is the latest in a wave of disruptive technologies that will drastically transform the financial services and banking industry.

By Jay Limburn, VP of AI Product Management, IBM

When it comes to technological maturity, many banks and financial institutions are as good as, if not better than, their peers in most other industries. We have been working on generative AI with banks for several years, and they have been experimenting with the operational advantages of AI across their businesses. The IBM 2023 CEO Decision-Making in the Age of AI report showed that 75% of CEOs surveyed believe the organisation with the most advanced generative AI will have a competitive advantage. However, executives are also concerned about the potential risks around security, ethics and bias.

Leaders are looking to fuel their digital advantage to drive efficiencies, competitiveness and customer satisfaction, but they have not been able to fully operationalise AI as they face key challenges around implementation.

The biggest challenge and opportunity…data

Banks are continuing to digitally innovate, and data has emerged as one of the biggest challenges to fully utilising generative AI across the industry. Platforms like ChatGPT caught people’s imaginations and created excitement in the sector. But while such platforms rely on Large Language Models (LLMs) to analyse vast amounts of data, banks need to be able to choose from multiple models and embed their own data sets for analysis.

Instead of having one model to rule them all, banks will need to evaluate which models can be applied to their individual use cases. Banks are aware of the benefits generative AI can bring, so rather than stopping at a high-level view of what the technology can do, they need to look at how to modernise different elements of their business. This requires models to be trained on the bank’s own data sets to achieve a high level of accuracy and to fully operationalise the technology.

The amount of data is overwhelming many organisations, and banks are no exception. To succeed, financial institutions will need to embed their own data into generative AI models to fully operationalise the technology.
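
For illustration, the sketch below shows one retrieval pattern this implies: the bank’s own documents are indexed, and the most relevant ones are folded into the model’s prompt. It is a minimal sketch only; TF-IDF stands in for a production embedding model, and the documents and prompt format are assumptions, not any vendor’s API.

```python
# Minimal sketch of grounding a generative model in a bank's own data
# (a retrieval-augmented pattern). TF-IDF stands in for a production
# embedding model; the documents and prompt format are illustrative
# assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Our mortgage products require a minimum deposit of 10%.",
    "Business accounts include free transactions for the first year.",
    "Card disputes must be raised within 120 days of the transaction.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k internal documents most similar to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [documents[i] for i in scores.argsort()[::-1][:k]]

def build_prompt(query: str) -> str:
    """Embed the bank's own data in the prompt so answers stay grounded."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do I have to dispute a card charge?"))
```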

Banks can help shape regulation and governance

One of the other key challenges facing banks with regard to generative AI is regulation and governance. As it is a new and emerging technology, regulators will not necessarily understand AI, so the natural inclination is to say it cannot be used. Equally, some models cannot explain why they have made a decision. For trust and compliance, financial institutions need to be able to explain their decision-making process.

The more AI is embedded into organisations, the more important it is that leaders have a proactive approach to governance, which means having a legal framework to ensure AI is used responsibly and ethically, helping to drive confidence in its implementation and use.

But to meet these requirements, banks need to take an active part in shaping the regulatory framework and move to models that can explain their decision-making process.
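
As a simple illustration of the difference, the sketch below trains a small decision tree on synthetic loan data: unlike a black-box model, its full decision path can be printed for a customer or a regulator. The features, data and approval rule are assumptions for demonstration only.

```python
# Illustrative sketch of an explainable model: a shallow decision tree whose
# reasoning can be exported as plain text. The synthetic loan features and
# the approval rule are assumptions, not a real credit policy.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)
# Columns: income (thousands), debt-to-income ratio, missed payments.
X = rng.uniform(low=[20, 0.0, 0], high=[150, 0.8, 6], size=(500, 3))
# Synthetic ground truth: approve when debt ratio and missed payments are low.
y = ((X[:, 1] < 0.4) & (X[:, 2] < 2)).astype(int)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The full decision path is human-readable, so the institution can explain
# exactly why an application was approved or declined.
print(export_text(tree, feature_names=["income", "debt_ratio", "missed_payments"]))
```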

Generative AI will help, not lead

The response we have seen from banks to generative AI has been phenomenal. As an industry, financial services and banking can lead the charge around AI regulation and explore new models to leverage their own data for better outcomes.

However, this isn’t without its challenges. Operationalising generative AI has proved difficult due to potential risks, compliance and evolving regulatory requirements, and concerns are heightened as banks introduce their own data to AI models – which is why most generative AI use cases have so far focused on the customer care space.

Despite these challenges, banks have a huge opportunity to leverage generative AI, which will fundamentally change how we bank and how banks serve customers. Governance will play an active role in ensuring trust as we continue to explore the benefits of the technology. Importantly, in most use cases AI is here to help banks, not to take the lead.


Can ChatGPT help fight cybercrime?

OpenAI’s ChatGPT has taken the world by storm, with its sophisticated Large Language Model offering seemingly endless possibilities. People have put it to work in hugely creative ways, from the harmless scripting of stand-up comedy to less benign uses: AI-generated essays that pass university-level examinations, and copy that assists the spread of misinformation.

By Iain Swaine, Head of Cyber Strategy EMEA at BioCatch

GPTs (Generative Pre-trained Transformers) are deep learning models that generate conversational text. While many organisations are exploring how such generative AI can assist in tasks such as marketing communications or customer service chatbots, others are increasingly questioning its appropriateness. For example, JP Morgan has recently restricted its employees’ use of ChatGPT over accuracy concerns and fears it could compromise data protection and security.

As with all new technologies, essential questions are being raised, not least about its potential to enable fraud, as well as the power it may have to fight back as a fraud-prevention tool. Just as brands may use this next-gen technology to automate human-like communication with customers, cybercriminals can adopt it as a formidable tool for streamlining convincing frauds. Researchers recently discovered hackers are even using ChatGPT to generate malware code.

From malware attacks to phishing scams, chatbots could power a new wave of scams, hacks and identity thefts. Gone are the days of poorly written phishing emails. Now automated conversational technologies can be trained to mimic individual speech patterns and even imitate writing style. As such, criminals can use these algorithms to create conversations that appear to be legitimate but which mask fraud or money laundering activities.

Whether sending convincing phishing emails or impersonating a user to gain access to their accounts and sensitive information, fraudsters have been quick to capitalise on conversational AI. A criminal could use a GPT to generate conversations that appear to be discussing legitimate business activities but which are intended to conceal the transfer of funds. As a result, it is more difficult for financial institutions and other entities to detect patterns of money laundering when they are hidden in a conversation generated by a GPT.

Using GPT to fight back against fraud

But it is not all bad news. Firstly, ChatGPT is designed to prevent misuse by bad actors through several security measures, including data encryption, authentication, authorisation, and access control. Additionally, ChatGPT uses machine-learning algorithms to detect and block malicious activity. The system also has built-in safeguards against malicious bots, making it much harder for bad actors to use it for nefarious purposes.

In fact, technologies such as ChatGPT can actively help fight back against fraud.

Take business email compromise (BEC) fraud. Here, a cybercriminal compromises a legitimate business email account, often through social engineering or phishing, and uses it to conduct unauthorised financial transactions or to gain access to confidential information. It is often used to target companies with large sums of money and can involve the theft of funds, sensitive data, or both. It can also be used to impersonate a trusted business partner and solicit payments or sensitive information.

As a natural language processing (NLP) tool, ChatGPT can analyse emails for suspicious language patterns and identify anomalies that may signal fraud. For example, it can compare email text to past communications sent by the same user to determine whether the language used is consistent. While GPT-based analysis will form part of anti-fraud measures, it will be a small part of a much bigger toolbox.
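
As a rough sketch of how such a consistency check might work, the example below scores an incoming email against a sender’s past messages using TF-IDF similarity, a deliberately simple stand-in for the NLP models described above; the sample emails and the flagging threshold are assumptions.

```python
# Illustrative sketch: flag an email whose language diverges sharply from
# the sender's historical style. TF-IDF similarity is a simple stand-in for
# a full NLP model; the sample texts and 0.2 threshold are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

past_emails = [
    "Hi team, invoice 4417 is attached. Payment terms as usual, 30 days.",
    "Morning all, a quick reminder that month-end reports are due Friday.",
    "Thanks Sarah, confirming receipt of the signed contract.",
]
incoming = "URGENT!!! Wire $48,000 to the new account below before 5pm today."

vectorizer = TfidfVectorizer().fit(past_emails + [incoming])
history = vectorizer.transform(past_emails)
candidate = vectorizer.transform([incoming])

# Highest similarity between the new email and any previous message.
best_match = cosine_similarity(candidate, history).max()

if best_match < 0.2:  # illustrative threshold
    print(f"Flag for review: similarity to past style is only {best_match:.2f}")
```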

New technologies such as GPT mean that financial institutions will have to strengthen fraud detection and prevention systems and utilise biometrics and other advanced authentication methods to verify the identity of customers and reduce the risk of fraud. For example, financial organisations already use powerful behavioural intelligence technologies that analyse digital behaviour to distinguish genuine users from criminals.

In a post-ChatGPT world, behavioural intelligence will continue to play a vital role in detecting fraud. By analysing user behaviour, such as typing speed, keystrokes, mouse movements, and other digital behaviours, behavioural intelligence will aid in spotting anomalies that indicate activity is not generated or controlled by a real human. It is already being used very successfully to spot robotic activity that combines scripted behaviour with human controllers.

For example, a system can detect if a different user is attempting to use the same account or if someone is attempting to use a stolen account. Behavioural intelligence can also be used to detect suspicious activity, such as abnormally high or low usage or sudden changes in a user’s behaviour.
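
A toy version of this idea is sketched below: an anomaly detector is trained on a user’s normal session features and then scores new sessions. The keystroke and mouse features are synthetic assumptions for illustration, not any vendor’s actual behavioural model.

```python
# Toy sketch of behavioural anomaly detection: fit an IsolationForest on a
# user's normal sessions, then score new ones. The feature values below are
# synthetic assumptions, not real behavioural-intelligence data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: mean inter-keystroke interval (ms), typing-speed variance,
# mouse-path curvature. Genuine sessions cluster around the user's habits.
normal_sessions = rng.normal(loc=[180, 25, 0.6], scale=[20, 5, 0.1], size=(200, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A scripted bot types with near-zero timing variance and straight mouse paths.
bot_session = np.array([[40, 1, 0.02]])
human_session = np.array([[175, 27, 0.55]])

print(model.predict(bot_session))    # expected: [-1], flagged as anomalous
print(model.predict(human_session))  # expected: [1], consistent with the user
```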

As such, using ChatGPT as a weapon against fraud could be seen as an extension of these strategies, not a replacement for them. To counter increasingly sophisticated scams, financial service providers such as banks will need to invest in additional controls, such as robust analytics that provide insights into user interactions, conversations, and customer preferences, and comprehensive audit and logging systems that track user activity and detect potential abuse or fraudulent activity.
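
On the audit side, even a minimal structured log makes that kind of detection possible downstream. The sketch below uses a simple JSON-lines format; the field names and sample events are illustrative assumptions, not a standard.

```python
# Minimal sketch of a structured audit trail of user activity, written as
# JSON lines. Field names and the sample events are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("audit")
audit.setLevel(logging.INFO)
audit.addHandler(logging.FileHandler("audit.log"))

def log_event(user_id: str, action: str, **details) -> None:
    """Append one audit record per user action."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "action": action,
        **details,
    }))

log_event("user-7421", "login", channel="mobile", geo="GB")
# A new payee added minutes after a fresh login is the kind of sequence an
# analytics layer can later flag from these records.
log_event("user-7421", "payee_added", payee_account="hypothetical-acct")
```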

And it’s not all about fraud prevention. Financial institutions should also consider how they use biometric and conversational AI technologies to enhance customer interactions. Such AI-driven customer service platforms can ensure rapid response times and accurate resolutions, with automated customer support services providing quick resolutions and answers to customer queries.

Few world-changing technologies arrive without controversy, and ChatGPT has undoubtedly followed suit. While it may open some doors to criminal enterprise, it can also be used to thwart them. There’s no putting it back in the box. Instead, financial institutions must embrace the full armoury of defences available to them in the fight against fraud.


Chargeback fraud is growing – can AI and Big Data stem the tide?

According to our research, 60% of all chargeback claims will be fraudulent in 2023. This means not just that merchants must now treat a chargeback claim as more likely to be fraudulent than legitimate, but that individual merchants and the anti-fraud industry need to lay the groundwork to collect and analyze data that shows them what fraud looks like in real time.

By Monica Eaton, Founder of Chargebacks911

While many industries are benefiting from so-called ‘big data’ – the automated collection and analysis of very large amounts of information – chargeback management faces a problem. The information given to merchants about their chargeback claims tends to be very limited, being based on response codes from card schemes (‘Reason 30: Services Not Provided or Merchandise Not Received’). This means merchants must do a great deal of manual work to reconcile the information the card schemes supply with the information they have on hand.
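
To make that reconciliation gap concrete, the sketch below joins a scheme’s bare reason code to the merchant’s own order records so a dispute arrives with fulfilment evidence attached. The second reason code, the order fields, and the evidence rule are hypothetical, for illustration only.

```python
# Sketch of the manual reconciliation described above: joining a card
# scheme's sparse reason code to the merchant's internal order records.
# Everything beyond 'Reason 30', including the order fields and the
# evidence rule, is hypothetical.
REASON_CODES = {
    "30": "Services Not Provided or Merchandise Not Received",
    "53": "Not as Described or Defective Merchandise",  # hypothetical entry
}

orders = {
    "ORD-1001": {"shipped": True, "carrier_ref": "1Z999", "delivered": True},
    "ORD-1002": {"shipped": False, "carrier_ref": None, "delivered": False},
}

def enrich_chargeback(order_id: str, reason_code: str) -> dict:
    """Attach internal fulfilment evidence to a bare scheme reason code."""
    order = orders.get(order_id, {})
    return {
        "order_id": order_id,
        "reason": REASON_CODES.get(reason_code, "Unknown code"),
        # Proof of delivery is what lets a merchant contest a
        # 'merchandise not received' claim.
        "evidence": "proof of delivery" if order.get("delivered") else "none",
    }

print(enrich_chargeback("ORD-1001", "30"))
```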

While Visa’s Order Insight, Mastercard’s Consumer Clarity, and the use of chargeback alerts have reduced the number of chargebacks, merchants still have very little data on chargeback attempts. This article will look at how merchants can improve the level of data they receive on chargebacks and how they can use this data to create actionable insights on how to improve their handling of chargebacks.

What is big data?

2023’s big tech story is undoubtedly AI – specifically generative AI – but underpinning it is big data. Big data refers to the large and complex data sets generated by sources such as social media, internet searches, sensors, and mobile devices. These data sets are typically so large and complex that they cannot be processed and analyzed using traditional data processing methods.

In recent years, big data has become a crucial tool for businesses and organizations looking to gain insights into customer behavior, improve decision-making, and enhance operational efficiency. To process and analyze big data, companies are increasingly turning to advanced technologies like artificial intelligence (AI) and machine learning.

One example of big data driving innovation is ChatGPT, a large language model trained by OpenAI. ChatGPT uses big data to learn and understand language patterns, enabling it to engage in natural language conversations with users. To train the underlying model, OpenAI used a large and diverse data set of text, including books, websites, and social media posts – hundreds of gigabytes of text, processed using advanced machine-learning algorithms to create a language model with 175 billion parameters.

By using big data to train ChatGPT, OpenAI was able to create a language model that is more accurate and effective at understanding and generating responses than previous models. This has enabled ChatGPT to be used in a wide range of applications, including customer service chatbots, language translation services, and virtual assistants. Currently, technology very similar to ChatGPT is being used by Bing to replace traditional web searches, with mixed results, but, like self-driving cars, it is a matter of ‘when’, not ‘if’ this technology will become widespread.

AI and fraud

Chargeback fraud is a growing problem for businesses of all sizes. The National Retail Federation estimates that retailers lose $50 billion annually to fraud, with chargeback fraud making up a significant portion of that total. With the significant rise of online shopping, this type of fraud has become even more prevalent, as it is much easier for fraudsters to make purchases using stolen credit card information, forcing victims of fraud to then dispute the charges with their credit card issuer.

Chargeback fraud occurs when a customer disputes a valid charge made on their credit card, claiming that they did not make the purchase or that the merchandise they received was not as described. If the dispute is upheld, the merchant is forced to refund the money to the customer, along with any associated costs, and is typically charged a penalty fee by their payment processor. This not only results in a financial loss for the merchant but can also damage their reputation and lead to increased scrutiny from payment processors.
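
A quick worked example shows how the losses stack up; every figure below is an illustrative assumption, since fees and costs vary by processor and merchant.

```python
# Worked example of the cost of one upheld chargeback. All figures are
# illustrative assumptions; actual fees vary by payment processor.
sale_amount = 120.00   # disputed charge, refunded to the cardholder
penalty_fee = 25.00    # processor's chargeback fee (assumed)
goods_cost = 70.00     # merchandise already shipped and unrecoverable (assumed)

# The merchant returns the revenue, pays the fee, and has still lost the goods.
direct_loss = sale_amount + penalty_fee
total_loss = direct_loss + goods_cost
print(f"Refund plus fee: ${direct_loss:.2f}")
print(f"Including lost merchandise: ${total_loss:.2f}")
```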

Where can machine-learning technology help with fraud? To understand this, we have to first understand its limitations. ChatGPT and Large Language Models (LLMs) like it are not Artificial General Intelligence (AGI) – the sci-fi trope of a thinking computer like HAL 9000. Although they can pass the Turing Test, they do so not by thinking about the given information and answering accordingly, but by matching what looks like an appropriate answer from existing text.

This means that while they can produce fluent text by recombining existing text rather than ‘thinking’ about the substance of the question, they are prone to producing errors. That isn’t acceptable in a field like fraud prevention: nonsense answers with a veneer of truth won’t work in the binary question of whether a particular transaction was fraudulent, and unfounded accusations of fraud can damage a merchant’s reputation.

What is needed then are AI solutions built specifically for chargebacks. Companies like Chargebacks911 have been working on this for years now, and their solutions are based on big data models that have been built up over that time. Because of their extensive experience working in that field, they are the ideal partner to work with to bring AI up to speed and address the problem of chargebacks.
