
Can ChatGPT help fight cybercrime?

Open AI’s ChatGPT has taken the world by storm, with its sophisticated Large Language Model offering seemingly endless possibilities. People have put it to work in hugely creative ways, from the harmless scripting of standup comedy to less benign use cases, from AI-generated essays that pass university-level examinations to copy that assists the spread of misinformation.

Iain Swaine, Head of Cyber Strategy EMEA at BioCatch


GPTs (Generative Pre-trained Transformers) are deep learning models that generate conversational text. While many organisations are exploring how such generative AI can assist in tasks such as marketing communications or customer service chatbots, others are increasingly questioning its appropriateness. For example, JP Morgan recently restricted its employees’ use of ChatGPT over accuracy concerns and fears it could compromise data protection and security.

As with all new technologies, essential questions are being raised, not least about its potential to enable fraud, as well as the power it may have as a fraud prevention tool. Just as brands may use this next-gen technology to automate human-like communication with customers, cybercriminals can adopt it as a formidable tool for streamlining convincing frauds. Researchers recently discovered hackers are even using ChatGPT to generate malware code.

From malware attacks to phishing scams, chatbots could power a new wave of scams, hacks and identity thefts. Gone are the days of poorly written phishing emails. Now automated conversational technologies can be trained to mimic individual speech patterns and even imitate writing style. As such, criminals can use these algorithms to create conversations that appear to be legitimate but which mask fraud or money laundering activities.

Whether sending convincing phishing emails or impersonating a user to gain access to their accounts or sensitive information, fraudsters have been quick to capitalise on conversational AI. A criminal could use a GPT to generate conversations that appear to discuss legitimate business activities but are intended to conceal the transfer of funds. As a result, it is more difficult for financial institutions and other entities to detect patterns of money laundering when they are hidden in a conversation generated by a GPT.

Using GPT to fight back against fraud

But it is not all bad news. Firstly, ChatGPT is designed to prevent misuse by bad actors through several security measures, including data encryption, authentication, authorisation, and access control. Additionally, ChatGPT uses machine-learning algorithms to detect and block malicious activity. The system also has built-in safeguards against malicious bots, making it much harder for bad actors to use it for nefarious purposes.

In fact, technologies such as ChatGPT can actively help fight back against fraud.

Take business email compromise (BEC) fraud. Here a cybercriminal compromises a legitimate business email account, often through social engineering or phishing, and uses it to conduct unauthorised financial transactions or to gain access to confidential information. It is often used to target companies with large sums of money and can involve the theft of funds, sensitive data, or both. It can also be used to impersonate a trusted business partner and solicit payments or sensitive information.

As a natural language processing (NLP) tool, ChatGPT can analyse emails for suspicious language patterns and identify anomalies that may signal fraud. For example, it can compare email text to past communications sent by the same user to determine if the language used is consistent. While GPT will form an essential part of anti-fraud measures, it will be a small part of a much bigger toolbox.
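To make the idea concrete, one simple, purely illustrative approach to comparing a new email against a sender’s past messages is a bag-of-words similarity check. The function names and the 0.2 threshold below are arbitrary choices for this sketch, not part of any production anti-fraud system:

```python
import math
import re
from collections import Counter

def word_vector(text):
    """Lowercase word-frequency vector for a piece of text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a, b):
    """Cosine similarity between two Counter vectors (0.0 when either is empty)."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def flag_suspicious_email(new_email, past_emails, threshold=0.2):
    """Flag an email whose wording diverges sharply from the sender's history."""
    history = word_vector(" ".join(past_emails))
    score = cosine_similarity(word_vector(new_email), history)
    return score < threshold, score
```

Real systems would use far richer stylometric features (sentence length, punctuation habits, timing), but even this toy comparison shows how a message that shares almost no vocabulary with a sender’s history can be surfaced for review.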

New technologies such as GPT mean that financial institutions will have to strengthen fraud detection and prevention systems and utilise biometrics and other advanced authentication methods to verify the identity of customers and reduce the risk of fraud. For example, financial organisations already use powerful behavioural intelligence analysis technologies to analyse digital behaviour to distinguish between genuine users and criminals.

In a post-ChatGPT world, behavioural intelligence will continue to play a vital role in detecting fraud. By analysing user behaviour, such as typing speed, keystrokes, mouse movements, and other digital behaviours, behavioural intelligence will aid in spotting anomalies. These can indicate that activities are not generated or controlled by a real human. It is already very successfully being used to spot robotic activities which are a combination of scripted behaviour and human controllers.
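As an illustration of one such signal: human typing shows natural variation in the gaps between keystrokes, while scripted input tends to be suspiciously uniform. A minimal sketch, in which the `min_jitter` threshold is an arbitrary assumption for the example:

```python
import statistics

def keystroke_features(timestamps):
    """Mean and population stdev of inter-key intervals from keypress times (seconds)."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return statistics.mean(gaps), statistics.pstdev(gaps)

def looks_robotic(timestamps, min_jitter=0.01):
    """Humans show natural timing jitter; near-uniform gaps suggest scripted input."""
    _, jitter = keystroke_features(timestamps)
    return jitter < min_jitter
```

Production behavioural-intelligence systems combine many such features (mouse trajectories, device orientation, navigation patterns) rather than relying on a single threshold, but the underlying idea is the same: measure the variability a real human produces.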

For example, a system can detect if a different user is attempting to use the same account or if someone is attempting to use a stolen account. Behavioural intelligence can also be used to detect suspicious activity, such as abnormally high or low usage or sudden changes in a user’s behaviour.
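One common way to operationalise “abnormally high or low usage” is a simple z-score check against the user’s own baseline; the threshold of 3 below is a conventional but arbitrary choice for this sketch:

```python
import statistics

def usage_anomaly(history, current, z_threshold=3.0):
    """Flag a usage reading that deviates sharply from the user's own baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return current != mean
    return abs(current - mean) / stdev > z_threshold
```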

As such, using ChatGPT as a weapon against fraud should be seen as an extension of these strategies, not a replacement. To counter increasingly sophisticated scams, financial service providers such as banks will need to invest in additional controls, such as robust analytics that provide insights into user interactions, conversations, and customer preferences, and comprehensive audit and logging systems that track user activity and detect potential abuse or fraudulent activity.

And it’s not all about fraud prevention. Financial institutions should also consider how they use biometric and conversational AI technologies to enhance customer interactions. Such AI-driven customer service platforms can ensure rapid response times and accurate resolutions, with automated customer support services providing quick resolutions and answers to customer queries.

Few world-changing technologies arrive without controversy, and ChatGPT has undoubtedly followed suit. While it may open some doors to criminal enterprise, it can also be used to thwart them. There’s no putting it back in the box. Instead, financial institutions must embrace the full armoury of defences available to them in the fight against fraud.


Specified user FinTechs are helping lenders ride the AI wave for origination and underwriting

Raman Vig and Sudipta K Ghosh, co-founders of Roopya

The Indian digital lending industry is undergoing a major transformation, growing at an unprecedented pace. Recent statistics show that more than 200 million people availed of retail loans in a year, a figure growing at a 20% CAGR.


The significant rise in disbursement volume not only reflects an uptick in the number of borrowers but also demonstrates the emergence of digital lending players in the market.

Many FinTech companies are overshadowing brick-and-mortar lending institutions by digitising every aspect of the lending process. This can be attributed to the rapid adoption of Artificial Intelligence (AI) and Machine Learning (ML) models that expedite and enhance lending. Given this scenario, new-age lenders are moving from traditional risk models to a data-backed approach to stay relevant in the market.

A major step towards addressing gaps in the lending ecosystem

Data is the most critical element for any AI/ML model. In lending, credit bureau data and alternate data become the basis for any propensity model for loan origination, for preparing underwriting scorecards, or for creating early warning signals on existing portfolios.

Data is therefore the most powerful and significant force driving the digital lending industry. Yet the Indian lending industry has flagged several concerns about how borrower data is distributed among lenders.

India has more than 1,200 active lenders, of which only 1% have access to advanced data and analytics tools. This creates a significant gap on the supply side, as small and mid-sized lenders lose out in the data-driven lending race. New-age loan origination and underwriting tools, accessible only to large lenders, create a huge disparity in data intelligence. Consequently, smaller lenders incur high acquisition and underwriting costs, ultimately leading to high interest rates for borrowers.

To rein in a largely unregulated lending scenario, the Reserve Bank of India (RBI) moved to put guardrails on the ecosystem. The apex bank announced the appointment of a new set of FinTech companies as ‘Specified Users’ of Credit Information Companies (CICs) under the Credit Information Companies (Amendment) Regulations, 2021, based on stringent eligibility criteria. These Specified User FinTechs get access to credit data, run analytics, and help digital lenders make data-driven decisions.

The appointment of Specified User FinTech players has not only regulated credit data distribution but also resulted in more streamlined and secure digital loan processing.

AI underwriting models

Every year, over 15 million ‘New to Credit’ borrowers enter the credit ecosystem. This makes loan underwriting a tricky process for lenders under existing conventional models. Every borrower has unique financial circumstances, which adds considerable uncertainty to credit decisions.

An underwriting practice not backed by data and analytics can lead to heavy losses for lenders. That is where Specified User FinTechs come to the rescue, giving lenders the ability to interpret enormous amounts of data far faster and more accurately than conventional underwriting practices allow. They equip lenders with AI- and ML-backed underwriting models, providing better oversight of how data sets can be used strategically to craft personalized solutions for each borrower.

FinTech players are among the earliest adopters of technology. The advent of Specified User FinTechs has helped lenders venture into segments that conventional lenders deemed high-risk. Simply put, they have bridged the accessibility gap for underserved lenders, enabling them to ride the AI wave.

Predictive algorithm to streamline the lending process

In practical terms, AI performs tasks such as predicting whether a loan will be repaid or will default. Specified User FinTechs combine AI algorithms with ML classification mechanisms to create probability models that give lenders better credit-decisioning ability. These technologies are applied to improve credit approval and risk analysis and to measure borrowers’ creditworthiness, helping small and mid-sized lenders scale with ease.
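A probability model of this kind can be sketched with a tiny logistic regression trained from scratch; the two features and the toy data below are invented purely for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit logistic regression by stochastic gradient descent; returns (weights, bias)."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def default_probability(w, b, features):
    """Predicted probability that a borrower with these features defaults."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, features)) + b)

# Hypothetical training data: (debt-to-income ratio, had a missed payment)
X = [[0.1, 0], [0.2, 0], [0.3, 0], [0.8, 1], [0.9, 1], [0.7, 1]]
y = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(X, y)
```

Real underwriting models draw on hundreds of bureau and alternate-data features, but the mechanism is the same: learn weights from past outcomes, then score each new applicant as a probability of default.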

FinTech companies recognized as Specified Users have the capability to store huge amounts of credit data and to build AI and ML models on structured and unstructured data sets. This yields more streamlined and better insights for borrower segmentation and loan-repayment prediction, and helps in building better collection strategies. Specified User FinTechs are also helping lenders stay on top of automation, whether in loan underwriting or in pricing for personalized offerings.
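Borrower segmentation of this sort is often done with clustering. Below is a minimal k-means sketch over two made-up features (annual income and loan amount); the deterministic “first k points” initialisation is chosen purely to keep the example reproducible:

```python
def kmeans(points, k, iters=20):
    """Plain k-means: returns (centroids, clusters) after `iters` assignment rounds.
    Initialises centroids from the first k points for reproducibility."""
    centroids = [tuple(p) for p in points[:k]]
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])),
            )
            clusters[nearest].append(p)
        # Recompute each centroid as the mean of its cluster (keep old one if empty)
        centroids = [
            tuple(sum(dim) / len(cl) for dim in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical borrowers: (annual income, loan amount)
borrowers = [(20, 5), (80, 40), (22, 6), (82, 42), (21, 5), (79, 41)]
centroids, segments = kmeans(borrowers, k=2)
```

Each resulting segment can then be priced or collected on differently, which is the practical payoff of segmentation.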

Similarly, the ability to recognize early warning signs proves highly beneficial for lenders’ credit risk management. Because Specified User FinTechs are recognized by the RBI, lenders can be certain of their credibility in terms of data and analytics.

Specified User FinTechs leverage data-backed behavioral analysis to detect suspicious borrowers and red-flag them as potential fraud. Unlike traditional analysis tools, this can alleviate the possibility of human errors arising from bias, discrimination, or exhaustive manual processing. By utilizing Natural Language Processing (NLP), lenders can generate accurate warning signals instantly.

Final Thoughts

The landscape of digital lending in India continues to evolve. Lenders can reap the benefits of data hygiene performed by the AI and ML infrastructure established at the Specified User FinTech’s end. By automating and bringing all significant practices into one place, lenders are empowered to improve customer experience, leverage predictive analysis, enhance risk assessment, improve credit decisions, and break through sales bottlenecks.
