
Financial services firms are seeing rapid growth in data volumes and diversity. Various trends are contributing to this growth of available data across the sector. One of the drivers is that firms need to disclose more to comply with the continuing push towards regulatory transparency.

by Neil Sandle, Head of Product Management, Alveo


In addition, a lot more data is being generated and collected through digitalisation, as a by-product of business activities, often referred to as ‘digital exhaust,’ and through the use of innovative new techniques such as natural language processing (NLP), to gauge market sentiment. These new data sets are used for a range of reasons by finance firms, from regulatory compliance to enhanced insight into potential investments.

The availability of this data and the potential it provides, along with increasingly data-intensive jobs and reporting requirements, means financial firms need to improve their market data access and analytics capabilities.

However, making good use of this data is complex. To avoid being inundated, firms need to develop a shopping list of the companies or financial products they want data on, and then decide what information to collect. Once the data is sourced, they need to expose which data sets are available and make clear to business users what the sources are, when the data was requested, what came back, and what quality checks were applied.

Basically, firms need to be transparent about what is available within the company and what its provenance has been. They also need to know all the contextual information: was the data disclosed directly, is it expert opinion or just sentiment from the Internet, and who has permission to use it?
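As an illustration, this kind of contextual and provenance metadata can be captured in a simple record kept alongside each data set. The field names below are hypothetical, chosen for this sketch rather than taken from any particular vendor's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical provenance record for a sourced data set; the fields are
# illustrative, not a standard schema.
@dataclass
class ProvenanceRecord:
    dataset_name: str
    source: str                  # e.g. a vendor feed, filing, or NLP pipeline
    disclosed_directly: bool     # direct disclosure vs derived sentiment
    requested_at: datetime       # when the data was requested
    received_at: datetime        # when it came back
    quality_checks: list = field(default_factory=list)
    permitted_users: list = field(default_factory=list)

    def is_permitted(self, user: str) -> bool:
        """Check whether a given user may consume this data set."""
        return user in self.permitted_users

record = ProvenanceRecord(
    dataset_name="esg_disclosures_q3",
    source="issuer_filing",
    disclosed_directly=True,
    requested_at=datetime(2022, 10, 1, tzinfo=timezone.utc),
    received_at=datetime(2022, 10, 2, tzinfo=timezone.utc),
    quality_checks=["completeness", "cross-reference"],
    permitted_users=["research_desk"],
)
print(record.is_permitted("research_desk"))  # True
```

With records like this in place, answering "what do we have, where did it come from, and who may use it?" becomes a query rather than an investigation.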

With all this information available it becomes much easier for financial firms to decide what data they wish to use.

There are certain key processes data needs to go through before it can be fully trusted. If the data is for operational purposes, firms need a data set that is high-quality and delivered reliably from a provider they can trust. As they are going to put it into an automated, day-to-day recurring process, they need predictability around the availability and quality of the data.

However, if the data is for market exploration or research, the user might only want to use each data set once but is nevertheless likely to be more adventurous in finding new data sets that give them an edge in the market. The quality of the data and the ability to trust it implicitly are still critically important.
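For the operational case, the predictability described above can be enforced with simple quality gates run before a recurring process consumes a data set. A minimal sketch, in which the thresholds and field names are assumptions chosen for illustration rather than industry standards:

```python
from datetime import date, timedelta

# Illustrative quality gates for an operational (recurring) data feed.
# The staleness and completeness thresholds are assumptions.
def passes_quality_gates(rows, expected_fields, as_of,
                         max_staleness_days=1, min_completeness=0.95):
    """Return True if the data set is fresh and complete enough to use."""
    if not rows:
        return False
    # Freshness: data must fall within the allowed staleness window.
    if date.today() - as_of > timedelta(days=max_staleness_days):
        return False
    # Completeness: the share of rows with all expected fields populated.
    complete = sum(
        1 for r in rows
        if all(r.get(f) not in (None, "") for f in expected_fields)
    )
    return complete / len(rows) >= min_completeness

rows = [
    {"isin": "XS0000000001", "price": 101.2},
    {"isin": "XS0000000002", "price": 99.7},
    {"isin": "XS0000000003", "price": None},  # incomplete row
]
# One of three rows is incomplete, so the feed fails the 95% gate.
print(passes_quality_gates(rows, ["isin", "price"], as_of=date.today()))
```

An exploratory workflow might relax `min_completeness` or skip the gate entirely, trading predictability for faster access to a novel data set.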

Inadequate existing approaches

There is a range of drawbacks with existing approaches to market data management and analytics. IT is typically used to automate processes quickly, but the downside is that financial and market analysts' workflows are often hardwired to specific datasets and data formats.

With existing approaches, it is often difficult to bring in new data sets because new data comes in various formats. Typically, onboarding and operationalising new data is very costly. If users want to either bring in a new source or connect a new application or financial model, it is not only very expensive but also error-prone.

In addition, it is often hard for firms to ascertain the quality of the data they are dealing with, or even to make an educated guess of how much to rely on it.

Market data collection, preparation and analytics are also historically different disciplines, separately managed and executed. Often when a data set comes in, somebody will work on it to verify, cross-reference and integrate it. That data then has to be copied and put in another database before another analyst can run a risk or investment model against it.

Gathering data is hard enough in the first place; shaping it and placing it where an analyst can work on it is more cumbersome still. Consequently, the logistics do not lend themselves to a quick turnaround.

The benefits of big data tools

The latest big data management tools can help a great deal in this context. They tend to use cloud-native technology, so they are easily scalable up and down depending on the intensity or volume of the data. Using cloud-based platforms can also give firms a more elastic way of paying and of ensuring they only pay for the resources they use.

Also, the latest tools are able to facilitate the integration of data management and analytics, something which has proved to be difficult with legacy approaches. The use of underlying technologies like Cassandra and Spark makes it much easier to bring business logic or financial models to the data, streamlining the whole process and driving operational efficiencies.

Furthermore, in-memory data grids can be used to deliver a fast response time to queries, together with integrated feeds to streamline onboarding and deliver easy distribution. These kinds of feeds can provide last-mile integration both to consuming systems and to users, enabling them to gain critical business intelligence that in turn supports faster and more informed decision-making.

Maximising Return on Investment

In summary, all firms in the financial services sector should be looking to maximise their data return on investment (ROI). They need to source the right data and ensure they are getting the most from it. The 'know your data' message is important here because finance firms need to know what they have, understand its lineage and track its distribution, which is in essence good data governance.

Equally important, finance firms should also ensure their stakeholders know what data is available and can easily access the data they require. Ultimately, the latest big data management tools will make it easier for finance firms to gain that all-important competitive edge.
