If there’s one positive thing about social media, it’s that it’s keeping everyone on their toes – especially service providers. Woe to the retailer, airline, bank, etc. that can’t keep its operations running so that they are available when and how users want them, 24/7, regardless of volume, transaction level, network congestion, or any other factor.
And the users are often merciless; just ask the folks in the IT department at banks like NatWest, Lloyds Bank, HSBC, Nationwide UK, or any of the other banks that experienced temporary service outages in December alone. Customers who couldn't access their accounts, move their money, pay bills, or otherwise use banking services angrily vented their frustrations in an ongoing barrage of rants against the institutions, using language that would make even sailors blush.
Ask any IT person whose managers are breathing down his or her neck for answers: it's not an experience anyone wants to repeat. In fact, IT personnel likely resent being left holding the bag when an outage occurs; they may, for example, have recommended more advanced monitoring systems that management baulked at paying for. They're forced to make do with what they have, and what they have may not be up to the task at hand: keeping services stable and available during times of network stress, whether from extra volume, congestion, or anything else.
On the other hand, you can't blame management for baulking at investing in the latest and greatest system that might solve outage issues, as opposed to one that definitely will. Vendors wax eloquent about how their solution is the answer to, for example, cybersecurity issues, but despite the money companies throw at these solutions, hacking is as bad as ever. You can't blame the C-suite for being sceptical about outage solutions as well.
While IT departments might dither on cybersecurity solutions, the answer to their outage issues is already at hand, in their often overlooked but always important log files. These files provide a wealth of information about everything that goes on in an organization. Data from infrastructure, applications, security and IoT areas can inform CRM, marketing, ERP and other business initiatives, and can also reveal why outages occur and what to do about them.
But parsing through log files in search of actionable insights is a difficult job, too difficult for human beings. What's needed is a machine learning, artificial intelligence-powered log analysis system: one that enables its users to parse through unstructured data in order to develop actionable insights. Such systems allow users to define what they are looking for as a data structure, and feature an analytics engine smart, fast, and robust enough to parse through thousands, if not millions, of files and data streams.
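To make the idea of "defining what you are looking for as a data structure" concrete, here is a minimal sketch in Python. The log format, field names, and sample lines are hypothetical illustrations, not the API of any particular product; a real system would ingest streams at far larger scale.

```python
import re
from collections import Counter

# Hypothetical syslog-style lines standing in for a live log stream.
LOG_LINES = [
    "2024-01-05 10:02:11 ERROR payments timeout connecting to db-3",
    "2024-01-05 10:02:12 INFO auth user login ok",
    "2024-01-05 10:02:13 ERROR payments timeout connecting to db-3",
]

# The structure the user defines: which named fields to pull out of free text.
LOG_PATTERN = re.compile(
    r"(?P<date>\S+) (?P<time>\S+) (?P<level>\w+) (?P<service>\w+) (?P<message>.+)"
)

def parse(line):
    """Turn one unstructured line into a dict of fields, or None on no match."""
    m = LOG_PATTERN.match(line)
    return m.groupdict() if m else None

records = [r for r in map(parse, LOG_LINES) if r]

# Once structured, the data supports simple analytics, e.g. errors per service.
errors_by_service = Counter(
    r["service"] for r in records if r["level"] == "ERROR"
)
print(errors_by_service)  # Counter({'payments': 2})
```

The point of the sketch is the pipeline shape, unstructured text in, structured records out, analytics on top; an AI-powered system automates the pattern discovery that is hand-written here.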
It makes sense. Just think about the installation of a new piece of network software: How many DLLs get written, how many dependencies are created, how many config files are adjusted? Too many to count, that's for sure; now go figure out where all those changes were made. Yet one small "adjustment" in a config file could be enough to halt network traffic for hours. With AI-based log file analysis, however, it would be possible to prevent such outages; as soon as an unwelcome change is made, the system could alert IT managers and provide them with the exact information they need to resolve the issue.
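The config-change scenario above can be sketched as a simple change detector: snapshot the files before an install, snapshot again after, and flag anything that differs. This is a toy illustration of the principle, assuming hypothetical file names, not how any specific product implements it; a real system would watch logs and files continuously rather than compare two snapshots.

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(paths):
    """Map each config file path to a SHA-256 digest of its contents."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(before, after):
    """Files whose digest differs between two snapshots: the alert candidates."""
    return sorted(p for p in before if before[p] != after.get(p))

# Simulate an install silently "adjusting" a config file.
with tempfile.TemporaryDirectory() as d:
    cfg = Path(d) / "service.conf"       # hypothetical config file
    cfg.write_text("max_connections=100\n")
    baseline = fingerprint([cfg])
    cfg.write_text("max_connections=1\n")  # the one small unwelcome change
    current = fingerprint([cfg])
    print(changed_files(baseline, current))  # lists the altered file's path
```

The digest comparison pinpoints exactly which file changed, which is the "exact information" an IT manager needs to start resolving the issue.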
And that AI-powered system could be used to analyse log files for many other purposes, providing organizations with insights about customer behaviour, expenses, better ways to do marketing; the list is endless. What's needed is not another "new" system promising to solve a problem like outages, but one, like AI-powered log analysis, that unlocks the data companies already have.
By: Dror Mann, VP of Product, Loom Systems