This is the second edition of this spotlight paper; the first edition was published in 2014. Three years ago, discussions about big data were still focused mainly on the volume, variety and velocity of data, and big data deployments were only starting to move beyond the trial stage into production.
Today, while large numbers of companies are still testing Hadoop and similar environments, there are plenty of live implementations that have demonstrated their value. However, there is also a second trend beyond conventional analytics: it is becoming increasingly popular to offload functions such as real-time log analytics to third-party platforms such as Splunk (which is both a big data analytics platform and a security information and event management (SIEM) enabler) in order to support security and IT service functions. These are big data implementations in the same way that running something like sentiment analytics on Hadoop is, and they raise the same issues.
The purpose of this paper is to examine the issues that arise when big data implementations move beyond skunk works and into general-purpose use across an enterprise. In particular, we focus on the issues that arise when organisations integrate their mainframe system of record with big data implementations. Note that throughout this paper we use "mainframe" as shorthand for a z/OS environment.
If you want to find out more, call +44 (0)207 043 9750 or email us.