Changed data processing requirements are driving a need for new hardware architectures - Data architectures need revisiting as well

New data sources, such as IoT sensors and wearable devices, are delivering vast amounts of additional data. Streaming analytics is enabling new business models and new revenue streams, bringing forth new applications that demand high availability and low latency. All of this is putting a real strain on existing data architectures. In one of my primary areas of research, I am now seeing a real acceleration in AIOps vendors having to rearchitect their data movement, storage and processing environments to meet the demand for real-time analysis and resolution of performance and availability issues in highly complex, widely distributed systems. But the same issues are becoming apparent across industries and use cases.

One vendor, N5 Technologies, had already addressed these issues in a very innovative fashion with its RUMI platform. Initially, the platform was tightly focused on financial services and, in particular, the need for speed in areas like High Frequency Trading. But, according to N5 Technologies’ CEO, Girish Mutreja, the focus of attention has switched from raw transactional performance to the wider issue of handling very large datasets at speed. For N5, this has opened up a number of new vertical market opportunities, which in turn require new thinking about its go-to-market strategy.

I’ll reference some potential new use cases for RUMI that open up much wider opportunities in a moment. But first it is probably sensible to summarise briefly what data architecture changes might be needed, and why. Let’s take credit card authorisations as an example.

1. The cardholder presents their card to a merchant in exchange for goods or services. The request might originate from a credit card terminal or in-store point of sale, an eCommerce website gateway, or mobile and in-app payment acceptance.

2. The merchant sends a request for payment authorisation to their payment processor.

3. The payment processor submits the transaction to the appropriate card association, from where it eventually reaches the issuing bank.

4. The authorisation request made to the issuing bank includes parameters such as the credit card number and CVV, merchant ID, customer location (latitude/longitude) and charge amount.

5. The issuing bank approves or declines the transaction. Transactions can be declined for insufficient funds or available credit, because the cardholder’s account has been closed or has expired, because a payment is past due, or for other reasons.

6. Transactions can also be declined for suspected fraud, not just insufficient funds or credit. For example, if the customer’s current location would be physically impossible to reach from the location of their last known transaction in the time elapsed, the request is declined or switched to two-factor authentication (2FA). There is a minimal sketch of this check just after these steps.

7. The issuing bank then sends the approval or denial back along the line to the card association, the merchant’s bank and finally the merchant.
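To make step 6 concrete, here is a minimal sketch of an “impossible travel” check, assuming a simple maximum-plausible-speed rule. The class and method names and the 900 km/h threshold are illustrative assumptions, not N5’s or any issuer’s actual fraud logic.

```java
import java.time.Duration;
import java.time.Instant;

public class ImpossibleTravelCheck {

    // Assumed ceiling on plausible travel speed (roughly airliner speed).
    static final double MAX_PLAUSIBLE_SPEED_KMH = 900;

    /** Great-circle distance between two lat/long points (haversine formula). */
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double earthRadiusKm = 6_371;
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                   * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * earthRadiusKm * Math.asin(Math.sqrt(a));
    }

    /** Step up to 2FA if the cardholder could not plausibly have travelled this far. */
    static String decide(double lastLat, double lastLon, Instant lastSeen,
                         double curLat, double curLon, Instant now) {
        double km = distanceKm(lastLat, lastLon, curLat, curLon);
        double hours = Duration.between(lastSeen, now).toMillis() / 3_600_000.0;
        if (hours <= 0 || km / hours > MAX_PLAUSIBLE_SPEED_KMH) {
            return "STEP_UP_2FA"; // physically implausible: challenge rather than approve
        }
        return "CONTINUE"; // plausible travel; proceed to funds and account-status checks
    }
}
```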

That’s the credit card authorisation process in a nutshell. From the user’s perspective, the end-to-end card authorisation process should only take one to two seconds. In that time, a number of transactions take place between the various participants. These transactions are data intensive and touch multiple domain objects. Each of these transactions needs very low latency, typically under 100ms and sometimes under 10ms, to meet end-user response time expectations. Now consider that Visa alone authorises some sixty thousand credit card transactions a second and you get an idea of the scale of the data handling challenge.
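A hedged back-of-the-envelope calculation shows why those per-transaction budgets are so tight. The hop count and service time below are illustrative assumptions, not Visa or N5 figures.

```java
public class LatencyBudget {
    public static void main(String[] args) {
        double endToEndBudgetMs = 1_500; // midpoint of the 1-2 second user expectation
        int sequentialHops = 12;         // assumed: terminal -> processor -> network -> issuer and back
        System.out.printf("Per-hop budget: ~%.0f ms%n",
                endToEndBudgetMs / sequentialHops); // ~125 ms

        // Throughput side, via Little's law (L = lambda * W): at 60,000
        // authorisations/s, a service spending 10 ms per request must hold
        // ~600 requests in flight at any instant just to keep up.
        double tps = 60_000, serviceTimeSec = 0.010;
        System.out.printf("In-flight requests: ~%.0f%n", tps * serviceTimeSec);
    }
}
```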

In modern, micro-services based multi-tier architectures, data handling is separated from transaction processing. Separating the tier responsible for handling high volumes of data from the transactional tier promotes scalability, performance and maintainability when dealing with significant data loads. But, while multi-tier architectures provide scalability and performance benefits, they are not without their limits.

With specific regard to data handling, the performance of a multi-tier architecture can be influenced by the data access patterns of the application. For example, if the application frequently requires complex joins or aggregations across multiple tiers, it can impact performance due to the network overhead and data transfer between tiers. Optimising the data access patterns and minimising unnecessary round trips can help mitigate these issues.
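The classic case is the chatty, one-object-at-a-time access pattern. The sketch below contrasts it with a single bulk fetch; CardRepository and its methods are hypothetical stand-ins for whatever data tier an application actually uses.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

// Hypothetical data-tier interface: each call to findCard costs one network
// round trip; findCards fetches a whole set in a single round trip.
interface CardRepository {
    Card findCard(String cardId);
    Map<String, Card> findCards(Set<String> cardIds);
}

record Card(String id, long availableCreditMinor) {}

class RoundTripPatterns {
    // Anti-pattern: N card lookups -> N network hops to the data tier.
    static long chattyTotal(CardRepository repo, List<String> cardIds) {
        return cardIds.stream()
                .map(repo::findCard)
                .mapToLong(Card::availableCreditMinor)
                .sum();
    }

    // Better: one bulk fetch, then work on the co-located results in memory.
    static long batchedTotal(CardRepository repo, List<String> cardIds) {
        return repo.findCards(Set.copyOf(cardIds)).values().stream()
                .mapToLong(Card::availableCreditMinor)
                .sum();
    }
}
```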

In my initial review of the RUMI platform, I had taken the description of data being co-located with the transaction at face value, without thinking through its architectural implications. It was only as I was investigating, more generally, the impact of handling large volumes of data in high performance, low latency systems that I began to question how Girish and his team had overcome the potential data handling challenges mentioned above. Indeed, at one point I even began to question whether RUMI was in fact a multi-tier, micro-services based system at all.

Let me state unequivocally here that RUMI is definitely a micro-services based, multi-tier architecture platform. Overcoming the latency challenges inherent in such systems has involved some innovative thinking in both application design and hardware architecture, both to minimise the need for network hops and to ensure the lowest possible latency for any hops that do become necessary.

Now, this is a blog, not a technical white paper, so let me try and keep things simple. If you take the credit card example above, the customer, the merchant, the credit card and so on are all domain objects. RUMI co-locates as much data about an object as possible, so any transaction that does not need to be linked to another domain object can be processed without any network hop. In this instance, the transaction and data tiers have been collapsed. But where two or more domain objects, e.g. customer and credit card, need to be linked, there will need to be a network hop, so the transaction and data tiers remain distributed.
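A minimal sketch of that distinction might look as follows; the shard, fabric and method names are illustrative assumptions, not RUMI’s actual API.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in for the low-latency messaging backbone: one
// request/reply to whichever process owns the target domain object.
interface MessageFabric {
    boolean request(String targetObjectId, String operation, long payload);
}

class DomainShard {
    // All of this shard's domain-object state lives in-process.
    private final Map<String, Long> creditAvailableMinor = new HashMap<>();
    private final MessageFabric fabric;

    DomainShard(MessageFabric fabric) { this.fabric = fabric; }

    /** Case 1: the transaction touches only this shard's object -> zero network hops. */
    boolean authoriseLocal(String cardId, long amountMinor) {
        Long available = creditAvailableMinor.get(cardId);
        if (available == null || available < amountMinor) return false;
        creditAvailableMinor.put(cardId, available - amountMinor);
        return true;
    }

    /** Case 2: the transaction links two domain objects -> exactly one hop. */
    boolean authoriseLinked(String cardId, String customerId, long amountMinor) {
        // The customer object lives on another shard, so its risk check
        // costs one trip over the fabric; the card-side check stays local.
        boolean customerOk = fabric.request(customerId, "risk-check", amountMinor);
        return customerOk && authoriseLocal(cardId, amountMinor);
    }
}
```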

This is where the hardware architecture design becomes important. Girish has developed a high-powered messaging backbone that is akin to a Remote Direct Memory Access (RDMA) fabric, with all the data residing in NVMe (Non-Volatile Memory Express) storage. NVMe is a storage access and transport protocol for flash and next-generation solid-state drives (SSDs) that delivers some of the highest throughput and fastest response times yet seen for enterprise workloads.
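To see why that combination matters, compare typical latency orders of magnitude; the figures below are rough industry rules of thumb, not N5 measurements.

```java
// Rough published orders of magnitude, not N5 measurements:
//   local DRAM access           ~0.1 µs
//   RDMA transfer, same rack    ~1-2 µs
//   NVMe SSD random read        ~10-100 µs
//   kernel TCP round trip       ~50-500 µs in a typical datacentre
public class HopCostSketch {
    public static void main(String[] args) {
        double tcpHopMicros = 200;  // assumed midpoint for a conventional hop
        double rdmaHopMicros = 2;   // assumed RDMA-style transfer
        System.out.printf("Approximate saving per unavoidable hop: ~%.0fx%n",
                tcpHopMicros / rdmaHopMicros);
    }
}
```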

Modern global trading systems are highly data intensive and need to touch multiple domain objects in multiple different systems. Each step illustrates the delicate balance between co-located data access and data access across network hops. Minimising the number and frequency of network hops, and using the most modern, fastest in-memory storage, is one of the key ways RUMI delivers exceptional data handling and transaction performance. Having demonstrated that it can deliver performance in High Frequency Trading environments far more cost effectively than in-house custom developments, RUMI is now well placed to do the same for other areas in retail, logistics and financial services where handling large amounts of data from a wide variety of sources in real time is a key business differentiator.

Most enterprises don’t have the depth of available skills, the time or the money to develop such a platform in-house. After all, 50 years ago, enterprises weren’t expected to develop their own mainframes. RUMI is a cloud-ready platform that you should consider if your business relies on handling large amounts of data in real time to gain competitive insights and deliver exceptional digital customer experiences.