It’s replication, but not as you know it

I have had an update from Attunity, specifically with respect to both Attunity Replicate and Gold Client Solutions. I will discuss the latter in a second article.

Attunity Replicate does what you would normally expect replication to do, and it is used for the sorts of use cases you would expect of replication. Note, however, that there is a separate product, Attunity CloudBeam, for replication to the cloud.

However, there’s something about Attunity Replicate that you wouldn’t normally expect. Typically, replication products use bulk loading or change data capture (CDC) to collect the data; there is a transform engine, and target connectors then (bulk) load the data into the target. You can use Attunity Replicate this way if you want to, but the interesting thing about the product is that Attunity has built in-memory stream processing into its engine, optimised for high-volume CDC input, together with a stream loader at the back end. Of course this makes a lot of sense: if a major use case is supporting real-time business intelligence, then you really don’t want to slow things down by having to land the data somewhere in order to do your transformations.
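
To make that architecture concrete, here is a minimal sketch, in Python, of what a land-free CDC pipeline looks like in principle: change events are captured from the source, transformed in memory and micro-batched straight into the target. All of the names and structures below are my own illustration, not Attunity’s actual API.

```python
import time
from dataclasses import dataclass
from typing import Iterable, Iterator


@dataclass
class ChangeEvent:
    table: str
    operation: str  # "INSERT", "UPDATE" or "DELETE"
    row: dict


def capture_changes(log_reader) -> Iterator[ChangeEvent]:
    """Tail the source database's transaction log (the CDC step)."""
    for record in log_reader:
        yield ChangeEvent(record["table"], record["op"], record["data"])


def lightweight_transform(events: Iterable[ChangeEvent]) -> Iterator[ChangeEvent]:
    """Apply cheap, row-level transformations entirely in memory."""
    for event in events:
        # e.g. normalise column names on the way through
        event.row = {key.lower(): value for key, value in event.row.items()}
        yield event


def stream_load(events: Iterable[ChangeEvent], target, batch_size: int = 500) -> None:
    """Micro-batch events straight into the target (the 'stream loader')."""
    batch = []
    for event in events:
        batch.append(event)
        if len(batch) >= batch_size:
            target.apply(batch)  # one round trip per micro-batch
            batch = []
    if batch:
        target.apply(batch)  # flush the final partial batch


# Wiring it together: CDC -> in-memory transform -> stream loader.
# Nothing is ever landed in a staging area on disk:
# stream_load(lightweight_transform(capture_changes(log_reader)), target)
```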

Note that if you have some heavy-duty, complex transformations to do, then you probably don’t want to do them in the Attunity environment: do the lightweight processing using Attunity and use the data warehouse for the heavy lifting.
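
As a hypothetical illustration of that division of labour, the sketch below does cheap row-level shaping in flight and leaves the expensive, set-based aggregation to the warehouse after the data has landed; the table, column and function names are invented for the example.

```python
def shape_in_flight(row: dict) -> dict:
    """Lightweight, per-row work that suits the replication layer."""
    row.pop("internal_audit_col", None)  # drop a column the target doesn't need
    row["country"] = str(row.get("country", "")).upper()
    return row


HEAVY_LIFTING_SQL = """
-- Complex, set-based work that suits the warehouse engine
CREATE TABLE sales_summary AS
SELECT region, product, SUM(amount) AS total_amount
FROM   replicated_sales
GROUP  BY region, product
"""


def run_heavy_lifting(warehouse_connection) -> None:
    """Run the aggregation inside the warehouse, once the data is there."""
    cursor = warehouse_connection.cursor()
    cursor.execute(HEAVY_LIFTING_SQL)
    warehouse_connection.commit()
```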

Attunity hasn’t just updated its replication engine; it has also updated its user interface, and when the company talks about this being “modern” it really means it. It’s not something I can easily describe in words, but the easy-to-use “Click-2-Replicate” design is worth seeing. There’s also a new monitoring capability. More notably, in version 3.1 the company has introduced a WAN-optimised data transfer protocol. This works by recognising the type of data being transported: large tables are automatically compressed, while small tables and CDC records are batched together to make network transfer more efficient.
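
The compress-or-batch decision can be illustrated with a short, hypothetical sketch; the thresholds and function names below are my own assumptions rather than details Attunity has published.

```python
import zlib

# Assumed thresholds; the real values are not public.
LARGE_PAYLOAD_BYTES = 4 * 1024 * 1024  # payloads this big get compressed
BATCH_TARGET_BYTES = 256 * 1024        # how much to accumulate before sending


def send_over_wan(payloads, send):
    """payloads: iterable of (kind, data) pairs, where kind is 'table' or
    'cdc' and data is bytes; send() ships one buffer across the network."""
    pending, pending_size = [], 0
    for kind, data in payloads:
        if kind == "table" and len(data) >= LARGE_PAYLOAD_BYTES:
            send(zlib.compress(data))  # large table loads: compress
        else:
            pending.append(data)       # small tables and CDC records: batch
            pending_size += len(data)
            if pending_size >= BATCH_TARGET_BYTES:
                send(b"".join(pending))  # one round trip carries many records
                pending, pending_size = [], 0
    if pending:
        send(b"".join(pending))  # flush whatever is left
```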

This last feature is especially interesting, not just in its own right but also when one considers that one of the major arguments that Cisco made when it acquired Composite Software was that transport across the network needs to be optimised. Of course, that was for data virtualisation (or federation—a market from which Attunity has withdrawn) rather than replication, but the same argument applies.

Just to go back to the streaming engine for a moment (which Attunity calls TurboStream): in retrospect this is a pretty obvious use of streaming technology. So why didn’t I think of it? And why haven’t other vendors? Hats off to Attunity.