Oracle in-memory

Oracle has announced an in-memory option for the 12c version of the Oracle Database. It is expected to be available in the early part of next year, but no official release date has been announced.

The idea is that data will be stored on disk in a conventional row format, with a second copy of the data held in columnar format in memory. This means that you won't have to define analytic indexes for the data on disk, as the columns act as self-indexing constructs. That in turn means less administration and tuning for the data on disk, and analytic performance should improve significantly (because the data is in memory) while OLTP performance should also improve (because there are fewer indexes to maintain).
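
To make the dual-format idea concrete, here is a minimal sketch in Python. It is purely illustrative (the table, the column names and the data structures are mine, not Oracle's): the same rows are held once in row format, as on disk, and once as per-column arrays, as in memory, so an analytic scan reads just the columns it needs without any index.

```python
# Illustrative sketch of the dual-format idea, not Oracle's implementation.
rows = [  # row format, as stored on disk: one record per tuple
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "APAC", "amount": 75.5},
    {"order_id": 3, "region": "EMEA", "amount": 210.0},
]

# Columnar copy, as held in memory: one array per column.
columns = {
    col: [row[col] for row in rows]
    for col in ("order_id", "region", "amount")
}

# An analytic query (total EMEA sales) scans just two column arrays;
# no analytic index is needed because each column is already a
# tightly packed, directly scannable structure.
total = sum(
    amount
    for region, amount in zip(columns["region"], columns["amount"])
    if region == "EMEA"
)
print(total)  # 330.0
```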

Oracle promises that there will be no requirement to change any code and that existing applications will run unchanged in the new environment.

So, what’s not to like?

Well, the first thing is that all that memory is going to be relatively expensive. More importantly, you will either need to deploy a lot of it, or the DBA will need to specify which columns of a table are included in, or excluded from, memory (this will be an option; the default is to load the full table). But that is a manual step: the DBA must know in advance what queries will come into the database in order to set the right columns up in memory. In other words, what you gain on indexing administration you lose on memory administration. Secondly, the DBA must allocate a set amount of memory for the in-memory tables, and once that memory fills up, that's it: you can't load any more tables. You therefore have to statically set the amount of space you need for your in-memory objects, whereas what you would really like is a dynamic way of re-allocating memory as required. That is a further administrative burden.
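
The administrative point is easiest to see in a sketch. The toy class below is hypothetical (the name InMemoryColumnStore and its behaviour are my illustration, not Oracle's API): it models a pool whose size is fixed up front, so a load that would exceed the budget is simply refused rather than the pool growing or shrinking on demand.

```python
class InMemoryColumnStore:
    """Hypothetical model of a statically sized in-memory area."""

    def __init__(self, budget_bytes):
        # The budget is fixed up front, mirroring the static
        # allocation the DBA must choose in advance.
        self.budget = budget_bytes
        self.used = 0
        self.tables = {}

    def load(self, name, size_bytes):
        # No dynamic re-allocation: a load that would exceed the
        # budget fails outright and the DBA must re-plan.
        if self.used + size_bytes > self.budget:
            raise MemoryError(f"cannot load {name}: in-memory area full")
        self.tables[name] = size_bytes
        self.used += size_bytes


store = InMemoryColumnStore(budget_bytes=8 * 2**30)  # 8 GiB, set statically
store.load("sales", 6 * 2**30)
try:
    store.load("customers", 3 * 2**30)
except MemoryError as exc:
    print(exc)  # cannot load customers: in-memory area full
```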

The other question you might ask is how this in-memory column store relates to the hybrid columnar compression (HCC) used in Oracle Exadata. It turns out that the in-memory compression is different from HCC. You can still have tables stored on disk using HCC, but when they are loaded into memory the data has to be decompressed, the table broken apart into columns, and each column recompressed using the new in-memory compression algorithms. Frankly, and to use an old English term: that sounds barmy.
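
Purely to illustrate the round trip being described, here is a sketch of that load path. The compression calls are generic stand-ins (zlib and pickle), not Oracle's HCC or in-memory algorithms, and the function name is mine.

```python
import pickle
import zlib


def load_into_memory(hcc_blob):
    """Illustrative stand-in for the load path described above:
    decompress the disk copy, split it into columns, then
    recompress each column with a different algorithm."""
    # 1. Decompress the on-disk (HCC-style) representation.
    rows = pickle.loads(zlib.decompress(hcc_blob))

    # 2. Break the table apart into per-column arrays.
    columns = {col: [r[col] for r in rows] for col in rows[0]}

    # 3. Recompress column by column with the in-memory scheme
    #    (here just zlib at a different level, as a placeholder).
    return {
        col: zlib.compress(pickle.dumps(values), 1)
        for col, values in columns.items()
    }


# The on-disk copy, compressed once as whole rows...
disk_blob = zlib.compress(pickle.dumps(
    [{"id": 1, "region": "EMEA"}, {"id": 2, "region": "APAC"}]
))
# ...is fully unpacked and repacked on its way into memory.
in_memory = load_into_memory(disk_blob)
```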

The bottom line is that there are some clear performance advantages to this use of in-memory technology, but there really don't appear to be any administrative savings (administration may even get worse), despite claims to the contrary. Moreover, for most companies, which cannot afford inordinate amounts of memory, the need to know in advance which tables to put in memory runs contrary to the whole thrust of the analytic world towards self-service BI: you can't have self-service if you are limited by IT's implementation.

I have to say that while the top-line story sounds good, I am less impressed when you look under the covers. Oracle does not appear to have done the in-depth re-engineering that you would really like to see behind this sort of feature. No doubt that will come in due course, but from what we know now this is in contrast to, for example, IBM's BLU Acceleration. There, IBM seems to have really gone down into the weeds of the technology to make sure not just that it works but that the different elements of BLU Acceleration complement one another and do not take away with one hand what they give with the other.