I'm always a little cynical about benchmarking, which (even if done honestly and well) doesn't always deliver what people might expect. For instance, the infrastructure used for a sector-winning benchmark can tell you as much about a product's potential shortcomings as about its performance. Did the winning performer need vast amounts of memory (so it might be a real memory hog)? Or lots of fast disks, an extreme-spec CPU, a huge cluster, or a really high-performance network connecting everything? If you don't have easy access to the technology used in its benchmark, the benchmark winner might not be for you.
And what benchmarks aren't available can be informative too. Would it be interesting, for example, if the price/performance benchmark winners stopped competing at the very large scale, leaving just something like mainframe DB2 in the race?
I was reminded of all this when Oninit, a distributor for Informix (a high-performance enterprise database now owned by IBM), brought the apparent lack of official TPC benchmarks for Informix to my attention. Leaving aside my somewhat cynical view that TPC benchmarks largely show how good a product is at processing the TPC job mix (heaven forfend that anyone has ever optimised their product for benchmark processing), the suggestion is that IBM is more interested in benchmarking DB2 than Informix.
Whatever the truth of that (and IBM might reasonably take the view that benchmarking on real workloads is more informative), it is interesting that an Informix customer felt the need to do his own TPC benchmarks. Eric Vercelletto performed a "TPC-like" test on a $1,200 Linux box and got some pretty impressive results (see here - PDF file).
I'm not sure how strictly comparable these results will be with "proper" TPC benchmarks but they surely suggest that Informix shouldn't be overlooked, even for comparatively modest installations.
Databases are, to an extent, commoditised these days and there is a tendency for people not to look beyond Oracle, DB2 and SQL Server (and, perhaps, MySQL) - yet a lot of business-critical data is still held in and processed from IBM's IMS, say, or Sybase ASE (Adaptive Server Enterprise), or embedded databases like Pervasive PSQL - or, of course, IBM's Informix.
With model-driven IA (Information Architecture) approaches to data processing, what you use to physically implement your logical data architecture is a free choice, based on a particular product's vendor (or community) support, ability to scale, pricing model and so on - one of the benefits of producing a logical information architecture is that it frees you to make such choices. And the "market leaders" aren't always the best choice for a business' particular circumstances.
Perhaps the "big data" kerfuffle is starting up the database wars once again. If big data just means "processing much more data than you can at the moment" (a moot point - see here - but it'll do for now), then industrial-strength databases such as IMS Fast Path and Informix are probably already coping with what you think of as large data volumes. There is something to be said for "tried and tested", even in the fashion-conscious world of IT!