Some thoughts on the Internet of Things

Content Copyright © 2013 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt

The Internet of Things is the world in which every device is an Internet server and the complete “system of systems of systems…of devices” works by means of devices and systems talking to each other and making autonomic decisions (see this paper for examples of what this means), using Semantic Web protocols, in near, and increasingly nearer, real-time.
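As a loose illustration of devices talking to each other and deciding autonomically, here is a minimal publish/subscribe sketch. All the names (the broker, the topics, the thermostat) are hypothetical, chosen for the example; this is not any specific Semantic Web or messaging API.

```python
# Minimal publish/subscribe sketch: one device publishes a reading,
# another reacts autonomously. All names are illustrative only.

class Broker:
    """A trivial in-process message broker."""

    def __init__(self):
        self.subscribers = {}  # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        # Deliver the message to every device subscribed to the topic.
        for callback in self.subscribers.get(topic, []):
            callback(message)


broker = Broker()
actions = []

def thermostat(reading):
    # An autonomic decision: no human in the loop, the device
    # reacts directly to another device's published reading.
    if reading["temp_c"] > 30:
        actions.append("cooling_on")

broker.subscribe("sensor/temperature", thermostat)
broker.publish("sensor/temperature", {"temp_c": 35})

print(actions)  # ['cooling_on']
```

In a real Internet of Things deployment the broker would be a distributed, real-time messaging layer rather than an in-process dictionary, but the shape of the interaction is the same.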

This is a scalability problem beyond anything most organisation-level IT has had to cope with. However, I firmly believe that extreme scalability can be compromised safely and economically for small-scale solutions, if necessary, by switching off expensive extreme-scalability features (which can always be switched on again). Small-system approaches, by contrast, fundamentally compromised in the interests of low cost, often become increasingly expensive, complex and unreliable as people try to scale them up – always assuming that the attempt to scale up works at all. Architectures have to be designed for extreme scalability and/or real-time operation from the first if they are to be effective, in real time, at scale.
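The asymmetry I am describing – scaling down is a reversible configuration change, scaling up is a redesign – can be sketched as a configuration profile. The profile, its fields and its values are all hypothetical, not any real product's settings:

```python
# Hypothetical sketch: scaling down an extreme-scalability design is
# just switching off costly features; the architecture is unchanged,
# so switching them back on restores the original capability.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class ScalabilityProfile:
    replicas: int              # copies of each service for availability
    partitions: int            # units of horizontal data distribution
    synchronous_commit: bool   # durable, but expensive, writes

# Designed for extreme scale from the first.
EXTREME = ScalabilityProfile(replicas=5, partitions=64,
                             synchronous_commit=True)

# A small deployment: dial the expensive features down, reversibly.
small = replace(EXTREME, replicas=1, partitions=4,
                synchronous_commit=False)

# Scaling back up is simply restoring the original profile.
restored = replace(small,
                   replicas=EXTREME.replicas,
                   partitions=EXTREME.partitions,
                   synchronous_commit=EXTREME.synchronous_commit)

assert restored == EXTREME
```

The point is that `restored` is identical to `EXTREME`: nothing structural was lost in scaling down. A small-system design scaled *up* has no such round trip – the missing replication and partitioning have to be architected in after the fact.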

I’ve often suggested that we are probably going to need radically new approaches, in IT terms, that assume massive data and user volumes, and real-time processing of data generated from connected devices and users, as a matter of course – not a scale-up of technology developed for a less demanding world. I talk a bit about this, in connection with RTI data-centric architectures, here and here.

I think that the new SymKloud Cloud Server from Kontron may be, like RTI, potentially part of the solution for the Internet of Things, rather than part of the problem, although I do realise that there’ll probably be many choices of technology in this space, if I’m right about it being an emerging opportunity. The Internet of Things potentially affects all businesses and all industry sectors and I think most players are thinking about the implications; especially, perhaps, IBM (with its Smarter Planet initiatives), TIBCO and RTI.

SymKloud offers a scaled-down carrier-grade communications infrastructure for cloud, rather than a scaled-up company-grade solution. It seems to me that SymKloud could well offer a more robust basis for next-generation environments as part of a whole architecture (it can’t solve all the problems by itself, but it might help avoid building advanced systems on sand). If SymKloud runs out of steam, Kontron has tried and tested upgrade options available.

The difference between a carrier-grade telco infrastructure and ordinary IT is as much a cultural as a technological one. In the telco world, we expect the ring-tone always to be there; in IT we tolerate unavailability and poor performance, and treat ‘high availability’ as a special case. As I’ve said, technologies developed in a world where ‘near enough is (usually) good enough’ are unlikely to scale easily to a world where telco carrier-grade service levels are the norm; whereas it should be relatively cheap to scale down expensive high-availability technologies by compromising service levels in a planned, appropriate and reversible way. And a scaled-back solution based on a high-availability architecture designed for real time will be easy to scale up again.