User Experience 2.1

Content Copyright © 2009 Bloor. All Rights Reserved.
Also posted on: The Norfolk Punt

Some time ago I had a bit of a rant about how the overall user experience available on the web today simply doesn’t match the expectations of Web 2.0.

Slow response times, or, even worse, inconsistent response times, kill the “cognitive flow” that is presumably where the value of all this “Web 2.0” stuff resides. If the user experience of a website is painful, you go somewhere else; and if there is nowhere else, you do as little as possible and leave as soon as possible. If you work all day, you “have” to use online banking to pay your bills; but if you don’t enjoy the experience, you won’t be encouraged to think about innovative ways to manage your finances, and you probably won’t be tempted by the bank’s offer to sell you a loan (and go through even more pain) either.

And there’s an even more worrying scenario. Companies are widely adopting web channels for internal communications. If the user experience there is frustrating and annoying, internal communications will be disrupted and the effectiveness of the company compromised. How many enthusiasts for Web 2.0 or Process 2.0 have thought about the user experience it provides, and about how resilient their intended (presumably good) user experience will be? Have they thought about the consequences if “User Experience 2.0” turns out to be dire?

At the end of the last century, this was recognised and addressed at the server end by companies such as F5 and Citrix, with caching and compression and by moving copies of servers out to the edge of the network. However, you still need to get the server performance design right: a piece of stupid SQL in your database access code can put the end user experience on hold for minutes, or even hours.

Now, however, Ed Robinson, CEO of Aptimize Software, has been telling me that 80–90% of the time today the bottleneck is in the browser. Web pages were simply never designed to carry Web 2.0 loads: embedded pictures, videos, smart applets and so on. If web applications are badly designed, every time you reload a page there can be dozens of interactions with the server as the browser checks for the latest versions of things that perhaps haven’t changed or aren’t relevant at the moment.
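To make that concrete, here is a rough Java sketch of the kind of “has it changed?” round trip a browser makes for each cached object on a reload; the URL and the cached date are placeholders of my own, not anything from Aptimize.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of browser-style cache revalidation: even when the answer is
// "nothing changed" (HTTP 304), each cached image, script or stylesheet
// still costs a full round trip to the server.
public class RevalidationCheck {
    public static void main(String[] args) throws IOException {
        URL resource = new URL("https://example.com/styles/site.css"); // placeholder URL
        HttpURLConnection conn = (HttpURLConnection) resource.openConnection();
        // What a browser sends when it already holds a cached copy:
        conn.setRequestProperty("If-Modified-Since", "Tue, 01 Sep 2009 00:00:00 GMT");

        long start = System.nanoTime();
        int status = conn.getResponseCode();          // 304 Not Modified if unchanged
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Status " + status + " after " + elapsedMs + " ms");
        // Multiply that elapsed time by every object the page revalidates on
        // reload and the browser-side bottleneck becomes visible.
        conn.disconnect();
    }
}
```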

One aspect of this to consider is the laws of physics. The speed of light in glass fibre is an absolute limit (well, until we get quantum entanglement into routine web applications) and every switch or router in the network path adds more delay. A round trip to a server on the far side of the world takes appreciable time; make too many of them and the user experience turns out to be worse than swimming in treacle (in fact, swimming in treacle apparently isn’t too bad, by comparison, because although you meet more resistance, you have more resistance to push against).
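As a back-of-the-envelope illustration of the physics (the distance, fibre speed and request count below are illustrative assumptions of mine, not measured figures):

```java
// Rough estimate of what distance alone costs, before any server,
// router or rendering time is added.
public class RoundTripEstimate {
    // Light travels at roughly 200,000 km/s in glass fibre (about two thirds of c).
    static final double FIBRE_SPEED_KM_PER_S = 200_000.0;

    static double roundTripMillis(double oneWayKm) {
        return (2 * oneWayKm / FIBRE_SPEED_KM_PER_S) * 1000.0;
    }

    public static void main(String[] args) {
        double londonToSydneyKm = 17_000;   // great-circle distance, ignoring real cable routes
        int requestsPerPage = 60;           // a busy Web 2.0 page, loaded naively

        double rttMs = roundTripMillis(londonToSydneyKm);
        System.out.printf("Best-case round trip: %.0f ms%n", rttMs);          // about 170 ms
        System.out.printf("%d sequential round trips: %.1f s%n",
                requestsPerPage, requestsPerPage * rttMs / 1000.0);           // about 10 s
    }
}
```

Browsers do fetch in parallel, of course, but the point stands: every avoidable round trip over that kind of distance is user experience thrown away.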

Of course, you could always design web applications better; there are established “good practice” principles for web application design. But how many web application developers are seasoned IT professionals with good design skills, even if they are very productive with scripting? And how many SMEs could afford to employ an experienced web development team anyway? What you want is a tool which automatically refactors web designs that were written to be impressive and to work well on a local network, so that they deliver a consistently good user experience in the real world. And you want a user experience monitoring tool, so that you can confirm that you actually have a problem and check whether, in fact, you’re fixing it.

Ed Robinson has the first of these tools for sale (and is doing something about the second) and it seems to be attracting lots of interest; Microsoft, for example (where Ed has his background), is using it to make the SharePoint user experience more reliably pleasant. But his sweet spot is with medium-sized companies at present; small companies want to spend as little as possible at the moment and espouse DIY tuning (although I’d question whether web programming is their real business, in most cases), and big companies can put a professional programming team onto the problem (or, possibly, buy Ed’s company).

The Aptimize concept is simple, which is good. It merges the files making up a web page into fewer files, which means fewer HTTP requests and faster page load times. Aptimize claims to double the speed of SharePoint, ASP.NET or Linux Apache intranets and websites. Even more interestingly, Aptimize is starting to work within a COBIT-style Plan/Organise, Acquire/Implement, Deliver/Support, Monitor/Evaluate IT governance process, if web developers bother with such things.
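For what it’s worth, here is a hand-rolled sketch of the underlying idea, concatenating a few stylesheets into one file so that a page needs one request instead of several. The file names are invented, and Aptimize does this automatically (and does considerably more) at the server; this is just the concept by hand.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Merge several stylesheets into one, trading three HTTP requests for one.
public class MergeStylesheets {
    public static void main(String[] args) throws IOException {
        List<Path> sheets = List.of(
                Path.of("css/reset.css"),     // hypothetical input files
                Path.of("css/layout.css"),
                Path.of("css/theme.css"));

        StringBuilder merged = new StringBuilder();
        for (Path sheet : sheets) {
            merged.append("/* ").append(sheet).append(" */\n");
            merged.append(Files.readString(sheet)).append('\n');
        }

        // One file, one request, one cache entry, instead of three of each.
        Files.writeString(Path.of("css/site.merged.css"), merged.toString());
    }
}
```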

Ed reports hard metrics in support of his claims: Shopzilla, he says, found that speeding up its pages by 5 seconds resulted in a 25% increase in page views and a 7–12% increase in revenue. Marissa Mayer (VP of Search Product and User Experience at Google) is reported as describing experiments showing that a half-second delay caused a 20% drop in traffic to their site and also killed user satisfaction (although commentators raise some interesting points around the interpretation of her results).

Aptimize’s product seems useful and, I’d suggest, worth taking a look at. But is it going to be enough? Well, I do worry that bad practice is bad practice and that just addressing its symptoms can institutionalise it. Moreover, once you fix one bottleneck another usually appears, and it seems possible that Aptimize mostly fixes one kind of problem with one style of web development, albeit both popular ones.

So, I had a chat with a couple of other players in the user experience space. Bernd Greifeneder of dynaTrace, for example, suggests that people no longer program JavaScript but “squeeze GUI components into ugly AJAX code on the client”. Bernd is also concerned about clumsy use of the Hibernate object-relational mapping framework: “Most customers who use O.R.-mapping (e.g. hibernate) speed development with the O.R. mapper, but then have transactions with hundreds of generated SQL statements in it, of which 90% can be easily eliminated. DBAs tune SQL statements well, but what DBAs don’t see is the access layer to the database, and architects are surprised to see with dynaTrace what’s going on there”. Bernd says he has heard a chief architect complaining that he wasted 5 hours just trying to figure out what one click in the browser caused to happen on the server side; and however much time is wasted in the client rendering complex web pages, inefficient server code can still be a problem.
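If you have never met the pattern Bernd describes, a minimal Hibernate sketch shows it; the Customer and Order entities here are hypothetical, invented purely for illustration, not taken from dynaTrace or any customer.

```java
import jakarta.persistence.*;   // javax.persistence on older stacks
import org.hibernate.Session;
import java.util.List;

@Entity
class Customer {
    @Id @GeneratedValue Long id;
    @OneToMany(mappedBy = "customer", fetch = FetchType.LAZY)
    List<Order> orders;
}

@Entity @Table(name = "orders")
class Order {
    @Id @GeneratedValue Long id;
    @ManyToOne Customer customer;
}

class NPlusOneDemo {
    // The "hundreds of generated SQL statements" pattern: one query for the
    // customers, then one extra SELECT per customer as soon as the lazy
    // collection is touched inside the loop.
    static long slowCount(Session session) {
        List<Customer> customers =
                session.createQuery("from Customer", Customer.class).list();
        long total = 0;
        for (Customer c : customers) {
            total += c.orders.size();   // fires SELECT ... FROM orders WHERE customer_id = ?
        }
        return total;
    }

    // A fetch join collapses the whole traversal into a single generated
    // statement, which is exactly the fix a SQL-only view never suggests.
    static long fastCount(Session session) {
        List<Customer> customers = session.createQuery(
                "select distinct c from Customer c join fetch c.orders",
                Customer.class).list();
        long total = 0;
        for (Customer c : customers) {
            total += c.orders.size();   // already in memory, no extra SQL
        }
        return total;
    }
}
```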

Bernd also points out that monitoring real “End User Experience” may not be as straightforward as you might think (dynaTrace integrates with Coradiant for this). Sometimes, all that gets measured is network HTTP request performance or the time taken to load a document. But what about the time the web browser needs to interpret (or, in the case of Chrome, execute) and render that page (which is, as I see it, what Aptimize is mostly addressing), or to update a complex page by loading significant XML content from the server, where rendering takes a lot longer than requesting the content?

Bernd also mentioned that (as well as preparing for the imminent launch of dynaTrace 3.1) dynaTrace is in the process of launching “dynaTrace AJAX edition” on dynaTrace Labs for measuring and diagnosing AJAX applications. “It’s available for early birds, and will be updated in very short cycles”, and it focuses on AJAX frameworks and Web 2.0 architecture and performance, rather than debugging.

I also talked to Jim Foley and Alan West (CEO and CTO, respectively) of eoLogic; its eoSense is a debugger and profiler for Java EE application-server-based developments, although its analysis extends out to client activity. It captures framework, pattern, anti-pattern and best-practice knowledge in a codified form that is used to build abstracted, derived models as the application code is exercised. These models are then analysed, as they are created, to detect issues and to offer root-cause analysis and possible refactoring strategies. As you might expect, Jim’s experience (presumably mostly with eoSense customers) is that performance problems still often relate to the sort of architectural and design issues eoSense identifies and addresses, not just to client page-performance bottlenecks.

Jim emphasises use of eoSense throughout the lifecycle (or continuously with Agile development), starting at the early stages—fundamental architectural and design issues, if neglected, are the most expensive to deal with in production systems. However, he also says that “the largest portion of our market usage at the moment is in the latter stage testing of applications as performance and other issues manifest themselves as a result of poor architectural QA earlier in the development process”. This suggests that the testing and defect-removal marketplace is not terribly mature and bodes well for adoption of Aptimize’s approach!

Alan says that the detection of poorly designed web pages is not a major focus for eoSense at present: “to detect some of the problems fixed by Aptimize we would probably have to parse and analyse the contents of the actual webpages and we don’t do that, although it is possible. On the plus side we have recently added detection of non-compressed pages and detection of large requests and responses (for Infosys, so these are WebSphere only for the current release)”.

Alan does share some of my concerns that the Aptimize approach lets you fix user experience issues without understanding them at a deep level; which, from one point of view, is great, but it does risk institutionalising bad practice. He points out: “what if you are masking some other underlying problem—you may have speeded things up by serving your web pages more rapidly but your real problem might be the database. It’s important to look at the whole picture [and] you are, to some degree, making yourself dependent on Aptimize”.

There are a lot of players with something to say about the “User Experience”—I haven’t mentioned Amberpoint, for example, and I’m sure there are others. It is important to match user experience tools to the environment you are developing for and to the issues you have.

In the end there are three things to remember:

  • Choose the right metrics and concentrate on the bottlenecks your users actually experience. No matter how many other people’s users suffer latency-related Web 2.0 browser delays, if your users have to wait 5 minutes while the server retrieves their details from a badly designed database, fix that first. Good metrics are vital, because they let you identify the real bottlenecks.
  • Focus initially on holistic user satisfaction (from surveys, if no better indicators can be found), not just on technical performance metrics. If your users hate using your website because it uses obscure language and has a dreadful colour scheme, telling them how fast it is probably won’t keep them coming back to it.
  • When you do discover the sources of your users’ unhappiness, there are technologies that will help you monitor and remove them. They really are worth using! Nevertheless, fixing the root causes as well as the symptoms is probably more cost-effective in the longer term.