Kindle and Surface: new interfaces. Accessible? Usable?

Written By: Peter Abrahams
Content Copyright © 2007 Bloor. All Rights Reserved.
Also posted on: Accessibility

Recently Microsoft announced Surface, a tabletop computer interface, and Amazon announced Kindle, an electronic book reader with a digital paper interface. The question I am asking myself is: do these new interfaces provide new opportunities or new barriers in terms of usability and accessibility?

Looking first at Surface, the new aspects of the interface include:

  • It is a tabletop and therefore horizontal rather than vertical.
  • You can put things on it and they can interact with the surface (e.g. a digital camera can share photos).
  • The main interface device is your hand. Pointing and gesturing are the equivalent of, and more than, the mouse.
  • More than one person can interact with the Surface at the same time.

The potential advantages for people with disabilities include:

  • The big screen and the strong support for touch and gesture mean it could open up new opportunities for users who need screen magnification. For example, putting all five fingers on the screen and pushing them apart would magnify the screen with the hand as the focal point, while pulling the fingers together would reduce the image. Moving all fingers across the screen would move the area of the window visible on the screen, whilst putting one finger down and dragging would be the equivalent of dragging the mouse. These two motions could be combined, with the left hand (five fingers) moving the whole screen and the right hand (one finger) dragging an object around.
  • The idea of touching an object and moving it is much more intuitive than moving a mouse around the desk to move a cursor. This may make the interface much easier for people with cognitive disabilities. It may also help people who have never used a computer before; the mouse seems like second nature to many of us, but it cannot be described as natural.
  • People who suffer from RSI find using the mouse painful because of the small movements, the accurate positioning, the tension of holding the mouse and the continual left clicks. Surface could remove all these problems: larger-area movements, easy zooming in to remove any need for millimetre-accurate positioning, no mouse to hold, and left clicks replaced by larger hand movements.
  • People with limited control of their upper limbs may find the larger movements easier to control.
  • People who can only use their legs may be able to interact with Surface more easily by sitting above it and using their feet.
  • The ability to put objects on the Surface and for them to interact with it raises some interesting opportunities. It is easier to put a smart card on the surface than it is to position it exactly into a small reader slot. The cards could hold information about the user.
  • A slightly more way-out idea is a box that sits on the Surface and whose top changes shape depending on the colours underneath. A blind user could then push the device around and feel the shape of, say, a map on the screen; if this were combined with a text-to-speech facility that read out the name of the road or the site of interest, we would have a map that is accessible both to a blind user and to someone with 20/20 vision.
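The five-finger magnification gesture described above is essentially a pinch-zoom with the hand's centroid as the focal point. As a minimal sketch of that idea (not Surface's actual API, which Microsoft has not published in this detail; all names here are illustrative), the zoom factor can be taken from the change in finger spread, and the view offset adjusted so the point under the hand stays put:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class View:
    scale: float = 1.0
    offset_x: float = 0.0  # screen position of the document origin
    offset_y: float = 0.0

def spread(points):
    """Return the centroid of the touch points and their mean
    distance from it (the 'spread' of the hand)."""
    cx = sum(x for x, _ in points) / len(points)
    cy = sum(y for _, y in points) / len(points)
    return (cx, cy), sum(hypot(x - cx, y - cy) for x, y in points) / len(points)

def pinch_zoom(view, start_points, end_points):
    """Scale the view by the change in finger spread, keeping the
    document point under the hand's centroid fixed on screen."""
    (sx, sy), s0 = spread(start_points)
    (ex, ey), s1 = spread(end_points)
    factor = s1 / s0
    # Pin the focal point: the document point that was under the start
    # centroid ends up under the end centroid after scaling.
    view.offset_x = ex - factor * (sx - view.offset_x)
    view.offset_y = ey - factor * (sy - view.offset_y)
    view.scale *= factor
    return view
```

Doubling the spread of the fingers doubles the scale, and because the offset is recomputed around the centroid, the content under the hand does not jump; the one-finger drag described above would simply translate `offset_x`/`offset_y` without touching `scale`.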

What I am waiting to find out is how more mundane interactions are done. For example, is there a standard keyboard that can be used for entering text, and what about speech input and output?

The real question is: now that we have a new input paradigm, can we find ways to use it to make computing more accessible?

I have written more on Surface than I expected, so I will leave Kindle to your imagination for a few days until I can come up with some thought-provoking suggestions.