What is the use of a signing avatar?

Written By: Peter Abrahams
Published:
Content Copyright © 2008 Bloor. All Rights Reserved.

The Economic and Social Research Council (ESRC) has just run a week-long Festival of Social Science. One of the events was an afternoon discussion on signing avatars (avatars animated to produce British Sign Language (BSL) in order to aid communication with the deaf community).

The discussion split into two parts:

  • Presentations on the state of the art and some examples of its use
  • Discussion on the quality of the signing and what it could be usefully used for

It provided a fascinating insight into the challenges faced by developers of assistive technology (AT) in making sure that what is produced is really useful for the target audience.

It is essential to understand that there is a distinct difference between the Deaf and other disability groups. The Deaf are a ‘community’ because they communicate in their own language (BSL). I have been chastised, quite rightly, for talking about ‘disabled communities’, but deaf people undoubtedly see themselves as part of a community with clubs, societies, a language (with dialects) and a unique and vibrant culture.

The main research on signing avatars has been done by Prof. John Glauert at the University of East Anglia. This research has now been used by the BBC to produce educational materials for deaf children.

The conversion of written English into a signing avatar has two separate challenges:

  • The first is that the syntax of BSL is not the same as that of English, so word-by-word conversion is not appropriate.
  • The second is to create an avatar that reflects the complexity of the movements used to ‘speak’ good quality BSL.

Most of the research has been into the second of these. Last year IBM ran an Extreme Blue project, called Say It Sign It (SiSi), that showed the possibilities of converting spoken English into BSL. The project used voice recognition technology, the output of which was analysed and converted into BSL syntax. This syntax was then used to instruct the University of East Anglia avatars to sign in BSL. As yet there is no indication from IBM as to whether this will be turned into a publicly available product.
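
To make the shape of such a pipeline concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the names (GLOSS_LEXICON, english_to_bsl_glosses, sign), the gloss ordering, and the whole-sentence lookup are illustrative assumptions, not IBM's or UEA's actual design.

```python
# Toy mapping from English sentences to ordered BSL "glosses" (written
# labels for signs). Note the reordering: real conversion must restructure
# into BSL syntax, not substitute word by word. The ordering shown is
# illustrative only, not verified BSL grammar.
GLOSS_LEXICON = {
    "hello": ["HELLO"],
    "the next train is late": ["TRAIN", "NEXT", "LATE"],
}

def english_to_bsl_glosses(sentence: str) -> list[str]:
    """Convert an English sentence to an ordered list of BSL glosses.

    A real system needs full parsing and grammar transfer; here we
    only look up whole sentences in a toy lexicon.
    """
    key = sentence.lower().rstrip(".")
    glosses = GLOSS_LEXICON.get(key)
    if glosses is None:
        raise ValueError(f"no BSL rendering known for: {sentence!r}")
    return glosses

def sign(glosses: list[str]) -> None:
    """Stand-in for driving the avatar: play one animation per gloss."""
    for gloss in glosses:
        print(f"avatar plays sign animation: {gloss}")

# Pipeline: speech recognition would produce the text -> glosses -> avatar.
recognised_text = "The next train is late"   # pretend ASR output
sign(english_to_bsl_glosses(recognised_text))
```

Even this toy version makes the division of labour clear: the hard linguistic work sits in the middle stage, while the avatar itself only consumes an ordered list of signs.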

The concentration on the avatar is important because, before avatars will be accepted by deaf people, their signing needs to be of high quality and easy to understand. The most obvious example of the improvements that have been made in avatars is the inclusion of facial expressions, which are essential to the full appreciation of the language.

The scale of the challenge was graphically demonstrated by one participant who used his whole body to describe a horse race in BSL. He agreed that he did this to be provocative and make a point, but the contrast was obviously like that between a hearing person listening to a novice reading word by word from a teleprompter and listening to a professional Shakespearean actor.

Having said that, the avatars on display obviously could do the job, and do it better than a novice reading from a teleprompter. The best examples were for children, where cartoon characters signed. These examples were interactive (the children could develop their own stories) but also immersive, as the signing was not an adjunct to the action but an integral part of the story.

The discussion that followed was heated and a challenge for the organisers who had to ensure that the interpreters (BSL to/from English) only had to interpret one speaker at a time. The main arguments against the avatars were:

  • The quality was not good enough, and there was a concern that the deaf community was being fobbed off with something sub-standard.
  • Using avatars might create a standard BSL which would be less expressive and complete.
  • The money spent on the research could be better spent on more live interpreters.
  • Videos of real signers would be more useful.
  • Deaf people can read English (although it is a second language for them), so avatars may add little value for short messages.

The main arguments for the signing avatars were:

  • This is still early research and the technology will improve.
  • The research identifies the real challenges and requirements.
  • The children exposed to the cartoon avatars really liked them and responded well to them; this could be particularly useful for adding signing to video game characters.
  • In an interactive context, the avatars can do what video of real signers cannot: signs from an avatar can be sequenced to present changing information, which is not possible by splicing video clips together (see the sketch after this list).

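To illustrate that last point, here is a toy Python sketch of sequencing per-sign animations to announce changing information, such as a train delay. The sign names, the gloss template, and the announce_delay function are all hypothetical assumptions on my part, not drawn from any real signing system.

```python
# Pretend library of per-sign avatar animations, keyed by gloss. With
# pre-recorded video you would need a separate clip for every possible
# message; with an avatar, a new message is just a new sequence of
# existing per-sign animations.
ANIMATIONS = {gloss: f"<anim:{gloss}>" for gloss in
              ["TRAIN", "PLATFORM", "DELAY", "MINUTE",
               "ONE", "TWO", "THREE", "FOUR", "FIVE", "TEN"]}

NUMBER_GLOSS = {1: "ONE", 2: "TWO", 3: "THREE", 4: "FOUR",
                5: "FIVE", 10: "TEN"}

def announce_delay(platform: int, minutes: int) -> list[str]:
    """Build the animation sequence for one announcement by filling
    number slots in a fixed (and purely illustrative) gloss template."""
    glosses = ["TRAIN", "PLATFORM", NUMBER_GLOSS[platform],
               "DELAY", NUMBER_GLOSS[minutes], "MINUTE"]
    return [ANIMATIONS[g] for g in glosses]

# The same template serves every combination of platform and delay.
print(announce_delay(platform=3, minutes=10))
print(announce_delay(platform=5, minutes=2))
```
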
Based on these views, the final question was what, if anything, signing avatars could or should be used for. The consensus of the deaf audience was:

  • Interpreting (no): this is currently not possible, due to the complexities of interpretation between English and BSL (think of the “Franglais” produced by Google’s translation). Also, interpreting is normally two-way, and there have been no attempts to get computers to understand BSL.
  • In-Vision (no): in-vision signing (the signer shown in a corner of the television picture) is currently easier and cheaper to do with human interpreters than with avatars, and of better quality.
  • Short information clips (probably not): examples suggested were weather forecasts and train announcements, where the avatars’ ability to sequence changing information may be useful. The problem in both cases is that written English is cheaper, useful to a wider audience, and just as easy for most deaf people to follow.
  • Translation of web sites (possibly): certain web sites might lend themselves to this sort of translation, and the deaf community would appreciate not having to read large sections of English, although this could also be done through video.
  • Embedded (yes): if an avatar is an integral part of the environment of a game, cartoon or learning experience then enabling the avatar to sign as well as speak could be very attractive.

Having listened to these arguments, I tried to compare the situation with the growth of other assistive technologies, especially voice recognition and text-to-speech. In the early stages of those technologies the quality was poor but take-up was significant, whereas take-up of signing avatars has been limited. The difference appears to be that, however poor the voice technologies were, they provided a significant benefit to the user. Early screen readers gave people with vision impairments access to a mass of electronic text, even if it was tedious and painful to listen to. Signing avatars, if considered to be just an assistive technology, do not provide a similar benefit.

So, will we be seeing more signing avatars? I am sure the answer is yes. The research will continue and the quality will improve, making more scenarios worthwhile. Embedding signing avatars into situations with other avatars will be an area of continuing growth. Signing in Second Life could be an attractive option and, even more so, signing avatars in educational environments have potential benefits. One idea I had was a version of the in-flight safety video cartoons in which the characters sign.

Finally, I felt that the event showed the importance of really understanding the requirements of any particular set of users and ensuring that the products fit their desires.