The helpful chatbot - And the ethics of your personal digital assistant

Content Copyright © 2022 Bloor. All Rights Reserved.
Also posted on: Bloor blogs


I recently went to an IBM presentation on “transforming talent and the employee experience” as we exit the COVID pandemic, together with a “Hype of Analysts” (that’s a new collective noun). It raised some interesting issues, relevant even if the pandemic hasn’t actually gone away yet. I guess that part of the new normal for the mutable business is designing systems to be resilient and user-friendly, even if this pandemic, or another one, flares up again.

IBM was talking about “transforming talent and the employee experience” as something that may get overlooked as you buy technology for your Cloud transformation. This transformation needs new culture (flatter organisations, transparent leadership – and good ethics); new or updated skills (IBM sees these as the primary currency of the business); new tech (IBM sees “chatbots” becoming AI-enabled and evolving into Intelligent Digital Assistants); and new data-driven and predictive insights into talent management.

“Ethical AI” is going to be an important driver. IBM, for instance, has an AI Ethics board, which oversees IBM’s internal use of AI as well as how AI is used in its commercial products and services. This Board has teeth; it helped to stop IBM’s development and sales of facial recognition software, for example.

In part, this rests on trust, which (I think) must be two-way. Many employees probably feel pressured into saying that they trust their employers long before they actually do. Trust, I think, is engendered by transparency, by open (and safe) conversations about what will be done to employees (giving them real control over their future) – and by management following the same ethical rules as it expects employees to follow.

Since May 2020, IBM has expected its employees to make a “Work From Home Pledge”, which is all about maintaining a good work-life balance and an ethical working environment:

IBM Work from Home Pledge

  • I pledge to be Family Sensitive.
  • I pledge to support “Not Camera Ready” times.
  • I pledge to Be Kind.
  • I pledge to Set Boundaries and to Prevent Video Fatigue.
  • I pledge to Take Care of Myself.
  • I pledge to Frequently Check In on people.
  • I pledge to Be Connected.

I like this – I think it could be copied by other organisations. But it is only a part of building an ethically-based and trustworthy workforce in a (virtual?) workplace. The pledge should probably come after you’ve built an ethically-based workplace, to cement its culture in place. The pledge isn’t a cheap substitute for building good culture.

It also implies a matching pledge from management:

  • I pledge not to make demands on my staff inconsistent with them being Family Sensitive;
  • I pledge to accept that there are times when my staff won’t want to be On Camera;

etc.

Management culture must change to accommodate the new flexible working practices. For example, perhaps “management by results” has to become a reality instead of (as it often is) just wishful thinking. Management perhaps has to accept that someone who has one idea on Monday that makes millions for their company and then spends the rest of the week on the golf course might be as valuable to the company as someone working 9-5 and producing only mundane outcomes.

Also, the pledge is stated in terms of “I pledge”, but it is not just down to the person making it – they’ll need support from the whole organisation, together with metrics and help to resolve issues (perhaps the workload they are given is inconsistent with some of the pledges). One of the analysts attending commented that “pledges are fine, but what happens in reality is the key”. Ultimately, as I said, mutual trust – based on action – is fundamental to effective culture change and to the adoption of flexible working practices.

This is especially true if feedback on sentiment and outcome metrics is used to drive people’s management of their own work environment. Privacy and an appeals procedure are going to be important – and counting how many hours employees spend stuck to their keyboard (perhaps publishing a list of those least present on the company noticeboard) would NOT be a good way to manage culture and the employee experience. In part, that is because all the really employable (i.e. effective) talent will soon leave and go to work somewhere less intrusive.

IBM talked about using AI (augmented intelligence) chatbots to automate the mundane and make boring, routine user-interfacing processes faster and less burdensome. This will bring its own issues, of course (some of which we’ll explore below), although it is doubtless the way forward.

AI chatbots could be enablers for Skills and Talent Management, making sure that team leaders are aware of the skills available in the organisation, that requisite skills are made available to new work efforts, that staff are encouraged to renew their skills and acquire new ones and so on. IBM believes that talent decisions will become data-driven and predictive. This is easy to say, of course, and upskilling is a good way to retain valued staff, but one analyst pointed out that reliably predicting what skills will be needed in future scenarios isn’t trivial.

There will, I think, need to be a flattening of management structures and a much closer relationship between strategic planning, human resources (better called “talent management”, I think) and operations than is common today. Given sufficient information, AI could help predict future talent needs and opportunities, and customise processes for skill acquisition. The fundamental requirement is trust on all sides – especially if sentiment analysis and the like are in use. An analyst highlighted potential privacy issues – and noted that privacy law differs between countries. This implies that AI must be able to explain its decisions in natural language, especially if they impact people management. It will also be important that it recognises the soft (human) skills necessary for managing culture and morale – and that it is applied to leaders as well as the rank and file.

Note also that upskilling and retraining will take time. Should this come out of employees’ home time (remember the “Family Sensitive” pledge) or out of company time? A bit of both, I’d think, but the company should expect employees to be able to complete recommended upskilling, at least, on company time. IBM representatives put it this way: “It’s a mixture. Mostly from within paid time, but there are always occasions when a topic is interesting but not directly germane to your current job and you carry on into personal time. IBM recommends 40 hours a year to spend on skill development”. If a company puts the effort into gamification to make its learning culture fun, employees are more likely to develop their skills in their own time, but it won’t want them to feel exploited.

Several analysts pointed out that choosing the right metrics for self-learning and education matters – it’s not simply a matter of consumption. Retention and application of knowledge matter, and if there is no opportunity to apply new knowledge, it will likely be forgotten. IBM representatives agreed: “that is also built into Your Learning; there is a mixture of sessions and classes, from self-learning to then sharing this with others or bringing in live examples from client work – helping link theory to practice”. Sadly, however, training and skills acquisition are sometimes treated as a “tick box” exercise, a list of courses taken, with no assessment of retention or the ability to apply the learning – something to be aware of and avoid.

We also discussed the issues around AI bias. If an AI learns what is considered acceptable behaviour by analysing an already dysfunctional organisation, it will probably encourage more dysfunctional behaviour. AI-driven systems have learned to prefer employing men, to discourage diversity, to only employ young people, to be racist – and to use the language of a social media rant. There are ways to avoid such issues, of course, but they’ll only work if the issues are recognised and resources put into addressing them. For a start, analyse AI behaviours – do its decisions match the characteristics of the general population, for example? And there may be other clues – do all your chatbots have female voices, for example, and are they programmed to sound like willing slaves? Could this bias interactions with them – and does it reflect the acceptance of unhealthy stereotypes by the organisation?
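To make the “analyse AI behaviours” suggestion concrete, here is a minimal sketch (hypothetical code, not any vendor’s tooling) of one common check: comparing an AI system’s positive-decision rates across demographic groups, with the widely used “80% rule” as a rough flag for disparate impact:

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, selected) pairs, selected being True/False.
    Returns the positive-decision rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for group, selected in decisions:
        counts[group][0] += int(selected)
        counts[group][1] += 1
    return {g: sel / total for g, (sel, total) in counts.items()}

def disparate_impact(decisions):
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 (commonly below the 0.8 '80% rule' threshold)
    suggest the system may be favouring one group over another."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical outcomes from a chatbot-driven screening process
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 30 + [("B", False)] * 70)

print(selection_rates(outcomes))   # group A selected at 0.6, group B at 0.3
print(disparate_impact(outcomes))  # 0.5 -- below 0.8, worth investigating
```

A check like this is only a starting point – it says nothing about *why* the rates differ – but it is cheap to run continuously and will surface exactly the sort of learned bias described above.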

This blog started with “transforming talent and the employee experience”. The sort of intelligent AI-enabled Intelligent Digital Assistant IBM is developing and making available in its offerings can do this. But transformation implies an objective, or it becomes change for change’s sake. So, what is the objective here – more profit? Bigger shareholder dividends? More skilled employees? I’d suggest that objectives such as employee happiness, healthy diversity and long-term employee health (physical or mental) are also likely to engender the other, more concrete, benefits on the side.
