AI: Baking ethics into the software

Content Copyright © 2021 Bloor. All Rights Reserved.
Also posted on: Bloor blogs


When I was paid to be an IT developer, back in the last century, Ethics was “someone else’s problem” and I just implemented the spec, whether it was about enabling customer exploitation or not. In fact, in my bank, the joke was “Ethics? Isn’t that the place north-east of London, where the money traders come from?”.

The past is a different place. These days, we worry about societal outcomes, as well as financial ones, for all stakeholders. An Agile developer shares responsibility with their user representatives for what the software does.

So, when I was writing an essay about the use of AI programming for creating artworks for my Masters degree, I felt impelled (my supervisor expected it) to add something about the ethics of using vast amounts of computer resources (and electrical power) to do something that could be done easily (and arguably better) by a person: “The good news is that AI can now give you a more believable image of a plate of spaghetti,” data artist and researcher Jer Thorp writes on Twitter. He jokingly estimates: “The bad news is that it used roughly enough energy to power Cleveland for the afternoon.” [see-the-shockingly-realistic-images-made-by-googles-new-ai]. The same source reports that “Information and communications technology is on track to create 3.5% of global emissions by 2020 – which is more than the aviation and shipping industries – and could hit 14% by 2040”. That raises ethical issues.

My Masters was in Visual Communication, but I note that there is now a University of Cambridge Masters in Studies in AI Ethics and Society addressing the “national and global need to adequately equip future leaders and decision-makers to address… the significant ethical and societal challenges…” associated with Artificial Intelligence (AI) technology.

Ethics, just an AI thing?

No, although the most obvious modern use case for IT ethics is indeed around AI. The new EU ethics guidelines for trustworthy AI (PDF download) identify seven key requirements:

  1. human agency and oversight;
  2. technical robustness and safety;
  3. privacy and data governance;
  4. transparency;
  5. diversity, non-discrimination and fairness;
  6. environmental and societal well-being; and
  7. accountability.

Obviously, however, these are pretty general and we think that they should be considered as “non-functional requirements” (horrid term) for any IT application these days, although I’ll mostly discuss them in the AI context here, in the interests of space.

What can we do about it?

Simply stated, defensible ethical behaviour should form part of the non-functional requirements for any IT project, from inception, and its scope should include all stakeholders, even customers and compliance authorities. Transparency should be the default (most people and organisations behave ethically when in the public gaze) unless there is a real reason for commercial confidentiality.

In practice, perhaps the best way forward is to implement industry-wide “codes of conduct”. Both organisations and stakeholders can sign up to ethical guidelines developed by organisations such as the EU and adapt general charters of corporate responsibility to their specific circumstances. You would expect to find Key Performance Indicators (“KPIs”) for ethical outcomes, internal codes of conduct, internal ethical policy documents and “ethics awareness training”, supporting trustworthy computing. The fundamental ethical position of an organisation (maintenance of desirable values such as fundamental rights, transparency and the avoidance of harm) should be formally communicated to new staff on joining.

And, talking of ethical issues, don’t overlook the reputational risk of being seen to behave unethically. For example, Elastic complains that Amazon is, in effect, stealing its customers and IP, and has taken steps to prevent it from selling Elastic’s software without collaborating with the company: “If you don’t stand up at one point to this level of behaviour, then it’s like a bully in the school-yard,” says Elastic’s CEO Shay Banon. “And this is our form of standing up to it.”

I note also that Zoho makes great play of its ethical approach to privacy. Does this give it a business advantage? Perhaps it does; Bloor Navigator Claire Agutter tells me that she is following @dhh on Twitter, and that he’s taking on Apple with a new email service that is anti-tracking. Very possibly, good and demonstrable ethics is good PR.

Help and Assistance

There is plenty of assistance with ethics out there. For instance, the IBM Institute for Business Value paper “Advancing AI ethics beyond compliance” (PDF download) says: “AI ethics research focuses on how to design and build AI systems that are aware of the values and principles to be followed in the deployment scenarios. It also involves identifying, studying, and proposing technical and non-technical solutions for ethics issues arising from the pervasive use of AI in life and society. Examples of such issues are data responsibility and privacy, fairness, inclusion, moral agency, value alignment, accountability, transparency, trust, and technology misuse”. AI has its own special problems, around biased training of models and lack of explainability (opacity as to how the results were arrived at is a major issue), but in what way does the list identified in this paper not apply more generally? Based on a survey of some 1,250 global C-level executives in late 2018, the paper also documents reasonably current attitudes.
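To make the bias point a little more concrete: one simple, widely used way to quantify bias in a model’s decisions is the “disparate impact” ratio – the selection rate for one group divided by the selection rate for another. The sketch below is purely illustrative and not from any of the papers cited here; the group names, toy data and the 0.8 threshold (the so-called “four-fifths rule”) are assumptions for the example.

```python
def selection_rate(decisions):
    """Fraction of positive (1) decisions in a list of 0/1 outcomes."""
    return sum(decisions) / len(decisions)


def disparate_impact(group_a, group_b):
    """Ratio of selection rates; values well below 1.0 suggest group_a is disfavoured."""
    return selection_rate(group_a) / selection_rate(group_b)


# Toy loan-approval decisions for two hypothetical applicant groups:
group_a = [1, 0, 0, 1, 0, 0, 0, 0, 0, 0]  # 20% approved
group_b = [1, 1, 0, 1, 1, 0, 1, 0, 1, 0]  # 60% approved

ratio = disparate_impact(group_a, group_b)
print(f"Disparate impact ratio: {ratio:.2f}")

# The "four-fifths rule" commonly flags ratios below 0.8 for review.
if ratio < 0.8:
    print("Potential adverse impact - flag for human review")
```

A check like this is trivially cheap to run as part of a test suite, which is one way such “non-functional requirements” can be baked into the software rather than left to policy documents.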

As one might expect, the EU is to the fore in ethical issues. There is a High-Level Expert Group on Artificial Intelligence, which is developing EU guidelines for AI ethics (PDF download).


Ethical behaviour for all stakeholders should be baked into software, when appropriate and possible, from the get-go. This implies that good ethics should be supported by policies, training and codes of conduct, accessible to all staff, and made part of the staff induction process. This is as true for AI as it is for all other areas of business activity – increasingly so as AI pervades our working lives.

This post is part of our Future of Work series. You can read the previous post or find them all in our Future of Work section. If you’d like to discuss how we can help get you prepared for the way work and business is changing, then please contact us.