AI drives critical need for a digital ethics framework


With digital ethics, “I know it when I see it” isn’t good enough

“I shall not today attempt further to define the kinds of material … within that shorthand description … but I know it when I see it.” U.S. Supreme Court Justice Potter Stewart famously offered this rationale for the struggle to define something that resists definition, like taste, art, beauty or, in this case, obscenity.

That perspective could easily be applied to digital ethics today. Gartner defines digital ethics as the systems of values and moral principles for the conduct of electronic interactions. But what does this really mean? Everyone agrees we need to decide what is ethical and what is not, yet most executives and organizations seem to operate on an “I know it when I see it” basis.

Taking the lead on defining digital ethics

Technological advances like cloud, big data and artificial intelligence (AI) are creating new market opportunities and will help solve big societal problems. But the pace and breadth of adoption continue to accelerate, introducing challenges that will affect wide swaths of society. Waiting for local, national or international legislation to pass is not an option; it simply takes too long. At Avanade, we believe that responsibility for defining the ethics around building, using and applying technology lies with the organizations that are driving it. Those who lead the charge must play an active role in developing both informal and formal regulations.

Microsoft is one of those leading companies putting a stake in the ground, and recently described the societal impact of artificial intelligence in a ground-breaking book called “The Future Computed: Artificial intelligence and its role in society.” The book examines the use cases and potential dangers of AI technology and gives guidance on how to avoid or mitigate them. Digital ethics was also front and center at the recent World Economic Forum in Davos, Switzerland, where several sessions were devoted to AI and how to use it responsibly.

AI brings new urgency to digital ethics

There’s good reason the topic of digital ethics is gaining visibility. As technologies like AI grow, so too does their potential impact. For example, imagine there is a medically unfounded correlation between two data groups, such as gender and fatality rate among individuals who have a certain symptom. An AI-based tool used by a hospital to determine the urgency of surgery based on this data might connect the two groups and wrongly recommend surgical intervention. Medical expertise and common sense are required to catch errors like this.
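The hospital example can be sketched in a few lines of Python. All data here is invented for illustration: the sample happens to show a gender imbalance in fatalities, and a naive decision rule treats that accidental correlation as predictive.

```python
# Toy illustration with made-up data: a spurious correlation between a
# demographic attribute and an outcome can drive a naive decision rule.
# Suppose our sample, purely by accident of how it was collected, shows
# women with a symptom dying more often than men with the same symptom.
records = ([("F", True)] * 8 + [("F", False)] * 2 +
           [("M", True)] * 2 + [("M", False)] * 8)

def fatality_rate(gender):
    outcomes = [died for g, died in records if g == gender]
    return sum(outcomes) / len(outcomes)

def naive_urgency(gender):
    # The naive model treats the accidental correlation as predictive
    # and recommends surgery on the basis of gender alone.
    return "surgery" if fatality_rate(gender) > 0.5 else "monitor"

print(naive_urgency("F"))  # → surgery
print(naive_urgency("M"))  # → monitor
```

Nothing in the data says gender causes the outcome, yet the rule recommends different treatment by gender, which is exactly why a human expert must review such recommendations.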

This may seem like a dramatic example, but it underscores the need for human involvement, especially in situations where ethical dilemmas could arise. We are starting to see the impact of digital ethics play out each day, from those that affect the individual (Does an autonomous vehicle save a pedestrian or a passenger?) to those that impact the broader community (How do we maintain control over machines that become increasingly intelligent and powerful?). We all agree that decisions affecting human lives require human involvement. We need to look beyond pure logic and bring in compassion, a value for life and dignity, and simple common sense.

But that’s easier said than done. Avanade research shows that 89 percent of executives have encountered an ethical dilemma at work, but 87 percent admit they are not prepared to address ethical concerns caused by the increased use of smart technologies and digital automation. People are hungry for guidance that is thoughtful, effective and ready to be put into practice.

Four pillars of a digital ethics framework

Digital ethics is one of the most important building blocks for the success of a range of technologies. People will not trust systems or companies they do not believe are ethical, nor will they do business with them or work for them. Having a strong sense of ethics, especially in the digital space, will quickly become necessary for a digital organization’s short- and long-term sustainability.

At Avanade, we are already defining parameters around digital ethics and believe we can do much better than a “we know it when we see it” approach. We established four pillars to help determine if a product or a service is digitally ethical:

  • Fairness and inclusiveness.

The product or service must not employ or lead to unlawful or unethical discrimination. It must comply with data protection and security principles while also helping to create a better information society – one that is open, pluralistic, tolerant, equitable and just.

  • Human accountability.

Humans need to be ultimately accountable for the deployment and outcomes of a computer-generated diagnosis or decision. Only humans can apply compassion, empathy and common sense to ethical dilemmas.

  • Trustworthiness.

AI systems must be reliable, capable, safe, transparent and truthful in their digital practices. This includes making it clear to users whether they are dealing with a computer or a human, and ensuring design starts with reliable and trustworthy data.

  • Adaptability.

As our understanding of digital ethics evolves, so should the tools we create. Flexibility will allow for this evolution and enable us to incorporate feedback and learn from it.

Only by meeting these criteria can we be confident that an AI system is digitally ethical.

“Ethics is knowing the difference between what you have a right to do and what is right to do,” Justice Stewart also once said. Our hope is that this framework can help move the discussion from the right to do something to agreeing upon the right thing to do.



Exploring the human benefits of artificial intelligence


Did you know the telephone was the first virtual reality technology? It took us five decades to adopt it as a commodity of everyday life. It took a little less time to adopt radio and television. But we embraced the PC and the mobile phone in less than ten years each.

We speed up the adoption of new technologies because each one is built with the capabilities of the last, which reduces how long it takes for us to accept the next “big thing.”

Today, we interact with machines in ways that were once the exclusive territory of humans. Chatbots, or virtual assistants, are the first evidence of machines interacting with us in human-like ways. Microsoft’s Cortana helps you with any search topic from your Windows 10 machine. Alexa, Amazon’s virtual assistant, receives verbal commands. The chat window at the bottom of your bank’s web site prompts you to speak to a specialist.
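At the core of such assistants is intent matching, which can be sketched as a minimal rule-based bot. The intents and replies below are hypothetical; production assistants like Cortana and Alexa layer speech recognition and machine-learned language understanding on top of this kind of matching.

```python
# Minimal rule-based chatbot sketch: match a known intent phrase in the
# user's message and return a canned reply; otherwise ask them to rephrase.
INTENTS = {
    "pay my bill": "I can help with that. What is your account number?",
    "speak to a specialist": "Connecting you to a specialist now.",
}

def reply(message):
    text = message.lower()
    for phrase, answer in INTENTS.items():
        if phrase in text:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

print(reply("I'd like to pay my bill"))
# → I can help with that. What is your account number?
```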

So, how will AI impact our workforce?

Instead of assuming we’re all automatically out of jobs, let’s consider the value of being human. Humans are creative, imaginative, and often unpredictable. We base our decisions on facts or on gut instinct. We can adapt to change and are guided by morals and principles. A computer “thinks” in patterns that are logical and structured. Humans program AI to make the smartest and most consistent decisions, but when AI meets an unexpected condition, it will not always do the right thing. So human and machine must work together.

Imagine this: you’re a new customer service representative for a utility company and you accept calls directly from customers. Your company has a low rating for its customer service and is losing customers. Many of your colleagues are quitting due to boredom. The utility company implements a new system, and instead of prompting customers to “press 1” and wait for more automated prompts, your pleasant voice greets them on the first ring with “how can I help?”

One customer states she wants to pay her bill, and you oblige and conclude the call. Another customer calls at the same time for a different reason, and you handle that request with ease and accuracy. Yes, it may be your voice, but it is not you. Your voice and personality answer hundreds of simultaneous calls – and only when your alter ego trips up and can’t complete a task will a call be forwarded to you. The customer isn’t startled by the change in voice or tone, and you resolve the problem because you see the details of his call already on your device.
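The hand-off described here is a standard human-in-the-loop pattern, sketched below with hypothetical request and function names: the virtual agent handles routine requests, and when it can’t complete a task, the call is escalated together with its context so the human representative picks up seamlessly.

```python
# Human-in-the-loop escalation sketch. The virtual agent handles what it
# can; anything else is forwarded to a human along with the call details,
# so the customer never has to repeat themselves.
ROUTINE = {
    "pay bill": "payment processed",
    "meter reading": "reading recorded",
}

def virtual_agent(request):
    return ROUTINE.get(request)  # None means the agent tripped up

def handle_call(request):
    outcome = virtual_agent(request)
    if outcome is not None:
        return ("virtual agent", outcome)
    # Escalate with context: the human sees the details on their device.
    return ("human", f"escalated with context: {request!r}")

print(handle_call("pay bill"))        # handled by the virtual agent
print(handle_call("dispute charge"))  # forwarded to a human with context
```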

“Being human” becomes a premium asset

In our example, the customer feels important to the utility company because a human is helping him. Yes, the utility company could have spawned identical instances of the same voice and personality to answer every call, but that approach lowers customer satisfaction because the voice sounds recorded, especially to customers who call and hear it more than once.

The utility company reduces its customer service staff, yet displaced workers are not dismissed. Their value lies in their innovation and creativity, as well as their knowledge of the business and its systems. They now focus on managing the utility network from a control centre, with new responsibilities to use analytics and machine learning to expand and maintain the network.

AI can help increase job satisfaction

You hired out your voice to the utility company to answer calls and provide the “human touch” to their brand. Now you can take your daughter to school or watch your son’s dance recital while “you” are working. You are paid through new usage terms and conditions that give you flexibility and increase work/life balance. You are no longer tied to eight-hour shifts, five days a week.

The customer service scenario may sound preposterous, but the technologies already exist.

  • Montreal-based start-up Lyrebird created a system to mimic a person’s voice.
  • Google’s DeepMind research company developed a deep neural network that synthesizes realistic human speech.
  • Microsoft is making it easy to integrate artificial understanding into nearly any software application with Cognitive Services.
  • Large consultancies, like Avanade, are stitching emerging technologies together and helping customers use them to increase revenue or drive down cost.


The customer service scenario may prompt concerns over legalities, security, privacy, employment, compensation, discrimination, and identity theft. But it also illustrates an organization’s responsibility to provide new opportunities for displaced workers.

Despite the challenges, organizations are digitally transforming to compete for customers and AI promises a way forward. With investment in AI expected to grow from $640m in 2016 to $37bn by 2025, the scenario I describe is coming – and humans will likely accept the change as business as usual.


How will artificial intelligence change the way you lead?


There have been countless books written about the tenets and principles of effective leadership. The lives of everyone from Gandhi to a coyote have been mined for insights into how to manage people for success. But what of the new world, where leaders will be required to manage both people and machines to thrive?

There’s no doubt that the breadth and depth of artificial intelligence, machine learning and robotic-process automation capabilities are growing fast. And, while there is some talk about the possibility of working for a robo-boss in the future, the reality is more likely to be a significant shift in the skills that leaders will require to succeed in this new digital workplace.

So, what does that mean?

A recent Avanade survey showed that 85 percent of executives agree that company leadership needs to be able to manage both humans and machines if they plan to successfully integrate artificial intelligence into their organizations. Indeed, more than half of the C-level executives surveyed believe that an understanding of new and emerging technologies will be more important for leaders than a deep specialization in strategy, sales and marketing. Accenture affirms the need for a balance of skills as it identifies three elements to an executive’s AIQ—technology, data and people.

Just like today, leaders will need a balance of intellectual (IQ) and emotional (EQ) intelligence to manage in the AI-infused workplace. On the IQ side, leaders will need a vision for the AI-first world in their organizations and a sense of where AI can free employees to spend more time on complex tasks and enhance productivity. But, even more important, EQ and people-centric skills will be critical to evangelize the positive impacts, keep people engaged, address anxiety around the changing workforce, and help people reskill to focus on new ways of working and thinking.

In fact, with advanced analytics producing insights far greater and faster than the human brain is capable of, the “softer” management skills will be more important than deep subject expertise or raw intelligence. Topics like digital ethics and trust will come to the forefront. According to Harvard Business Review, “Certain qualities, such as deep domain expertise, decisiveness, authority, and short-term task focus, are losing their cachet, while others, such as humility, adaptability, vision, and constant engagement, are likely to play a key role in more-agile types of leadership.”

Of course, this is not the first time that leaders have taken on new skills in response to technology advances. Executives of a certain age will recall when dictation machines and typing pools were replaced by personal computers and the gap between those who learned to type in school and those who didn’t became quickly apparent. We adjusted, and we adjusted again when typewritten memos gave way to emails, then blogs, then tweets.

What’s next?

Executives in all industries need to be open to expanding and pivoting their skillset along with the rest of their workforces. Understanding the capabilities of AI is important, but so is attending to the needs of people who are affected by changes to the workforce. Along with the wisdom of [insert-your-favorite-business-guru-here], learn about the technology that is changing the game within your organizations and on the larger competitive landscape.

With a solid understanding of the capabilities of humans and machines, leaders will be prepared to draw upon the strengths of each to grow and sustain a digital workplace.