What exactly does Ethical AI (Artificial Intelligence) mean? One point of view was presented by Microsoft CEO Satya Nadella during his keynote at the developer conference Microsoft Build 2018 on 7 May:
“We need to ask ourselves not only what computers can do, but what computers should do.”
Satya Nadella, 7 May 2018, Microsoft Build developer conference
The statement reflects Microsoft’s overall approach to ethical AI technology and its application, which has seen the launch of its AI social betterment programmes, such as AI for Earth and AI for Accessibility, in alignment with its research group FATE – Fairness, Accountability, Transparency, and Ethics in AI. Clearly the brains at Redmond have been considering the impact of AI for some time.
Computers do not say umm
Quite incidentally, the day after Nadella presented Microsoft’s Ethical AI approach, on 8 May the Google I/O 2018 conference took place in California, USA. Here Google Duplex was showcased to the audience of developers. In case you missed it, Duplex is an Artificial Intelligence feature within the Google Assistant that can make phone calls on its owner’s behalf. The attendees were treated to two pre-recorded conversations of the Duplex AI in action, one booking a hair appointment and the other a table at a restaurant. The audience was wowed by Duplex’s use of speech patterns that mimic human conversation: pausing in the right places, and even saying “umm.” The media at large was less wowed and more alarmed, with the main point of contention being that the people on the receiving end of the phone calls were at no point notified they were conversing with a computer. Computers do not say umm, and commentators argued this was deceptive.
The point of discussing Duplex here is not to criticise Google. From the developer viewpoint (this was the audience) Duplex is impressive AI mastery. However, the implications of AI in practice do need to be thoroughly considered before implementation. Do I, the human, have a right to know when I am speaking to a computer? What if I do not want to speak to a computer? Should AI ever be allowed to reach the point where I am unable to tell if I am communicating with a human? Is the technology company obligated to identify AI-generated calls?
Google clarified the day after the I/O conference that it does plan to have Duplex identify itself before a conversation begins, but the horse had already bolted, and the furore spotlighted the urgent need for an industry-wide conversation to establish clarity on ethical practices in AI development and implementation.
Data in the Emerging AI Technology Age
Data privacy is an ever-present topic. Just as the Cloud raised new questions when it was the emerging technology, AI will raise new data privacy questions as it evolves. Machine Learning relies on data, and lots of it. What are the implications for Joe Public’s data privacy here? The General Data Protection Regulation (GDPR) came into force on 25 May 2018 and does cover AI and Machine Learning in part. See this document from the UK’s Information Commissioner, Elizabeth Denham, for details.
Defining Ethical AI
On 16 April 2018 the House of Lords released a report from its Committee on AI which calls on the UK to remain a leader in the development of artificial intelligence in order to shape the ethics behind the technology. The report suggests placing ethics at the centre of AI development with a cross-sector AI Code to be established, which can be adopted both in the UK and internationally. The suggestions for this code are:
- Artificial intelligence should be developed for the common good and benefit of humanity.
- Artificial intelligence should operate on principles of intelligibility and fairness.
- Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.
Microsoft welcomed this report, with its UK CEO Cindy Rose saying, “…Microsoft is ensuring everyone can benefit from AI in a way that is safe and ethical, and that work can be seen every day in the products and services our customers use. We understand that to ensure AI continues to be used as a force for good, it is crucial that it’s developed according to strong ethical guidelines.”
Ethical by Design
The keyword is Trust. No matter how impressive, an emerging technology that is not trusted will struggle to gain adoption. Microsoft’s AI social betterment programmes perhaps pre-empt this concern, and they are a good demonstration of how AI can be a force for good. Equally, misuse is frightening to ponder – we have all seen those films (Skynet!). Whilst we can marvel at the technology, there remain big questions about ethical practices in AI development. Ethics needs to be incorporated by design, within the development process, and not left as an, umm, afterthought. Satya Nadella restated his thoughts on Microsoft’s approach at the recent Leading Transformation with AI event in London on 22 May, saying, “[AI] brings great opportunity, but also great responsibility…We’re at that stage with AI where the choices we make need to be grounded in principles and ethics – that’s the best way to ensure a future we all want.”
So yes, we in the development world do need to think about not only what AI can do, but also what it should do.
Ballard Chalmers is committed to Ethical AI and we are happy to have the conversation with you about your organisation’s adoption of AI.