
TL;DR

Technologist Clare Dillon presents 'A Journey to Trustworthy AI', a study of AI ethics through an economic, social, political, and cultural lens.

Last week's guest contributor on #AIFightsBack was technologist Clare Dillon. She presented 'A Journey to Trustworthy AI' through an economic, social, political, and cultural lens. This thought-provoking session clearly demonstrated why we all need to build trust, both in building and in buying AI solutions, and it was backed up nicely with plenty of use cases and several major AI bloopers.

With AI predicted to add a staggering $13 trillion to global economic output by 2030, concerns around unethical AI are very real. Key failings involve the use of facial recognition technology by the military, built-in AI deception in kids' toys, automated decision-making systems that track employee productivity, and the shocking impact of AI on the climate. Clare talked about the ART of AI:

  1. Accountability
  2. Responsibility
  3. Transparency

and stripped these points back to the fundamentals of the Ethics Continuum: legal versus ethical. What counts as legal and compliant with respect to AI is changing rapidly, which presents a complex and risky situation for business.

Video

Slides

A Journey Towards Trustworthy AI (#AIFightsBack webinar) from Ruth Kearney

She also offered practical tips on how businesses can consider the ethical impact of AI, from planning through to implementation:

  1. State where you are on the Ethical Continuum
  2. Connect AI implementation to a valid business case
  3. Include all relevant stakeholders
  4. Determine the need for Open or Explainable AI (XAI)
  5. Hire a diverse team
  6. Educate
  7. Build a risk mitigation plan
  8. Track datasets
  9. Test
  10. Keep testing
  11. Monitor Usage Scenarios
  12. Be transparent

We should care about Trustworthy AI

It's not a question of why. The bottom line is that if people don't trust AI solutions, they simply won't use them, and we will not be able to advance AI for the greater good. The examples of race, gender, and age bias shared during the talk demonstrate explicitly why AI ethics matters and why we all have a part to play.

Clare included several important reference points advancing the area of AI ethics, including the EU Commission Guidelines on AI, the AI Institute in New York, The Moral Machine at the MIT Media Lab, and The Ethics Canvas developed by the SFI-funded ADAPT Centre.

References

AI Institute in New York. Read more

MIT Media Lab, The Moral Machine. Read more

The Ethics Canvas is adapted from Alex Osterwalder's Business Model Canvas. Read more

Ethics Canvas, ADAPT Centre, Trinity College Dublin & Dublin City University. Read more

EU Commission, The Ethics Guidelines for Trustworthy Artificial Intelligence (AI). Read more

#AIFightsBack AI Readiness and Back to Work Readiness

Our next three sessions help businesses think about some of the larger concerns around using AI in the company. From an overall AI readiness perspective, Ashwini Mathur (Novartis) talks about building trusted AI products and embedding this capability throughout the organisation. Then Matt Macdonald-Wallace (Mockingbird Consulting) and Dr. Iain Keaney (Skellig.ai) look at the use of IoT and privacy-respecting data science to help businesses operate in the post-COVID-19-lockdown world.