AI & Trust

Isha Hans
2 min read · Feb 10, 2020

Can trust in AI and algorithms be redeemed?

In ‘The Human Body is a Black Box’, the authors highlight a distinctive approach to the fairness, transparency and accountability aspects of developing a machine learning model for healthcare. Using Sepsis Watch as a case study, they draw out three important aspects of this process:

  1. The starting point & framing the problem: Framing the right problem is imperative to identifying the opportunity for improvement rather than the novelty of a technological solution (Sendak et al., 2020). By extension, rigorously refining the problem in context is essential and makes the process more effective to iterate on.
  2. Building relationships with stakeholders: As the authors state, direct stakeholders often value different forms of evidence and modes of communication. Sharing insights with them early on and giving them a voice in the process helps build a clear line of accountability and ensures effective utilization of the tool.
  3. Active feedback loops between the developers of the tool and the direct stakeholders: A continuing relationship that facilitates sustained development and iteration would not only ensure the achievement of desired outcomes, but also help build mutually reinforcing trust and accountability (a minimal sketch of one such feedback mechanism follows this list).
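
To make the third point concrete, here is a minimal, hypothetical sketch of what such a feedback loop could look like in code. Nothing here comes from the Sepsis Watch deployment itself: the class, the calibration metric and the threshold are illustrative assumptions, and a real clinical system would involve far richer workflows.

```python
from dataclasses import dataclass, field
from statistics import mean

# Hypothetical sketch of a post-deployment feedback loop: none of these
# names come from the Sepsis Watch paper; they only illustrate routing
# live outcomes back to the development team for joint review.

@dataclass
class FeedbackLog:
    records: list = field(default_factory=list)  # (predicted_risk, actual_outcome)

    def record(self, predicted_risk: float, actual_outcome: int) -> None:
        """Clinicians (the direct stakeholders) report what actually happened."""
        self.records.append((predicted_risk, actual_outcome))

    def calibration_gap(self) -> float:
        """Gap between mean predicted risk and the observed event rate.

        A persistent gap is a signal for developers to revisit the model
        with stakeholders rather than silently retrain it.
        """
        preds = [p for p, _ in self.records]
        outcomes = [o for _, o in self.records]
        return abs(mean(preds) - mean(outcomes))


log = FeedbackLog()
log.record(0.8, 1)  # high predicted sepsis risk, patient did develop sepsis
log.record(0.7, 0)  # high predicted risk, no sepsis: feeds the next iteration
if log.calibration_gap() > 0.2:  # threshold chosen purely for illustration
    print("flag for joint review with clinicians")
```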

Raising the discussion of the ‘situated use’ of machine learning technologies in healthcare, the paper proposes integrating social and technical aspects to help navigate the complex ethical dimensions of ML. This makes me question what frameworks should be established for the development of AI at the policy level. The EU’s GDPR incorporates a clause referred to as a “right to explanation” when algorithmic decision-making occurs. “That is, individuals have a right to request information explaining the algorithmic logic used to render a decision when a system uses their personal data” (Millar et al., Promoting Greater Societal Trust, 2018, p. 9); a hedged code sketch of what such an explanation might look like follows the list below. My signal for this week is ‘A broader view of AI accountability under the GDPR’. When it comes to AI, accountability and trust could potentially live at two levels:

  1. the accountability that can be placed on the AI itself, including the process behind its development, similar to what the authors of ‘The Human Body is a Black Box’ describe;
  2. who is vetting not just one ML/AI model, but a large number of them. If there were an organization laying down specific guidelines, what would it have to do to generate trust, credibility and accountability in a field that is itself rapidly evolving? (I say organization and not government because I believe this body should be autonomous and free of government interference, to ensure checks and balances.)
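
As a thought experiment on what honoring a “right to explanation” could mean technically, here is a minimal, hypothetical sketch. The lending scenario, feature names and weights are all invented; for a transparent linear model, per-feature contributions are one plausible form the requested explanation could take, though GDPR itself does not prescribe any particular format.

```python
import math

# Hypothetical sketch of a "right to explanation" response for a single
# decision. The model, weights and feature names are invented for
# illustration only.

WEIGHTS = {"missed_payments": 1.2, "income_bracket": -0.8, "account_age_years": -0.3}
BIAS = 0.5

def decide_and_explain(applicant: dict) -> dict:
    """Return the decision plus each feature's contribution to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # simple logistic model
    return {
        "decision": "deny" if probability > 0.5 else "approve",
        "probability_of_denial": round(probability, 2),
        # The per-feature terms are the "algorithmic logic" an individual
        # could request under a right-to-explanation clause.
        "contributions": contributions,
    }

print(decide_and_explain({"missed_payments": 3, "income_bracket": 2, "account_age_years": 4}))
```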

--

Isha Hans

Research-driven Designer, Thinker and Strategist with Entrepreneurship skills — https://www.ishahans.com/