CivTech Sprint Challenge: Explainable AI
How can technology be used to create scalable, repeatable approaches to the development of explainable and ethical Artificial Intelligence solutions for the Public Sector, starting with Police Scotland?
Scotland aspires to be a recognised leader in the development and adoption of Artificial Intelligence (AI) as a trusted, responsible and ethical tool to benefit people. For this to be achieved, particularly in the delivery of public services, it is crucial that the reasons for decisions made or supported by AI systems can be understood by humans, which we call "explainable AI": both by the operators of these systems and by the people affected by those decisions.
This is necessary because, without explainability:
It is difficult to guarantee that AI systems perform as intended (for instance, that a clinical image diagnostic AI system makes decisions based on clinically relevant features of the image, and not artefacts such as a watermark), robustly (for instance, some computer vision AI systems are vulnerable to so-called adversarial attacks by perturbations of the input data that are not immediately apparent to humans), and in a reproducible and auditable way (so that errors can be identified, corrected and performance improved over time).
Operators of AI systems might not understand the system’s limitations and be overly confident in its decisions, or conversely, not trust the system and disregard its decisions.
It is difficult to detect and address the consequences of potential biases in an AI algorithm’s training data, and ensure the decisions made or supported by the system do not breach non-discrimination rights enshrined in the Human Rights Act.
It is difficult to ensure people can exercise their rights associated with automated decision-making under the GDPR, including the transparency requirement (Article 13) that organisations must provide "meaningful information about the logic involved", and their right (Article 22) to "obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision".
It is difficult to build trust among the people affected by the decisions, which is a social prerequisite for the successful deployment of such systems at scale. The public consultation for Scotland's AI strategy underlined the importance of understanding the opportunities and limits of AI, and where AI is augmenting (as opposed to replacing) human interaction. (https://www.scotlandaistrategy.com/s/The-AI-Of-The-Possible-Developing-Scotlands-Artificial-Intelligence-AI-Strategy-Final-Consultation-R.pdf)
Explainability is a challenge because:
On a technical level, some of the most recent and successful AI algorithms (particularly deep learning neural networks) can resemble “black boxes” whose decisions are inherently hard to understand by humans. This is an active area of research.
On a technical level, there can be an unavoidable trade-off between explainability and some other measure of performance (for instance, accuracy, or false negative / false positive rates). Striking an optimal and appropriate balance is challenging and will depend on the use case (for instance, in clinical decision-making).
Providing explanations to external stakeholders has associated risks, such as revealing intellectual property about the AI algorithm, creating an opportunity to reverse-engineer the model, potentially uncovering the personal data used to train it (if applicable), or giving leads on how to "game" the algorithm.
AI systems do not operate in isolation, but are part of decision-making processes (typically augmenting rather than replacing humans), themselves associated with the operational processes implementing the decisions, communications within and outwith the organisation, and organisational governance and culture. Therefore, explainability cannot be achieved through technical solutions alone and needs to be integrated with and supported by other processes in the organisation.
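The "black box" difficulty described above can be made concrete with a model-agnostic, post-hoc technique such as permutation feature importance: shuffle one input feature at a time and measure how much the model's performance degrades. The following is a minimal, self-contained sketch using synthetic data and a toy stand-in model; all names and data are illustrative assumptions, not part of the challenge.

```python
# Illustrative sketch only: permutation feature importance, a model-agnostic
# explanation technique. The dataset and the "model" here are synthetic.
import random

random.seed(0)

# Toy dataset: feature 0 drives the label, feature 1 is pure noise.
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] > 0.5 else 0 for x in X]

def black_box(x):
    """Stand-in for an opaque model whose internals we cannot inspect."""
    return 1 if x[0] > 0.5 else 0

def accuracy(data, labels):
    return sum(black_box(x) == t for x, t in zip(data, labels)) / len(labels)

baseline = accuracy(X, y)

# Shuffle each feature in turn and record the drop in accuracy.
# A large drop means the model relies on that feature -- a simple,
# human-readable explanation signal that needs no access to model internals.
importances = []
for j in range(2):
    col = [x[j] for x in X]
    random.shuffle(col)
    X_perm = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
    importances.append(baseline - accuracy(X_perm, y))

print(importances)  # feature 0 should matter; feature 1 should not
```

Techniques of this kind produce an explanation of model behaviour rather than of its internal mechanics, which is one reason they remain an active research area rather than a settled solution.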
This challenge aims to tackle the broad strategic issue of how to develop ethical, explainable approaches to the use of AI in the Public Sector in a scalable, repeatable way. Given the potential scope and variability of such a challenge, the Scottish Government has partnered with Police Scotland to provide a specific initial "use case" for respondents to address. However, the aim of this challenge is not simply to develop a bespoke approach for that use case.
This challenge therefore has two levels:
The strategic challenge, set out above, of developing ethical, explainable AI for the public sector in a way that can be repeated and scaled across multiple uses.
The specific ethical, explainable AI "use case": application to Police Scotland's challenge requirements (set out in Appendix 1), which is focused on:
How can a technology solution assist in the processing and analysis of unstructured intelligence data sets to create operational efficiency, helping to free up resources to focus on frontline activities, whilst also providing transparency through explainability of its decision-making and allowing for an ethical data governance process?
What outcomes does the Challenge Sponsor want to achieve?
Outcome 1: An individual can understand how an AI decision affecting them has been arrived at, in close to real time. This requires that operators of the AI system can understand those decisions in the first place. Solutions should be designed to capture and report on their effectiveness. This outcome involves solving a mostly technical challenge for the specific use case, working with Police Scotland.
Outcome 2: The development of a scalable, repeatable method which establishes a benchmark in standards of "explainability". This outcome aims to ensure that the solution developed for the Police Scotland use case can be generalised to other public sector use cases as much as possible, to maximise potential impact and market size. It is therefore less focused on the technical aspect of the challenge, and more on broader design and process issues.
Outcome 3: Support responsible adoption of AI to improve effectiveness and efficiency. This outcome is about the added value of using AI both in terms of immediate operational impact (effectiveness and efficiency) and in support of the Scottish Government's strategic goal of AI as a trusted, responsible and ethical tool to benefit people.
Who are the end users of the solution likely to be?
- Citizens/Service users affected by decision making
- Organisational users using AI
- Developers of AI applications for Public Sector
- Those involved in ethical frameworks and Information governance and policy
- For specific users relating to the Police Scotland use case, please refer to Appendix 1: Police Scotland Ethical Explainable AI Use Case
The contract has an estimated value of £650,000 excluding VAT, and an estimated project duration of 24 months. See the Public Contracts Scotland site for further details.
Please note: you must apply for this Sprint Challenge via Public Contracts Scotland
Application deadline: 27 October 2020, 11:30am
Exploration Stage interviews: 4 November 2020
Exploration Stage: 16 – 27 November 2020
Development Stage interviews: 2 December 2020
Development Stage: 14 December 2020 – 5 March 2021
A live Q&A session will be held with the Challenge Sponsor team at 11:00am on Tuesday 13 October 2020, giving you an opportunity to clarify any aspect of the Challenge. You can register for the Q&A session here.