Algorithmic accountability is the concept that companies should be held responsible for the results of their programmed algorithms. The concept goes hand in hand with algorithmic transparency, which requires that companies be open about the purpose, structure and underlying actions of the algorithms they use to search for, process and deliver information.
As the product of humans, algorithms can have issues resulting from human bias or simple oversight. Algorithmic accountability is promoted as a way to help identify and correct such issues.
Human bias and oversight in algorithms can cause undesired and even dangerous problems in AI systems. Some errors in the development of AI have become common knowledge. Google's image-recognition software, for example, labeled some black people as gorillas, which created public relations problems for the company. As the result of a faulty algorithm, an Uber self-driving car ran a red light.
Beyond embarrassment or insult to people, issues with algorithms can be dangerous. Drivers are cautioned, for example, not to swerve to avoid small animals. A self-driving car relying on a similarly flawed recognition algorithm could fail to identify a small child as human and, applying that same rule, hit the child rather than swerve.
Practices for preventing bias and errors in algorithms include more extensive testing during development, particularly testing for bias. In the case of self-driving cars, that could mean testing whether recognition works equally well for minorities.
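One common form such bias testing takes is comparing a model's accuracy across demographic groups and flagging large gaps for review. The sketch below illustrates the idea with hypothetical data, group labels and tolerance; it is a minimal illustration, not a complete fairness audit.

```python
# Minimal sketch of a per-group bias check during model testing.
# The predictions, labels, group names and 0.1 tolerance are all
# hypothetical values chosen for illustration.

def group_accuracies(predictions, labels, groups):
    """Compute accuracy separately for each demographic group."""
    totals, correct = {}, {}
    for pred, label, group in zip(predictions, labels, groups):
        totals[group] = totals.get(group, 0) + 1
        if pred == label:
            correct[group] = correct.get(group, 0) + 1
    return {g: correct.get(g, 0) / totals[g] for g in totals}

def max_accuracy_gap(accuracies):
    """Largest accuracy difference between any two groups."""
    values = list(accuracies.values())
    return max(values) - min(values)

# Example: a classifier that performs much worse on group "b".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
labels = [1, 1, 0, 1, 1, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

acc = group_accuracies(preds, labels, groups)
# Flag the model for human review if the gap exceeds a chosen tolerance.
needs_review = max_accuracy_gap(acc) > 0.1
```

In this example the model is perfectly accurate on group "a" but mostly wrong on group "b", so the check flags it for review before deployment.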
In order for code to be audited, it must have at least qualified transparency, meaning that it is made available for third-party inspection. When algorithms are not open source and open to public inspection, such inspection may be performed by a regulatory body.
The main hurdle to algorithmic accountability is not technical but organizational: persuading companies to accept legal and ethical responsibility for the effects of their algorithms.