Technology

What is algorithmic transparency and why do we need it?

On February 11, 2013, Eric Loomis was arrested in Wisconsin for his part in a drive-by shooting. He pleaded guilty to knowingly fleeing an officer. 

During the sentencing, the judge consulted an algorithm that predicted how likely Loomis was to recidivate. Because of his criminal history and other factors weighed by the algorithm, Loomis was designated a “high risk” individual. Accordingly, he was handed a six-year prison sentence.

The use of algorithms to predict recidivism marks a significant shift in how the criminal justice system operates.

However, the judge did not fully understand how the assessment algorithm worked. In fact, Compas, developed by Equivant, is a proprietary algorithm. The company labels the inner workings of its program ‘trade secrets,’ shielding it from having to disclose them and protecting it against intellectual property theft. When Loomis appealed his sentence on the grounds that the “proprietary nature” of Compas made it impossible to assess its accuracy, the Supreme Court of Wisconsin denied the appeal. The Court maintained that a comprehensive explanation of the assessment was not required.

This is a troubling precedent. 

Loomis’ case highlights the use of algorithms in the criminal justice system, but algorithms have pervaded other aspects of everyday life as well, often in unobvious ways. Algorithms decide credit scores and insurance premiums, evaluate résumés, make trades on the market and even predict how likely patients are to develop cancer. In making these consequential decisions, they take agency away from humans. As such, transparency is needed to ensure algorithms are working as intended, free of the biases and discriminatory practices that burden human decision making. A culture of explaining how programs work would make it easier to detect such discrimination and would help guarantee the development of safe and equitable algorithms. Finally, transparency would increase awareness of just how pervasive algorithms have become in modern society.

So what has the government done to increase algorithmic transparency? 

Policymakers introduced the Algorithmic Accountability Act of 2019 in the House and Senate. The bill would require large companies to conduct audits of their algorithms. Specifically, companies would have to report any “impacts on accuracy, fairness, bias, discrimination, privacy, and security” found in the algorithms. The legislation would also put the FTC in charge of monitoring the reports and ensuring that companies remedy their algorithms when errors are found. If companies failed to comply, the Commission could impose penalties.
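The bill does not prescribe how such impacts should be measured, but a fairness audit often comes down to comparing a model’s behavior across groups of people. The sketch below is purely illustrative, not anything the legislation specifies: using made-up predictions and group labels, it computes one metric an auditor might look at, the gap in positive-prediction rates between two groups.

```python
# Illustrative only: a toy fairness check of the kind an algorithmic
# audit might include. The predictions and groups are hypothetical.
import numpy as np

def positive_rate_gap(predictions, groups):
    """Gap in the rate of positive predictions between two groups."""
    preds = np.asarray(predictions)
    grps = np.asarray(groups)
    # Rate of positive predictions within each group.
    rates = {g: float(preds[grps == g].mean()) for g in np.unique(grps)}
    assert len(rates) == 2, "this toy check compares exactly two groups"
    a, b = rates.values()
    return abs(a - b), rates

# Hypothetical model outputs (1 = flagged "high risk") and group membership.
predictions = [1, 0, 1, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap, rates = positive_rate_gap(predictions, groups)
print(rates)               # {'A': 0.75, 'B': 0.25}
print(f"gap = {gap:.2f}")  # a large gap would warrant further scrutiny
```

A real audit would look at many more measures (error rates by group, calibration, privacy safeguards), but even this simple comparison shows the kind of evidence a regulator could reasonably ask companies to produce.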

While the Accountability Act displays commendable intentions, it would do little to improve transparency. The required reports would not be public, so individuals would still be unaware of the systems that affect them. Moreover, the bill would not require companies to explain how their algorithms work when errors are found. In other words, the decision-making process, from inputs to outputs, would remain shrouded; algorithms would remain veritable “black boxes.” It is true that some secrecy is necessary to sustain innovation, but a middle ground could be struck for the sake of explainability. Perhaps companies could be required to provide high-level descriptions of their systems to a technical committee convened by the FTC. Such a committee would allow for accountability and oversight without inviting widespread intellectual property theft or damaging innovation and competition. The FTC itself is another concern: after the Facebook fine settlement, there are warranted worries that the Commission is too underfunded and toothless to meaningfully penalize large companies. Regulators must do a better job of balancing the needs of the private sector against the concerns of the general public.

Transparency is becoming increasingly important as algorithms spread like wildfire. The need has become even more pressing with the recent popularity of ‘deep neural networks’: quasi-inscrutable systems that manipulate thousands of variables in unintuitive, non-linear ways. Although neural networks are the building blocks of useful day-to-day systems like Apple’s Siri and Amazon’s Alexa, they are also the algorithms behind doctored videos, or ‘deepfakes,’ which have been used to harass and endanger women by depicting them in pornographic videos without their consent. Increased transparency would help ensure that any neural networks employed in public sector decision making are well understood, safe and non-discriminatory.
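To see why such systems resist simple explanation, consider the toy network below. It is a minimal sketch with arbitrary weights, far smaller than anything deployed in practice, and stands in for no particular product. Even with only two layers, the output depends on every weight through nested non-linear functions, so no single number inside the model corresponds to a human-readable reason for the decision.

```python
# A toy two-layer neural network with arbitrary weights, purely to
# illustrate why the mapping from inputs to output is hard to explain.
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 3))   # first layer: 3 inputs -> 8 hidden units
W2 = rng.normal(size=(1, 8))   # second layer: 8 hidden units -> 1 output

def predict(x):
    hidden = np.tanh(W1 @ x)    # non-linear transform of all inputs at once
    return float(W2 @ hidden)   # output mixes every hidden unit again

x = np.array([0.2, -1.0, 0.5])  # a made-up input with three features
print(predict(x))

# Nudging a single input shifts the output through all 32 weights at once,
# so there is no one weight you can point to as "the reason" for the score.
x_nudged = x.copy()
x_nudged[0] += 0.1
print(predict(x_nudged))
```

Scale this up to the millions of weights in a production system and the difficulty of explaining any individual decision becomes clear; that difficulty is exactly what transparency requirements would have to grapple with.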

The first step toward enforcing transparency standards is to raise awareness of the issue among policymakers and educate them about the varieties of algorithmic regulation. Beyond surrounding themselves with technology experts, policymakers should study historical precedents for governing emerging technologies, such as the institution of pre-market approval processes. They should also hold more hearings like the one the House Intelligence Committee held last month on deepfakes, giving legal and technical experts a forum in which to share their expertise.

A future where algorithms are ubiquitous needs to be a safe one. It needs to guarantee that all members of our society, from Eric Loomis to cancer patients diagnosed by algorithms, have clarity about the decisions made about them. That starts with greater transparency in how algorithms are used.