What does it mean for an algorithm to be fair?

In 2014 the White House commissioned a 90-day study that culminated in a report (pdf) on the state of “big data” and related technologies. The authors give many recommendations, including this central warning.

Warning: algorithms can facilitate illegal discrimination!

Here’s a not-so-imaginary example of the problem. A bank wants people to take out loans with high interest rates, and it serves ads for these loans. A modern idea is to use an algorithm to decide, based on the sliver of known information about a user visiting a website, which advertisement to present so as to maximize the chance of the user clicking on it. There’s one problem: these algorithms are trained on historical data, and poor, uneducated people (often racial minorities) have a historical trend of being more likely to succumb to predatory loan advertisements than the general population. So an algorithm that is “just” trying to maximize clickthrough may also be targeting black people, de facto denying them opportunities for fair loans. Such behavior is illegal.
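To make the mechanism concrete, here is a minimal sketch of the click-maximizing decision rule described above. Everything in it (the ad inventory, the feature names, the toy scoring function) is hypothetical; a real system would use a model trained on historical click logs, but the point is the same: the objective only sees predicted clicks, not who is being steered toward which product.

```python
import math

# Toy stand-in for a model trained on historical click data (hypothetical weights).
def predict_click_probability(user_features, ad):
    score = sum(w * user_features.get(f, 0.0) for f, w in ad["weights"].items())
    return 1 / (1 + math.exp(-score))  # squash the score into a probability

# Serve whichever ad the model thinks the user is most likely to click.
def choose_ad(user_features, ads):
    return max(ads, key=lambda ad: predict_click_probability(user_features, ad))

ads = [
    {"name": "high-interest loan", "weights": {"low_income_zip": 2.0, "no_degree": 1.5}},
    {"name": "index fund",         "weights": {"high_income_zip": 1.8}},
]

# The "sliver of known information" about the visitor.
user = {"low_income_zip": 1.0, "no_degree": 1.0}
print(choose_ad(user, ads)["name"])  # prints "high-interest loan"
```

Nothing in this objective mentions race, yet whoever the historical data says is most likely to click the predatory ad will keep seeing it.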

On the other hand, even if algorithms are not making illegal decisions, by training algorithms on data produced by humans, we naturally reinforce prejudices of the majority. This can have negative effects, like Google’s autocomplete finishing “Are transgenders” with “going to hell?” Even if this is the most common question being asked on Google, and even if the majority think it’s morally acceptable to display this to users, this shows that algorithms do in fact encode our prejudices. People are slowly coming to realize this, to the point where it was recently covered in the New York Times.

There are many facets to the algorithmic fairness problem, and it is one that has not even been widely acknowledged as a problem, despite the Times article. The message has been echoed by machine learning researchers but mostly ignored by practitioners. In particular, “experts” continually make ignorant claims such as “equations can’t be racist,” and statements like the following quote from the above-linked article about how the Chicago Police Department has been using algorithms to do predictive policing.

Wernick denies that [the predictive policing] algorithm uses “any racial, neighborhood, or other such information” to assist in compiling the heat list [of potential repeat offenders].

Why is this ignorant? Because of the well-known fact that removing explicit racial features from data does not eliminate an algorithm’s ability to learn race. If racial features disproportionately correlate with crime (as they do in the US), then an algorithm which learns race is actually doing exactly what it is designed to do! One needs to be very thorough to say that an algorithm does not “use race” in its computations. Algorithms are not designed in a vacuum, but rather in conjunction with the designer’s analysis of their data. There are two points of failure here: the designer can unwittingly encode biases into the algorithm based on a biased exploration of the data, and the data itself can encode biases due to human decisions made to create it. Because of this, the burden of proof is (or should be!) on the practitioner to guarantee they are not violating discrimination law. Wernick should instead prove mathematically that the policing algorithm does not discriminate.
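Here is a small synthetic illustration of that point, with made-up features and numbers. We drop the race column entirely, yet because another feature (a hypothetical zip-code indicator) is strongly correlated with race, a plain logistic regression can reconstruct race from the “race-blind” data far better than chance.

```python
# Synthetic data only: a model trained without the race column can still learn race
# whenever other features act as proxies for it.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
race = rng.integers(0, 2, size=n)                          # protected attribute (held out)
zip_code = (race + (rng.random(n) < 0.1)) % 2              # proxy: agrees with race ~90% of the time
income = rng.normal(loc=50 - 15 * race, scale=10, size=n)  # another correlated feature

X = np.column_stack([zip_code, income])                    # note: race itself is NOT a feature
proxy_model = LogisticRegression().fit(X, race)
print("accuracy reconstructing race from 'race-blind' features:",
      proxy_model.score(X, race))                          # well above the 0.5 chance baseline
```

Any downstream model with access to these proxies can therefore behave as if it had the race column all along, which is why “we didn’t use race” is not, by itself, evidence of non-discrimination.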

While that viewpoint is idealistic, it’s a bit naive because there is no accepted definition of what it means for an algorithm to be fair. In fact, from a mathematical standpoint, there isn’t even a precise legal definition of what it means for any practice to be fair. In the US the prevailing legal theory is called disparate impact, which states that a practice can be considered illegal discrimination if it has a “disproportionately adverse” effect on members of a protected group. Here “disproportionate” is nominally defined by the 80% rule, but this is somehow not enforced as stated. As with many legal issues, laws are broad assertions that are challenged on a case-by-case basis. In the case of fairness, the legal decision usually hinges on whether an individual was treated unfairly, because the individual is the one who files the lawsuit. Our understanding of the law is cobbled together, essentially through anecdotes slanted by political agendas. A mathematician can’t make progress with that. We want the mathematical essence of fairness, not something that can be interpreted depending on the court majority.
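For concreteness, here is a sketch of the four-fifths (“80%”) rule on made-up applicant and selection counts: compute each group’s selection rate and flag the practice if any group’s rate falls below 80% of the highest group’s rate. This is only the rule of thumb as commonly stated, not a complete legal test.

```python
# The 80% rule as a ratio of selection rates, on hypothetical counts.
def disparate_impact_ratios(selected, applicants):
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group A": 400, "group B": 400}   # made-up numbers
selected   = {"group A": 200, "group B": 120}   # rates: 0.50 vs 0.30

for group, ratio in disparate_impact_ratios(selected, applicants).items():
    verdict = "potential disparate impact" if ratio < 0.8 else "passes the 80% rule"
    print(f"{group}: ratio {ratio:.2f} -> {verdict}")
```

Even this simple check raises questions the law leaves open: which groups to compare, over what population, and whether a statistical threshold on outcomes captures anything about fairness to an individual.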

The problem is exacerbated for data mining because the practitioners often demonstrate a poor understanding of statistics, the management doesn’t understand algorithms, and almost everyone is lulled into a false sense of security via abstraction (remember, “equations can’t be racist”). Experts in discrimination law aren’t trained to audit algorithms, and engineers aren’t trained in social science or law. The speed with which research becomes practice far outpaces the speed at which anyone can keep up. This is especially true at places like Google and Facebook, where teams of in-house mathematicians and algorithm designers bypass the delay between academia and industry.

And perhaps the worst part is that even the world’s best mathematicians and computer scientists don’t know how to interpret the output of many popular learning algorithms. This isn’t just a problem of stupid people not listening to smart people; it’s that everyone is “stupid.” A more politically correct way to say it: transparency in machine learning is a wide open problem. Take, for example, deep learning. A far-removed adaptation of neuroscience to data mining, deep learning has become the flagship technique spearheading modern advances in image tagging, speech recognition, and other classification problems.

A typical example of how a deep neural network learns to tag images. Image source: http://engineering.flipboard.com/2015/05/scaling-convnets/

The picture above shows how low-level “features” (which essentially boil down to simple numerical combinations of pixel values) are combined in a “neural network” into more complicated, image-like structures. The claim that these features represent natural concepts like “cat” and “horse” has fueled public attention on deep learning for years. But looking at the above, is there any reasonable way to say whether these are encoding “discriminatory information”? Not only is this an open question, but we don’t even know what kinds of problems deep learning can solve! How can we understand to what extent neural networks can encode discrimination if we don’t have a deep understanding of why a neural network is good at what it does?

What makes this worse is that there are only about ten people in the world who understand the practical aspects of deep learning well enough to achieve record-setting results with it. This means they spend a ton of time tinkering with a model to make it domain-specific, and nobody really knows whether the subtle differences between the top models correspond to genuine advances, slight overfitting, or luck. Who is to say whether the fiasco with Google tagging images of black people as apes was caused by the data, by the deep learning algorithm, or by some obscure tweak made by the designer? I doubt even the designer could tell you with any certainty.

Opacity and a lack of interpretability are the rule more than the exception in machine learning. Celebrated techniques like Support Vector Machines, Boosting, and the recently popular “tensor methods” are all highly opaque. This means that even if we knew what fairness meant, it would still be a challenge (though one we’d be well suited for) to modify existing algorithms to become fair. But with recent success stories in theoretical computer science connecting security, trust, and privacy, computer scientists have started to take up the call of nailing down what fairness means, and how to measure and enforce fairness in algorithms. There is now a yearly workshop called Fairness, Accountability, and Transparency in Machine Learning (FAT-ML, an awesome acronym), and some famous theory researchers are starting to get involved, as are social scientists and legal experts. Full disclosure: two days ago I gave a talk as part of this workshop on modifications to AdaBoost that seem to make it more fair. More on that in a future post.

From our perspective, we the computer scientists and mathematicians, the central obstacle is still that we don’t have a good definition of fairness.

In the next post I want to get a bit more technical. I’ll describe the parts of the fairness literature I like (which will be biased), I’ll hypothesize about the tension between statistical fairness and individual fairness, and I’ll entertain ideas on how someone designing a controversial algorithm (such as a predictive policing algorithm) could maintain transparency and accountability over its discriminatory impact. In subsequent posts I want to explain in more detail why it seems so difficult to come up with a useful definition of fairness, and to describe some of the ideas I and my coauthors have worked on.

Until then!
