Human decision makers are increasingly replaced by self-learning computer algorithms in economic decision making. These algorithms use personal data (ranging from one’s post code to medical history and Facebook friends) to score and rank people. Such rankings, in turn, can determine who gets offered a job, who gets offered a loan, and under what conditions people can insure themselves against illness. Algorithms hold the promise of overcoming human bias and making smarter, more informed decisions. But can we really leave a large number of economic decisions to algorithms? How can we make sure they make good decisions? And how do we ensure they make fair decisions?
In this seminar, we investigate some novel ethical questions raised by algorithms. Our aim is to develop a framework for thinking through the merits and dangers of using algorithms in decision making, and for deciding how the use of algorithms should be governed. We start by developing a model of human decision making, differentiating between the phases of information gathering, belief formation, and judgement. In a second step, we investigate how self-learning computer algorithms function, and compare our model of human decision making to the way algorithms process information. We examine in detail how scoring algorithms in finance, health care, and human resources already inform critical decisions. This allows us to weed out ethical concerns based on misunderstandings of how algorithms work, and to identify genuinely new problems raised by algorithmic decision making, in contrast to problems raised by any kind of decision maker.
Summer 2017, with Carsten Jung and Herman Veluwenkamp
Philosophy and Economics Programme, University of Bayreuth