Algorithms can decide your grades, job prospects, and financial security, but are they fair?
By Kalervo Gulson, Teresa Swist, Claire Benn and Kirsty Kitto for The Conversation
Algorithms are becoming commonplace. They can determine job prospects, financial security and more. Their use can be controversial: consider robodebt, as the Australian government's flawed online compliance system has become known.
Algorithms are increasingly used to make decisions that have a lasting impact on our present and future lives. Some of the most significant impacts of algorithmic decision making are in education.
So what kind of decisions might algorithms involve? Some decisions involve which question a student must answer next in a test, as in the online version of NAPLAN. Some algorithms support human decision-making in universities, such as identifying students at risk of failing a subject. Others take the human out of the loop altogether, such as some forms of automated online exam supervision.
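The first kind of decision, choosing a student's next test question, can be sketched in a few lines. This is a deliberately simplified illustration: the step-up/step-down rule and the question bank below are invented for this example, and real adaptive tests such as NAPLAN Online use far more sophisticated item-response models.

```python
# Toy sketch of adaptive question selection: difficulty steps up
# after a correct answer and down after an incorrect one.
# The rule and the question bank are invented for illustration;
# they are not how any real testing system is implemented.

QUESTIONS = {
    1: "What is 2 + 3?",          # easiest
    2: "What is 12 x 11?",
    3: "Solve x^2 - 5x + 6 = 0",  # hardest
}

def next_difficulty(current: int, answered_correctly: bool) -> int:
    """Return the difficulty level of the next question."""
    if answered_correctly:
        return min(current + 1, max(QUESTIONS))  # harder, capped at top
    return max(current - 1, min(QUESTIONS))      # easier, floored at bottom

# A student at medium difficulty answers correctly, then incorrectly:
level = next_difficulty(2, answered_correctly=True)   # moves to 3
level = next_difficulty(level, answered_correctly=False)  # back to 2
print(QUESTIONS[level])
```

Even in this toy form, the point the article makes is visible: the algorithm, not a person, decides what each student sees next.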
How do algorithms work?
Despite their ubiquitous impact on our lives, it is often difficult to understand how algorithms work, why they were designed and why they are used. As algorithms become a key part of decision-making in education, and in many other aspects of our lives, it becomes important to know how they work and the kinds of trade-offs involved in decision-making using algorithms.
As part of research to explore these questions, an algorithm game was designed using participatory methodologies that involve various stakeholders in the research. The process becomes a form of collective experimentation that encourages new perspectives and ideas on a problem.
The algorithm game is based on the 2020 UK exams controversy. During the Covid-19 lockdowns, an algorithm was used to determine the grades of students wishing to attend university. The algorithm predicted grades for some students that were well below expectations. In the face of protests, the algorithm was eventually abandoned.
The interdisciplinary research team co-designed the UK Exam Algorithm Game over two workshops and several meetings this year. The workshops included students, data scientists, ethicists and social scientists. Such cross-disciplinary perspectives are key to understanding the range of social, ethical and technical implications of algorithms in education.
Algorithms make compromises, so transparency is needed
The UK example highlights key issues with the use of algorithms in society, including issues of transparency and bias in data.
The team designed the algorithm game to help people develop the tools to have more of a say in shaping the world that algorithms create. Algorithm "games" invite people to play with and discover the operating parameters of an algorithm. Examples include games that show people how algorithms are used in criminal sentencing, or that help predict fire risk in buildings.
The public is increasingly aware that algorithms, especially those used in forms of artificial intelligence, need to be understood due to growing issues of fairness. But while everyone may have a vernacular understanding of what is fair or unfair, when algorithms are used there are many trade-offs involved.
In the algorithm game, players are taken through a series of problems in which the solution to one fairness problem simply introduces a new one. For example, the British algorithm did not work well at predicting grades in schools where fewer students chose certain subjects, which was unfair to those students.
The solution was not to use the algorithm for these small cohorts, which were often at very privileged schools; those students instead received the grades predicted by their teachers. But teacher-predicted grades were mostly higher than the algorithm-generated grades received by students at large schools, which were most often public comprehensive schools. The decision was therefore fair to students in smaller schools but unfair to those in larger schools, whose grades were assigned by the algorithm.
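The cohort-size rule just described can be written as a toy model. To be clear, every number here is invented for illustration: the threshold, the grades and the function are not Ofqual's actual standardisation model, only a sketch of the trade-off the game explores.

```python
# Toy sketch of the UK cohort-size rule: small cohorts keep the
# teacher-predicted grade; large cohorts get the algorithm's grade.
# The threshold and grade values are invented for illustration and
# are not Ofqual's actual parameters.

SMALL_COHORT_THRESHOLD = 15  # hypothetical cut-off

def final_grade(teacher_grade: int, algorithm_grade: int,
                cohort_size: int) -> int:
    """Return the grade a student is awarded under the rule."""
    if cohort_size < SMALL_COHORT_THRESHOLD:
        return teacher_grade      # small class: teacher prediction stands
    return algorithm_grade        # large class: standardised by algorithm

# Two students with identical teacher predictions and identical
# algorithm outputs, differing only in class size:
small_school = final_grade(teacher_grade=85, algorithm_grade=78, cohort_size=8)
large_school = final_grade(teacher_grade=85, algorithm_grade=78, cohort_size=120)
print(small_school, large_school)  # 85 78
```

The two students are identical on paper, yet the student in the small (often private) class ends up with the higher grade. Fixing one unfairness created another, which is exactly the trade-off the game asks players to confront.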
The aim of the game was to show that a perfect result is not possible, and that neither humans nor algorithms can make a set of choices that is fair to everyone. This means we have to decide which values matter most when we use algorithms.
The public must have a say in balancing the power of EdTech
While the algorithm game focuses on using an algorithm developed by a government, algorithms in education are usually introduced as part of educational technology. The EdTech industry is growing rapidly in Australia. Companies seek to dominate all stages of education: enrollment, learning design, learning experience and lifelong learning.
Along with these developments, Covid-19 has accelerated the use of algorithmic decision making in education and beyond.
While these innovations open up incredible possibilities, algorithms also bring with them a set of challenges we must face as a society. Examples like the UK exam algorithm show us how these algorithms work and the kinds of decisions that must be made when designing them. We are then forced to answer deep questions about the values we will choose to prioritise and the research roadmap we will pursue.
Our choices will shape our future and that of generations to come.
(Gulson, Swist and Kitto are associated with the University of Sydney; Benn is associated with the Australian National University)