We have already dealt (in the review of 10 December) with Google's dismissal of Timnit Gebru, who worked, by all accounts with great competence, on the ethics of artificial intelligence (AI). Katharine Schwab returns to the subject on Fast Company with a long article whose very title is intriguing: this is something bigger than Timnit alone, and bigger even than Google. Because what is at stake may seem abstruse but is in fact terribly concrete: if you consider that, more and more often, an algorithm decides who can apply for a job, a housing grant or a loan (not to mention surveillance and facial recognition systems, or China's social credit system, in practice a digital filing of good and bad citizens), you will understand that the "impartiality" of the algorithm itself is decisive.
Unfortunately, as much research has shown and as is easy to understand, algorithms very often reflect the preconceptions, prejudices, points of view and attitudes of those who develop them.
Some of the figures Schwab cites are enough to show that the starting point, that is, the pool of computer scientists and programmers, is anything but reassuring from this standpoint. In 2018, women made up only 22% of those working in artificial intelligence, a share that fell to 18% among the experts invited to present papers at AI conferences. As for those teaching the subject, 80% were male. Moving to ethnicity, neither at Google, nor at Facebook, nor at Microsoft does the combined percentage of Black, Hispanic and Latino employees reach 10% of the total. Even in the top 30 organizations dealing with ethical standards for AI, of 94 top positions only 3 are held by Black people and 24 by women. And Deborah Raji, who today works on corporate accountability in AI at Mozilla, recalls that in 2017, as a university student in Toronto, she attended a machine learning conference at which, out of 1,800 participants, only about a hundred were Black (one of them was Timnit Gebru herself).
Things are no better, indeed perhaps worse, on the funding front for research on ethics and technology. Most of that money, as shown among others by a 2020 study by Mohamed and Moustafa Abdalla, comes precisely from the Big Tech companies (we discussed them too, in the review of 15 December). Since they are the ones who can create problems, one might think it only right that they spend money trying to solve them. Too bad that, as Ali Alkhatib, a researcher at the Center for Applied Data Ethics at the University of San Francisco, points out, the tech giants throw all their weight behind engineering solutions to what are social problems (the failure to discuss technological solutions was one of Google's criticisms of Gebru's paper, the one that cost her her job). Or, to put it even more explicitly in the words of Emily Bender, co-author of that contested paper: there are people around the world who have suffered discrimination and marginalization for generations, and technology is adding layer upon layer on top. This is not some abstract long-term problem we are trying to solve. We are talking about people being harmed now.
Not surprisingly, Big Tech's attitude is the same as always: leave it to us, we will find a way to limit the damage (as with privacy violations, online hate and disinformation, and so on). But, Schwab writes, for many experts and activists the ultimate goal is not to sound the alarm when something in AI goes wrong, but to prevent discriminatory and biased systems from being built and released into the world in the first place. And given that, as Gebru says, at the moment there is really nothing to prevent any technology from being deployed in any scenario, and given the excessive power of Big Tech, here too the only solution seems to be adequate legislation, with enforcement tools sharp enough to hurt those who do not comply.
Schwab is convinced that something is moving. The now Democratic-majority Congress could produce a new version of the 2019 Algorithmic Accountability Act, giving the Federal Trade Commission (FTC) more teeth to bite those who misbehave. Others envision a control agency modeled on the Food and Drug Administration (FDA), which would set standards and fund independent research on ethics and AI by taxing companies in the industry. Even Gebru's dismissal and the controversy it raised could, from this point of view, prove to be an asset (and a boomerang for Google and its peers).
It is no coincidence that, a month after that dismissal, the first union within Alphabet/Google was born. In the past, internal revolts by Silicon Valley workers have proven effective against corporate cultures toxic with discrimination or sexual harassment, and in forcing companies to abandon contested collaborations with military apparatuses or authoritarian states. Gebru herself says she sees unionization as one of the few hopes for change. In short, something much bigger than Timnit could begin with Timnit (though not without resistance, as shown by Amazon's pressure against the birth of an internal union in Alabama, which even prompted Joe Biden to take the workers' side).
This article was originally published in the Press Review of the Digital Edition of Corriere della Sera.