Artificial scapegoat: Why algorithms can't be blamed for political or corporate decisions

The laying of ethical responsibility on algorithms and machine learning for decisions made by capitalist and state actors should be resisted.

Recently, the social media site Twitter saw a spate of suspensions that seemingly contradicted its own Terms of Service (TOS). Many Kashmiri, Dalit, journalist, and progressive accounts were hit with tweet deletions and account suspensions, some permanent, without much in the way of explanation. This time, the actions were not limited to the traditionally targeted communities and the marginalised, whom social media platforms have always treated with a cavalierness bordering on tone-deaf bigotry; they affected the privileged and the powerful as well.

One case was that of senior Supreme Court lawyer Sanjay Hegde, whose account was suspended twice in two days, the second time permanently, over seemingly innocent acts. The first suspension was apparently triggered by his cover image, the famous photograph of August Landmesser, a German worker who refused to perform the Nazi salute; the second by his quote-tweeting a poem by Gorakh Pandey, which had been tweeted by activist Kavita Krishnan. In neither case did Mr Hegde violate Twitter's TOS; rather, he was taking a stance against hate speech and oppression.

Another case was the suspension of academic and feminist Dyuti Sudipta's account: no offending tweet was pointed out, only an email saying she had been permanently suspended. Sudipta's account was restored within a day, again with no explanation of what had triggered Twitter's action in the first place.

On November 7, Twitter seemingly responded to the barrage of criticism it was facing over such arbitrary acts, protesting that it is fair and impartial. But no explanation was provided for the acts themselves. Considering the political importance given to social media these days, and the virulence of the reactionary hate speech that abounds on Twitter in particular, these opaque suspensions and deletions caused alarm, and accusations of political meddling flew thick. The umbrage escalated: several academics and activists protested, ran hashtags accusing Twitter of bigotry, and in some cases migrated to the site Mastodon, which, unlike Twitter, is community owned and uses user-based moderation.

These incidents prompted a particular line of commentary, in which several commenters, including the suspended lawyer himself, opined that this could all have been due to faulty algorithms making some error. Perhaps the first suspension occurred because some machine learning component incorrectly identified the image as offensive, or perhaps some word in the second tweet containing the poem tripped a Natural Language Processing (NLP) mechanism. There was vague commentary that the designers of these algorithms should be "sensitised".

At first, there seemed to be an eagerness among civil society actors to believe all this, in the hope that Twitter would recognise the "machine error" and quickly, without fuss, restore the accounts. With more than two weeks having passed since Hegde's suspension, this now looks a very unlikely explanation. But this line of reasoning is not uncommon. In jurisdictions where machine learning powered Facial Recognition Technology (FRT) is used, machine error becomes a path for officials to shift responsibility onto error-prone machines. Similarly, machine learning algorithms have been blamed for bias in resume selections and rejections for jobs, and of course, machines are frequently blamed when public distribution systems fail. What is not often interrogated is the corporate or political decision to deploy certain technologies in certain places.

Interestingly, much of the scholarly work that critiques Artificial Intelligence (AI) has also internalised this way of looking at it. The dominant mode of critiquing AI systems and framing AI governance is called FAT (Fairness, Accountability, Transparency). The idea is that AI (especially machine learning) systems should be made fair, so that they do not discriminate against the marginalised or produce bigoted decisions; that they should be transparent, i.e., have interpretable inner workings; and that they should be accountable for the decisions taken.

Unfortunately, this model of tech governance is wanting. For one, bias can never be completely weeded out of machine learning, because the data these models are trained on is both systemically and historically biased, being obtained from human sources that are themselves structurally biased and oppressive. Often, accuracy trades off against equity. And on a larger note, there are different types of fairness, which may compete with one another. As for transparency, the very premise of the idea is flawed, because certain machine learning systems cannot be made transparent or interpretable. They are inherently opaque to humans.
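The competition between types of fairness is not a rhetorical flourish; it can be shown with arithmetic. The sketch below (using hypothetical, illustrative numbers, not data from any real system) models two groups with different base rates of the predicted outcome. A classifier that equalises selection rates across the groups ("demographic parity") ends up with unequal false-positive rates ("error-rate parity"), so satisfying one fairness notion violates the other:

```python
# Illustrative sketch with made-up confusion-matrix counts for two groups.
# Group A has a higher base rate of true positives than Group B. If the
# classifier selects the same fraction of each group (demographic parity),
# the false-positive rates diverge -- the two fairness criteria conflict.

def rates(tp, fp, tn, fn):
    """Return (selection rate, false-positive rate) from confusion counts."""
    total = tp + fp + tn + fn
    selection_rate = (tp + fp) / total
    false_positive_rate = fp / (fp + tn)
    return selection_rate, false_positive_rate

# Group A: 40 of 100 people are true positives (hypothetical numbers)
sel_a, fpr_a = rates(tp=30, fp=20, tn=40, fn=10)
# Group B: 10 of 100 are true positives; same overall selection rate enforced
sel_b, fpr_b = rates(tp=8, fp=42, tn=48, fn=2)

print(f"Group A: selection rate {sel_a:.2f}, false-positive rate {fpr_a:.2f}")
print(f"Group B: selection rate {sel_b:.2f}, false-positive rate {fpr_b:.2f}")
# Both groups are selected at the same rate (0.50), yet Group B suffers a
# markedly higher false-positive rate (0.47 vs 0.33).
```

The point of the toy example is that the conflict is structural, built into the arithmetic whenever base rates differ, and no amount of "sensitising" the designers dissolves it.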

Focusing on the functioning of the system while divorcing it from the social relations around it, I argue, is inherently the wrong way to look at this problem, as all machines have inherent flaws. But where to use those machines is always a human and political decision, often dictated by structural forces like capital. Ultimately, no machine learning algorithm, no matter how flawed, "rejects a resume." It is a human employee of a company, operating under a cost-cutting impulse, who does. No algorithm "suspends an account" or "orders a drone strike". The decision is always human, corporate or political (often both mean the same thing). Hence technologies like facial recognition for surveillance, which have deep potential to violate constitutionally guaranteed rights, should be opposed, and not merely because they may be (and often are) biased. Even if they were not biased (a physical impossibility), they should not exist, because their use will always lead to an erasure of the responsibility of the carceral state. The law cannot and should not play dice.

It must be stressed: beyond a point, AI cannot be made fair, accountable, or transparent, and the corporate focus on that strand of research in AI policy is a scapegoat. The laying of ethical responsibility on algorithms and machine learning for decisions made by capitalist and state actors should be resisted. What can be done is to refuse to let certain technologies exist without adequate human supervision, and to do away with the very existence of AI technology where it threatens lawful and constitutional values, as in cases like the Facial Recognition Technology mentioned above, predictive policing (analytical techniques used to target and observe certain individuals or communities), or automated weaponry (weapons with minimal human control, AI-operated drones, etc.)

In the Twitter case, as in all cases, the error must by definition be a human one, for machines do not bear moral responsibility for decisions. Hegde's account (at the time of writing) remains suspended.

Dr Anupam Guha is an assistant professor, working on AI and Policy, at the Centre for Policy Studies at IIT Bombay.

Views expressed are the author’s own.

The News Minute