Aleco Kastanos

Responsible AI: Responsible to Whom?

How the profit motive stages a theatre of ethics washing in Artificial Intelligence


There is a burgeoning wave of start-ups and tech projects that can be loosely described as responsible AI. Without fail, each of these new projects promises to deliver fairer and more intelligent systems without the imposition of ethics policing. An alluring prospect, no doubt, but how credible are these claims? By examining the economic incentives at work, we find that corporate slogans about “diverse multi-stakeholder data policy” serve to dazzle without providing the structural change needed for an equitable outcome. A progressive critique of the current state of AI must recognise the theatre of ethics being staged. This piece explores the inherent contradictions in the two primary models for responsible AI research and development, namely internal ethics teams within larger corporations and independent responsible AI companies.


Although I began this piece by discussing start-ups, it is large companies with huge profit margins (such as Alphabet, Microsoft, and PwC) that dominate the narrative and research in this space. Typically, specialist teams are set up within these firms with a mandate to research and build products that facilitate the ethical development of AI. These responsible AI teams are often led by highly influential individuals with large online followings. By co-opting these superstars of AI, large companies attempt to construct a hegemonic discourse that positions them, the developers of AI technology, as the only agents capable of ensuring that development is done equitably and fairly. This creates a conflict of interest between the production and regulatory functions of these companies, which in and of itself warrants a healthy degree of suspicion. Nevertheless, in the interest of steel-manning my opposition, I am willing to grant the assertion that in a market economy it is in the interest of these companies to allow a degree of self-regulation to facilitate fairer outcomes. After all, one might suggest that if a company’s competition produces AI products and research with social impact in mind, not responding in kind would run the risk of falling out of favour with customers. Is this a conclusive demonstration that the profit motive ensures that responsible AI is taken seriously?


Not quite.


The function of private sector companies is to increase shareholder value, not to guarantee an equitable or ethical outcome (Blakeley, 2019). The moment a research finding or project from a company’s responsible AI division recommends an action that threatens to diminish shareholder value, a contradiction emerges. No matter how many vague tenets or “don’t be evil” style slogans are splashed across the office space, when the fundamental raison d'être is threatened by ethical concerns, shareholder value trumps all else. Since all competing tech companies operate under the same capitalist mode of production, there is no substantial threat that their competitors might provide a radically more responsible alternative for customers. After all, they are subject to the same economic forces. In this regard, big tech and FinTech companies more closely resemble a cartel with a shared interest in maintaining their oligopolistic status quo than competitors posing an existential threat to one another. The result is that responsible AI initiatives have little to no backing the instant their work threatens shareholder value. The argument from market competition does not prevail.


Recent examples at Google, such as the firing of two prominent AI researchers, Timnit Gebru and Margaret Mitchell, for research that portrayed Google in a less than flattering light, and the dismissal of Blake Lemoine, the Responsible AI engineer who believes that the dialogue agent LaMDA is sentient, serve as cautionary case studies. Whether or not you agree with the protagonists in these examples, Google’s response when responsible AI work looks like it might affect shareholder perceptions speaks to the limits of internal ethics teams.


If these intrinsic contradictions cannot be overcome, an alternative proposal might be to shift the responsibility of developing ethics for AI to a separate company. The external company would be tasked with providing an ethical rating score based on the nature of AI research and development in technology companies. If regulations were passed that required technology companies to have the “responsible AI” stamp of approval, then shareholder value would be more likely to align with responsible AI development. After all, shareholder value declines if the state shuts down the company or issues fines for breaching regulations. Although it would require a leap of faith to envision a future where ethical algorithmic assessments are as ubiquitous as the health rating on your local restaurant, such ratings have proved highly effective in the food industry.


Have we found a hint of optimism in the wind? Do we simply need ethics ratings from independent companies to set us on the path to a more equitable future? On summer evenings, as I mulled over the practicalities of how one might provide robust assessments without direct access to the intellectual property, it certainly felt like it. But as autumn makes itself unambiguously present, I am reminded of Noam Chomsky’s famous quote from his exchange with Andrew Marr on the BBC:


“I don’t say you’re self-censoring. I’m sure that you believe everything that you’re saying. But what I am saying is that if you believed something different, you wouldn’t be sitting where you’re sitting.”


This is the unfortunate truth about building an authority responsible for assessing the fairness and ethics of AI: the organisations involved are some of the most powerful in the world. Given our neoliberal context, I can only see two ways this plays out.

The first is that tech companies are allowed to select a privately owned responsible AI rating company to provide their assessment. In this scenario, big tech institutions are the arbiters of which responsible AI rating companies graduate out of adolescence and become an industry force (some might say, the Moody’s of their world). However, there is no incentive for technology companies to promote any organisation that upholds ethical standards in excess of the minimum legal requirement. Any responsible AI rating agency that poses a threat to shareholder value will simply be left by the wayside and fade into irrelevance.

The second option is that the state is tasked with conducting the audits. Under the current political system, where powerful technology companies exert coercive lobbying power, I am not optimistic that this route will prove any more effective than regulation of the financial services industry proved to be in 2008. Without eliminating the influence that neoliberalism allows economic capital to exert on the state, regulations requiring responsible AI are more likely to be watered down and made a target of political attack than to act as a compelling force for the ethical development of AI.


This sobering recognition of the desperate state of responsible AI is consistent with my personal experience working in AI. I’ve watched as internal bias and ethics reviews fail to result in tangible action. I’ve been on the receiving end of having my optimistic “do-good” brief twisted and weaponised against colleagues in precarious working conditions. A utopian might suggest that I have been unlucky - or perhaps in the wrong culture - an exception to the rule. Instead, these experiences are better understood in terms of the structural contradictions at play.


Ultimately, experience and theory indicate that both the internal and external models for responsible AI development project a naivety about the incentives of privately owned capital. Under an analysis that foregrounds the current economic system, the state of responsible AI appears to have been designed to help us sleep at night rather than to solve society’s most pressing challenges. Without addressing the inherent contradiction brought about by the supreme objective of maximising shareholder value, private companies cannot be expected to prioritise people over profit.





References


Blakeley, G. (2019) Stolen: How to save the world from financialisation. Repeater.


The Guardian (2020) ‘More than 1,200 Google workers condemn firing of AI scientist Timnit Gebru’, The Guardian, 4 December [Online]. Available at https://www.theguardian.com/technology/2020/dec/04/timnit-gebru-google-ai-fired-diversity-ethics [Accessed 17 Sept 2022].


The Guardian (2021) ‘Google to change research process after uproar over scientists’ firing’, The Guardian, 26 February [Online]. Available at https://www.theguardian.com/technology/2021/feb/26/google-timnit-gebru-margaret-mitchell-ai-research [Accessed 17 Sept 2022].


The Washington Post (2022) ‘The Google engineer who thinks the company’s AI has come to life’, The Washington Post, 11 June [Online]. Available at https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/ [Accessed 17 Sept 2022].


