Asking the Small, Human Questions
A review of 'Code Dependent: Living in the Shadow of AI' by Madhumita Murgia.

Conventionally speaking, a generative artificial intelligence (GenAI) system has three components — input, model, and output. Madhumita Murgia, in her acclaimed book Code Dependent: Living in the Shadow of AI (2024), puts humans first in all three — and she does so with sensitivity and the urgency of a journalistic enquiry.
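To make that framing concrete, here is a minimal, purely illustrative sketch in Python of the input-model-output pipeline. Every name in it is hypothetical, invented for this review rather than drawn from the book; the comments map each stage onto the human roles Murgia describes.

```python
# A purely illustrative toy, not from the book: the conventional
# input -> model -> output framing that the review maps onto.

from dataclasses import dataclass


@dataclass
class LabelledExample:
    """Input stage: one human-annotated training record.

    In Murgia's account, producing records like this is the work
    of low-wage data labourers."""
    content: str
    label: str


def train_toy_model(examples: list[LabelledExample]) -> dict[str, str]:
    """Model stage: here, 'training' just memorises labels.

    Real machine-learning models instead make statistical
    connections that are often invisible to humans."""
    return {ex.content: ex.label for ex in examples}


def predict(model: dict[str, str], content: str) -> str:
    """Output stage: predictions land back on people, including
    the communities whose data and labour built the model."""
    return model.get(content, "unknown")


if __name__ == "__main__":
    data = [
        LabelledExample("stop sign", "traffic control"),
        LabelledExample("pedestrian", "hazard"),
    ]
    model = train_toy_model(data)
    print(predict(model, "pedestrian"))  # -> hazard
```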
Populating the input side of Murgia’s account are low-wage data labourers, such as those in East Africa to whom AI training work is ‘outsourced’. These workers help train AI software built by global corporations: tagging images for driverless-car systems such as Tesla’s, or labelling and categorising text, including toxic and graphic content for ChatGPT, among other tasks.
“Their [low-wage data labourers’] work couches a badly kept secret about so-called artificial intelligence systems — that the technology does not ‘learn’ independently, and it needs humans, millions of them, to power it. Data workers are the invaluable human links in the global AI supply chain.”
Working on the AI models are (often disillusioned) engineers, researchers, and scientists who, for example, developed facial-recognition tools for photo analysis, for navigation in navy submarines, or for spotting the physical signs of grave illnesses, only for those technologies to be repurposed for AI-enabled state surveillance.
“The power of machine-learning models is that they make statistical connections that are often invisible to humans. Their decisions and methods are not determined by the people who build them, which is why they are described as black boxes. This makes them supposedly far more objective than their human counterparts, but their reasoning can be opaque and non-intuitive — even to their creators.”
On the output end are impoverished communities, women, and people of colour who are systematically marginalised: data collected from and about them, along with their labour, feeds the input processes that develop models which remain severely biased against them, and they are then excluded from the commercial gains and benefits of the technologies they helped build.
“Because of how the AI supply chain is broken down into bite-sized chunks, many of these workers have little, if any, visibility of the shape or commercial value of the final product they are helping to build.”
“Aside from technically opaque, people whose lives are impacted by the automated systems are rarely aware of it… they are usually locked out of the system’s workings by institutions and companies.”
Murgia’s book captures the predicament of these people — asking “small, human questions” to present, with care and nuance, data colonialism at work today.
“The telltale mark of data ‘colonialism’, the exploitation of vulnerable communities by powerful tech companies, is in how the impacts of an algorithmic system are distributed. Advantages conferred by the technology… are often enjoyed by the majority — whether through race, geography or sex.”
“On the flipside, toxic consequences of the systems are suffered most keenly by those who are already victimized and marginalized in societies today.”
Murgia’s book, nominated for the 2024 Women’s Prize for Non-Fiction, “arrives not a moment too soon”, as Shoshana Zuboff, the author of The Age of Surveillance Capitalism, writes.
Though Murgia sees her book as a “drop in the ocean” of work on AI, algorithms, and workers’ rights, her recognition of the deepening co-dependence between humans and AI systems, and her humans-first approach, are invaluable for shaping discourse on AI ethics and AI literacy.
“Human beings, and the endless lines of code we live by, are co-dependent. Our blindness to how AI systems work means we can’t properly comprehend when they go wrong or inflict harm — particularly on vulnerable people. And conversely, without knowledge of our nature, ethical preferences, history and humanity, AI systems cannot truly help us all.”