Book Review: Code Dependent – Living in the Shadow of AI

Having grown up in Mumbai and now based in the UK, Madhumita Murgia is the Artificial Intelligence Editor at the Financial Times in London. Her academic qualifications include an MA in Journalism from New York University, and before joining the FT she worked at leading publications including Wired magazine, The Telegraph and The Washington Post. At the FT, Murgia leads coverage of artificial intelligence and writes about data, surveillance and policy. So it should come as no surprise that her first book – Code Dependent: Living in the Shadow of AI – deals with artificial intelligence, the ways in which it affects our lives and its possible implications for the future. Written with clarity and precision, and backed by the author’s deep knowledge of – and familiarity with – the subject, it is a brilliant read and has been shortlisted for the 2024 Women’s Prize for Non-Fiction.

The book’s genesis lies in the author’s interest in the large-scale data mining that major tech companies undertake and benefit from. This started with a story she wrote for Wired, which she says ‘revealed a multi-billion-pound industry of companies that collect, package and sell detailed profiles, based on our online and offline behaviours.’ As she dug deeper, the framework of this data economy became clearer, along with the realization that our lives are being converted into data packages and sold to Big Tech for its benefit. ‘This glimpse into the nascent world of data scraping sowed in me a seed of fascination about all the data we were generating by simply living in the modern world – and what was being done with it,’ says Murgia, adding that she has since been chronicling the fortunes of companies like Google, Meta and Amazon, which owe a significant part of their success to having mastered the art of mining data and using it to peddle their wares across the globe.

The next big step in the data mining story, says Murgia, is artificial intelligence (AI), which has gone mainstream in recent years and is ideally suited to finding useful patterns in large sets of data gleaned from millions of individuals. With the proliferation of smartphones and the widespread use of hundreds of apps across the world, more data on humans – every little thing about our lives – is available to companies than ever before; there is ample computing power to process that data and find behavioural patterns in it; and Big Tech has seized the opportunity to use machine learning to its benefit. With the rise of AI, the consumer is the product, and there seems to be no going back.

With the rise of generative AI, which Murgia says is built on the bedrock of human creativity – our ability to think, write, create art and music – there has been a significant change in our relationship with machines, which now have the power to manipulate our emotions and even affect our thought patterns and behaviour. ‘I have seen AI insidiously enter our lives over the past decade and when I set out to write this book, I wanted to find real-world encounters with AI that clearly showed the consequences of our dependence on automated systems,’ says Murgia. She notes that technologies like ChatGPT already affect children’s education and the development of their creativity, as well as healthcare, policing, public welfare and warfare. This, she says, has a ripple effect on society at large and is altering the very experience of being human. AI-led tech has the potential to affect every individual on the planet in the foreseeable future, and the author believes this may be a less than ideal situation, since the power to use AI at scale is concentrated in the hands of a few private companies, ‘who hold all the cards.’ And while it’s easy for the end consumer to get excited by ChatGPT and similar pieces of tech, Murgia has a word of caution for those who are wont to get carried away on a wave of unbridled optimism. While she admits that machine learning models can use large sets of data to make statistically relevant connections that may be invisible to most humans, Murgia says we still need to be careful, because AI reasoning can be ‘opaque’ and ‘non-intuitive’ and perhaps not fully understood even by those who created such AI/ML models.

Murgia has split the book into chapters on how AI influences every sphere of our lives – our work, our health and our bodies, our rights, our identity, our freedom, our society and our future. In each of these, she has found real-world stories that will resonate with readers, since the people she writes about could well be any of us and their predicaments could well be our own, if not now then someday in the near future. And so there are poorly paid IT workers – many of them based in countries that are not very well off – who remain invisible for the most part, but who actually perform the arduous tasks (essentially data labelling at immense scale, with exacting demands for accuracy) needed to train AI software for advanced applications such as navigation, social media, e-commerce and augmented reality. It is repetitive work that most would find terribly ‘boring’, and yet it needs to be done in order to make AI work the way we expect it to. And it is only made possible by the contributions of hundreds of thousands of poorly paid workers around the world, who often work in bleak conditions and earn barely enough to get by, while Big Tech reaps the rich rewards of their labour.

Then there are instances of AI being used to digitally manipulate images to create disturbing content – still images and/or video, often referred to as deepfakes – which has the potential to cause untold harm to the person whose face has been used to create it. While the algorithms behind deepfakes are genuinely useful for professional film studios, they are now being used by criminals to create inappropriate content, with which they can threaten and intimidate people and extort money from them. And the technology is still so new that most countries don’t have legal frameworks that can adequately deal with those who misuse it. In a similar vein, there is AI-assisted misuse of biometrics, especially facial-recognition tech, resulting in identity theft and the frightening consequences it often entails. There are also AI-assisted apps that perform tasks that would earlier have been the exclusive preserve of a trained doctor (for example, studying test reports to identify a medical issue). Sometimes these apps work like magic; at other times they can be less than ideal, owing to inadequate, incorrect or faulty data sets used to train the AI model. And it is just a few private companies that collect valuable data from patients, use it to train AI, and then use that AI to maximise their profits. Should this really be happening? Nobody seems to know for sure.

Murgia identifies similar issues with AI-assisted, algorithm-driven policing in some countries, which sometimes creates serious ongoing legal problems for those who may only have committed a minor, one-time offence, or perhaps haven’t committed a crime at all – all because of the way a piece of code has been written, which in turn drives an AI model that doesn’t know any better. The author also notes the wide variety of other ways in which AI – ChatGPT and other similar models – gets misused, sometimes unknowingly and without malicious intent, but with potentially disastrous consequences all the same: AI being used as a therapist, AI being used to obtain medical advice, AI being used to get legal advice, and so on – often without the realization that generative AI can fabricate things and spout utter nonsense. Like the humans that created it, AI can lie, be delusional and hallucinate, and when asked a question it can at times come back with answers that are not grounded in reality.

Murgia doesn’t mean to suggest that all is doom and gloom, and she acknowledges the power of AI to do real good for humanity, but only if it is properly harnessed. In that context, given that Silicon Valley doesn’t exactly function on the tenet of responsible innovation and chooses, instead, to ‘move fast and break things,’ an out-of-control AI taking over decision-making from human beings could well have serious consequences. While taking cognizance of AI’s potential to augment human intelligence and solve complex problems that may earlier have been impossible to contend with, Murgia ends the book on a sombre note, underlining that AI still has a long, long way to go before (if at all) it can become fully worthy of the trust that humanity seems all too eager to place in it.

A deeply researched, very well-written book, Code Dependent is an intriguing read for anyone who wants to gain a better understanding of AI and its implications for all of humankind.

Code Dependent: Living in the Shadow of AI
Author: Madhumita Murgia
Publisher: Pan Macmillan India
Format: Hardcover / Paperback / Kindle
Number of pages: 320 / 336 / 338
Price: Rs 1,992 / Rs 599 / Rs 559
Available on Amazon