Ten Rules of Technology

Nothing is more practical than rules to guide thinking. In this post I share the ten rules that I use to think about technology and society. I use them as tools to filter the hype that surrounds technology and to get to the critical questions of who benefits, who loses, and what needs to be done to secure a more equitable society. I previously tweeted a version of these rules, and this post responds to requests to expand on them and to provide some useful links.
 

Rule 1. Technologies Are Inherently Political (Winner 1980). Langdon Winner famously argued that artifacts have politics – that some technologies are “inherently political” due to inflexible features of their structure and operation. Other technologies, he argues, are political “by design”: open to flexible shaping that affects their possibilities and likely consequences. Winner gave the example of the low bridges over the parkways to Long Island, which he claimed were designed specifically to keep out the buses used by working-class and black people. My own particular bugbear is how the design of airline seat widths and comfort levels reflects and reproduces class privilege and advantage. The truth is that the history of technology is littered with examples of technology being mobilised to privilege whiteness, to serve class interests, and to effect male domination in the workplace (Cockburn 1983; Hicks 2018).

Rule 2. Technology is neither good nor bad, nor is it neutral (Melvin Kranzberg 1986). Rule 2 is important because we are often told that “technology is just a tool” which, like the superpowers of comic-book heroes, can be used for good or evil. I call this the Superman Fallacy of Technology, because technology is never neutral. Technology is neither produced nor used in a social vacuum. It arrives already enmeshed in social and political relations, and it is no less neutral than the people or society that produce and use it. The design of the atom bomb in the Manhattan Project cannot usefully be understood as neutral or as free of power relations. The decisions about which technologies to invest in, and about which technologies are applied to which goals, are never neutral.

Rule 3. Technology is Socially Shaped (MacKenzie and Wajcman 1985). The Social Shaping of Technology was the coursebook I used for undergraduate studies in the 1990s, and it is still a great source of case studies. It succeeded in pushing back against technological determinism by demonstrating how technologies like the Jacquard loom, the light bulb, and the microchip were shaped by economic, political and military considerations. As a result of this analysis it became possible to see that technologies are inevitably ‘socially shaped’ by the social relations of the systems in which they are produced and consumed. To avoid simply replacing technological determinism with social determinism, a synthesis position is to argue that humans shape technologies and are in turn shaped by them.

Rule 4. Technology is a Process of Applying Knowledge to Goals. Technology is often defined as knowledge applied to goals. This raises the questions: “whose knowledge?”, “applied by whom?” and “to whose goals?”. Why are research and development efforts disproportionately focused on military and business technologies rather than on humanitarian and environmental technologies? I argued in a previous blog that technology (and international development) are usefully understood as processes of applying human agency and knowledge to solve human-defined problems. From this perspective any adequate analysis of technology is necessarily a socio-technical analysis, one that needs to consider who gets to participate, what interests are being served, and who benefits.

Rule 5. Technology Excludes. As I argued in a previous blog, the use of digital technologies always excludes, because access to technology is uneven. This is true at least wherever technology diffusion relies on the free market and the profit motive, because profit-driven market mechanisms cannot serve those without disposable wealth. 2.5 billion people have no phone. 3.5 billion people have no internet access. Even in relatively wealthy countries like the UK, which seem saturated with digital technology, 10% of the population do not use the internet. So when government services or civil society projects are made available via mobile or internet platforms, they exclude those with the least disposable income. My colleague Kevin Hernandez and I have written in greater depth about how uneven technology access amplifies exclusion and what needs to be done in order to Leave No One Behind in a Digital World. We also argue that technology access is not binary, because dramatic inequality of access exists even between those who are counted as ‘connected’ in official statistics, and we explain why this inequality is structural and will continue to increase.

Rule 6. Technology Can Only Amplify Existing Capacity and Intent (Toyama 2010). Kentaro Toyama argued that the use of technology can only amplify existing human capacity and intent, and that it cannot act as a substitute where little or none exists. This amplification thesis means, for example, that the capacity and intent of well-trained and motivated staff can be amplified by their use of computers, but that providing computers in places where staff lack digital literacy or motivation cannot produce the same outcomes. This rule makes it clear that identifying existing human capacity and intent is an important tactic for development initiatives, but that building human capacity and intent where it does not yet exist is absolutely central if we are interested in addressing the situation of the most disadvantaged and underprivileged.

Rule 7. Technology Reflects, Reproduces, and Amplifies Existing Inequalities. The compound effect of the above rules is that the use of digital technologies tends to amplify existing social inequalities and power differentials. If the market mechanism and the profit motive are the primary means by which technology is diffused, then the richest will secure greater ‘digital dividends’ than the poor, and existing (dis)advantage will be reproduced and amplified. It is not only wealth inequality that is amplified; gender and racial disadvantage can also be amplified by the use of technology. As we learnt from the Amazon recruitment AI, if machine learning is based on datasets that reflect historic patterns of patriarchal prejudice, then the artificial intelligence it produces will amplify that gender injustice. Computer geeks are familiar with the GIGO rule of Garbage In – Garbage Out. Computers are not intelligent; they are entirely dependent on their inputs. If you input sexist data they will inevitably produce sexist outputs. From this I derive the PIPO rule of Patriarchy In – Patriarchy Out.
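To make the GIGO/PIPO point concrete, here is a minimal, hypothetical sketch in Python. The data is invented for illustration (it is not Amazon’s system or data): a toy “model” trained on historical hiring records in which men were favoured simply learns to score equally qualified candidates differently by gender.

```python
# A toy illustration of Garbage In - Garbage Out / Patriarchy In - Patriarchy Out.
# The records below are invented: past decisions favoured men regardless of merit.
from collections import defaultdict

# Each record is (gender, qualified, hired).
history = [
    ("male", True, True), ("male", True, True), ("male", False, True),
    ("female", True, False), ("female", True, True), ("female", True, False),
]

# "Training": learn the historical hire rate for each gender.
counts = defaultdict(lambda: [0, 0])  # gender -> [number hired, total seen]
for gender, _qualified, hired in history:
    counts[gender][0] += int(hired)
    counts[gender][1] += 1

def score(gender: str) -> float:
    """The model's 'probability of being a good hire', learnt from biased history."""
    hired, total = counts[gender]
    return hired / total

# Two equally qualified candidates receive different scores purely because the
# training data encodes past discrimination.
print(f"male: {score('male'):.2f}  female: {score('female'):.2f}")  # male: 1.00  female: 0.33
```

The point of the sketch is that nothing “intelligent” is happening: the model faithfully reproduces whatever pattern the historical data contains, prejudice included.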

Rule 8. Algorithms are Human Agency and Interests, Encoded. Algorithms are produced by humans and so necessarily reflect people’s perceptions and intentions. Machine learning is based on data that reflects people’s historical patterns of behaviour and prejudice. We often mistake artificial intelligence and automated decision-making for neutral and objective processes, but algorithms in fact reflect human agency and intentions, and reproduce human failings and flaws. Cathy O’Neil (2016) and Virginia Eubanks (2018) have shown how algorithms structurally disadvantage poor people. Safiya Noble (2018) has shown how algorithms systematically disadvantage black people and, in her volume edited with Brendesha Tynes (Noble and Tynes 2016), illustrates how the politically coded ‘Intersectional Internet’ imposes intersectional disadvantage on black women in working-class communities. Although we imagine that search engine results and algorithmic recommendations are fact-based and neutral, Shoshana Zuboff (2019) and Nanjala Nyabola (2018) have documented how ‘Surveillance Capitalism’ and ‘Digital Democracy’ reflect specific economic and political projects and serve particular interests. In each case we see how the political agency of powerful groups is coded into the algorithms operating at the heart of our increasingly datafied societies.

Rule 9. Beware Technologies Followed By Verbs. Whenever technology is being blamed for something, you can be sure that the real villain is being let off the hook. Whenever technology appears in a media headline, we should expect that the public is being misdirected. We should be sceptical of headlines like “Artificial Intelligence will cause unemployment” or “Social Media causes political polarisation” and insist on asking who benefits, what interests are in play, and who is being left behind. Technology is never to blame because, as we know from the above rules, technology has no agency or intent of its own. This is why, whenever technology is being blamed, you can be sure that politics and power are being obscured. What interests are served by technology taking the rap? I am as guilty as everyone else (including in the rules above) of following technology with verbs. Hopefully, the discussion that follows each rule succeeds in de-centring the technology and focusing on the human interests that determine the use of technology in society.

Rule 10. It’s Not About The Technology, Stupid! The underlying problem that technology reflects, reproduces and amplifies is the patriarchal-white-supremacist-hetero-normative-capitalist-imperialist power interests that shape how technology is currently being used and abused. bell hooks (2000) teaches us how the intersection of these power interests structures our social institutions and gives rise to the unequal social norms and values that shape our unjust social relations. In societies characterised by inequalities of caste/class, gender and ‘race’, when technologies are diffused by markets in proportion to existing (dis)advantage, those inequalities are reflected, reproduced and amplified by the use of technology. The result is that existing (under)privilege is amplified and inequality gaps are widened. Far from reducing digital and social divides, the uncritical use of technologies is expanding them.

The question of what is to be done in order to avoid amplifying inequality through the use of technology will be the subject of a future blogpost.
 

References:

Cockburn, C. (1983) Brothers: Male Dominance and Technological Change, London, Pluto Press.

Eubanks, V. (2018) Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor, New York, St Martin’s Press.

Hicks, M. (2018) Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing, Cambridge MA, MIT Press.

hooks, b. (2000) Feminist Theory: from margin to centre. Boston, South End Press.

Kranzberg, M. (1986) Technology and History: Kranzberg’s Laws, Technology and Culture, Vol. 27 (3).

MacKenzie, D. and Wajcman, J. (eds) (1985) The Social Shaping of Technology, Buckingham, Open University Press.

Noble, S. (2018) Algorithms of Oppression: how search engines reinforce racism, New York, NYU Press.

Noble, S. and Tynes, B. (eds) (2016) The Intersectional Internet: race, sex, class and culture online, New York, Peter Lang.

Nyabola, N. (2018) Digital Democracy, Analogue Politics: how the internet is transforming politics in Kenya, London, Zed Books.

O’Neil, C. (2016) Weapons of Math Destruction: how big data increases inequality, London, Penguin.

Toyama, K. (2010) Can Technology End Poverty? Boston Review, November 2010.

Winner, L. (1980) Do Artifacts Have Politics? Daedalus, Vol. 109 (1).

Zuboff, S. (2019) The Age of Surveillance Capitalism, London, Profile Books.
