We are grateful to sava saheli singh for helping us deepen our thinking around what technologies of leadership entail and for making connections between ideas that are important to us.
By: sava saheli singh
Technologies of leadership recognizes that technology, like leadership, is not neutral. We will explore leadership as we move away from the Information Age towards the Age of Intelligence. We consider what it means to lead in a time of constant change, technological opportunities and innovative possibilities, but also in a time of unprecedented surveillance, an often-unregulated technology sector, racist algorithms, and neocolonial practices in the AI sector and beyond. In this context, what do leaders need to be aware of? How might we need to rethink leadership in these times and through these times? How can we lead in ways that leverage possibilities of technology, while actively resisting the potential dangers?
When it comes to technology, we tend to talk about privacy, consent, and ethics, but what do these terms actually mean in practice? What is privacy? Whose privacy? What is consent? How do different understandings and enactments of consent change how meaningful it can be? Many technologies and applications claim to elicit consent from users, but if users must consent in order to use the application at all, is that really consent? What, really, are ethics? Whose ethics? Who has set the ethical boundaries, and for what purpose?
It's important to ask these questions about a technology or application *before* it is implemented, especially if you are in a leadership or decision-making position. You want to ensure that these technologies and programs are geared towards learning and safety for students, staff, and educators. A good way to approach this is to consult with them and include them in conversations about their needs and their ideas around technologies they think might be useful – educators are often already clued in to new technologies and applications. It's also important to understand how the technology works, what data it collects, who has access to that data, who can use the data in ways they deem fit (or profitable), how the technology was built, what practices and ideologies the company that made it seems to support or promote, and so on.
Education has become so corporatized and so beholden to a meritocracy that it stops being equitable, while pretending to be. Technology, more often than not, tends to reify and uphold this approach to education, which is extremely problematic. Capitalism and the technology that it nurtures move us further away from the liberatory possibilities of education. More and more, education is aimed at preparing students for jobs, to enter the workforce, to become cogs in the machine. The importance of subjects like literature, poetry, art, and music, and even of philosophy or art history, is diminished because they aren't seen as directly tied to the jobs needed to move the economy forward in the 21st century. This also means that education risks becoming less about knowledge, about understanding our world and our societies, and about how to make both better for everyone.
At the same time, educators aren’t paid enough and aren’t given enough support to do their jobs. In fact, many institutions are increasing class sizes, reducing the number of educators, staff and faculty, and expecting educators to teach online with little or no training. In this setting, teachers are crunched for time and resources, so it is easy to understand why they reach for generative AI tools like ChatGPT and Midjourney. This is how the application and use of AI becomes a labour issue. If the additional time and effort you are expected to put into your job is taking away from your personal time, you are obviously going to reach for the tools that you’ve been told will make the work easier and more efficient.
One of the interesting things I find in how people talk about AI, especially in terms of so-called “ethical” AI, is the focus on present or future harms. This includes problematic aspects such as the non-consensual use of people’s content to train AI tools, which raises questions of copyright, intellectual property, and plagiarism. These are important considerations, but this focus often obscures some of the other ways in which incredible harms from AI have already occurred.
Simply put, and without going into too much detail, for generative AI to work it needs to be fed, or “trained on,” vast amounts of data to help it make decisions. Much of this data is harvested or scraped directly from the internet, with all the problems that this presents. This data then has to be cleaned and labeled to give it context – a job usually done by humans. This is a huge and tedious process, and one we’ve all been involved in: if you have ever had to identify bicycles or crossings in a series of images, ironically in order to “prove” that you are not a robot, you have unwittingly contributed to labeling data.
In many cases, the humans who do this work are poorly paid workers from the Global South, sometimes in extremely dire situations, such as living in refugee camps after their homes, lands, and families have been taken from them. In this setting, they are given the “opportunity” to work in extremely exploitative conditions for a pittance, at a time when any kind of work or money is viewed as a potential way out of their dire circumstances. They take on what is referred to as “microwork,” cleaning and labeling small fragments of data, often without knowing who they are doing this for or how it will be used. To add to this, the data they are cleaning can include extremely harmful and traumatic content, such as child sex abuse material, racism and other forms of bigotry, and, in some documented cases, even imagery and text related to the conflict they escaped. These are people who have already lived through so much trauma, have lost so much, and are so vulnerable, only to be put in situations where they have to take on precarious and sometimes traumatic work with little or no mental and physical health support.
With so much of this process hidden behind the corporate myth of AI “automation”, how are we to know what is really happening? The Data Workers’ Inquiry is an excellent effort that brings data workers and community researchers together to examine and report on the conditions and experiences of data workers. These narratives bring to light some of the conditions that these people are forced to work under.
All of this is before even considering the ecological and social impacts of AI, which are immense and have long-lasting, negative consequences for our planet and, by extension, for us. From the consumption of power, water, and land to the displacement and disruption of communities, the cost of AI is almost unbearable to even think of.
So, is there actually such a thing as ethical AI? As it stands, and especially based on the examples above, the answer is no. At least not until we have taken stock of everything that has gone into making AI products and examined all the harms that have already been caused by their creation and implementation. For instance, we would have to go back to the data workers and pay them equitably, house them, and ensure they get the highest level of mental and other health supports. If everyone had adequate housing, access to food, and free healthcare, for instance, then the use of these tools might not be as contentious. But, as it stands right now, in the context of a highly unequal world that has no real desire to change, AI simply serves to exacerbate inequality and harm. We need to fix all of the ways in which we've harmed ourselves, each other, and the earth, and at the same time genuinely think through a way forward that does not bring harm to humans, plants, animals, water, land, communities – everyone.
We have let capitalism and so-called “tech bros” drive the narrative and have allowed ourselves to be abstracted away from each other to such an extent that we are comfortable using a tool that has destroyed the environment and incalculably harmed the most vulnerable amongst us. These harms are presented as collateral damage in the service of “progress”. Is this the kind of progress we want? Are we ok with this collateral damage being people’s lives and the very earth that we live on?
To me, the line is clear: I will not step over the bodies of adults and children who have been harmed and exploited; the lands that have been usurped and destroyed; the waters that have been polluted and depleted; and the animals and plants that have been killed, in order to let an energy-guzzling computer write an essay or an assignment prompt for me, or to generate an image built on plagiarizing countless others.
If and when we find ourselves in leadership or decision-making positions, it behooves us to be responsible to the people in our care, such as students, educators, and staff, and that care includes ensuring that we are not making these people, or ourselves, complicit in the harms that these tools have already wrought and continue to cause. My challenge to everyone reading this or listening to this podcast is to reflect on the questions below.
Reflection Questions
- How are you making truly ethical decisions about the technology you bring into people’s lives?
- How are you creating the space for resistance and rejection of these harms?
- How are you responding to community members who might want to opt out of these harmful technologies?
- How are you educating yourself and others about the past, present, and future dangers and harms of AI, education technology, and technology more broadly?
References
AI is Already Wreaking Havoc on Global Power Systems: https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/
Charting the Course: Navigating Climate Justice in the Digital Age: https://www.mediatechdemocracy.com/climatetechhoganlepagericher
The Exploited Labor Behind Artificial Intelligence: https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
The Human Cost of our AI-Driven Future: https://www.noemamag.com/the-human-cost-of-our-ai-driven-future/
The Staggering Ecological Impacts of Computation and the Cloud: https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/
Refugees help power machine learning advances at Microsoft, Facebook, and Amazon: Big tech relies on the victims of economic collapse: https://restofworld.org/2021/refugees-machine-learning-big-tech/
Additional Recommended Resources:
AI Countergovernance: https://www.midnightsunmag.ca/ai-countergovernance/
AI Snake Oil: https://press.princeton.edu/books/hardcover/9780691249131/ai-snake-oil
Atlas of AI: https://katecrawford.net/atlas
Crash Course AI: https://thecrashcourse.com/topic/ai/
Critical AI: https://read.dukeupress.edu/critical-ai/issue/1/1-2
Data Workers’ Inquiry Project: https://data-workers.org/about/
Data Workers’ Inquiry Video Channel: https://peertube.dair-institute.org/c/data.workers.inquiry/videos?s=1
Data Workers, In Their Own Voices: https://www.techpolicy.press/data-workers-in-their-own-voices/
Decolonizing Technologies: https://ail.angewandte.at/explore/decolonizing-technology/
Digital Literacy Center: https://dlsn.lled.educ.ubc.ca/wordpress/
Dinika, A. (2024). The Human Cost of our AI-Driven Future. Noema. https://www.noemamag.com/the-human-cost-of-our-ai-driven-future/
Estampa. (2024). Cartography of Generative AI. https://cartography-of-generative-ai.net/
Gilliard, C. & Rorabaugh, P. (2023). You’re Not Going to Like How Colleges Respond to ChatGPT. Slate. https://slate.com/technology/2023/02/chat-gpt-cheating-college-ai-detection.html
Government of Canada's Artificial Intelligence and Data Act: https://ised-isde.canada.ca/site/innovation-better-canada/en/artificial-intelligence-and-data-act
Gray, K. (2014). Race, gender, and deviance in Xbox Live: Theoretical perspectives from the virtual margins. Routledge.
Gray, K. L. (2020). Intersectional tech: Black users in digital gaming. LSU Press.
Hicks, M. T., Humphries, J., & Slater, J. (2024). ChatGPT is bullshit. Ethics and Information Technology, 26(2), 1-10. https://doi.org/10.1007/s10676-024-09775-5
Hogan, M. & Lepage-Richer, T. (2024). Extractive AI. Charting the Course: Navigating Climate Justice in the Digital Age. Centre for Media, Technology and Democracy. https://www.mediatechdemocracy.com/climatetechhoganlepagericher
Indigenous | Black Engineering Technology PhD Project: https://ibetphd.ca/
Jones, P. (2021). Refugees help power machine learning advances at Microsoft, Facebook, and Amazon: Big tech relies on the victims of economic collapse. Rest of World. https://restofworld.org/2021/refugees-machine-learning-big-tech/
Kindergarten to Industry Academy (k2i): https://lassonde.yorku.ca/about/our-values/kindergarten-to-industry-k2i-academy
Kerr, D. (2024). How Memphis became a battleground over Elon Musk’s xAI supercomputer. NPR. https://www.npr.org/2024/09/11/nx-s1-5088134/elon-musk-ai-xai-supercomputer-memphis-pollution
Kwet, M. (2024). Digital degrowth: Technology in the age of survival. Pluto Press. https://www.plutobooks.com/9780745349879/digital-degrowth/
Luccioni, S., Trevelin, B., & Mitchell, M. (2024). The Environmental Impacts of AI. Hugging Face Blog. https://huggingface.co/blog/sasha/ai-environment-primer
Marx, P. (2024). Generative AI is a climate disaster. Disconnect Blog. https://disconnect.blog/generative-ai-is-a-climate-disaster/
Maughan, T. (2015). The dystopian lake filled by the world’s tech lust. BBC Future. https://www.bbc.com/future/article/20150402-the-worst-place-on-earth
Merchant, B. (2024, June 28). The most powerful takedowns of generative AI, from those who know its impacts best. Blood in the Machine. https://www.bloodinthemachine.com/p/the-most-powerful-takedowns-of-generative
Monserrate, S. G. (2022). The Staggering Ecological Impacts of Computation and the Cloud. MIT Press Reader. https://thereader.mitpress.mit.edu/the-staggering-ecological-impacts-of-computation-and-the-cloud/
Nott, D. & Cambo, S. (2023). Our Labor Built AI. https://thenib.com/our-labor-built-ai/
O'Neil, C. (2017). Weapons of math destruction: How big data increases inequality and threatens democracy. Crown.
Pasek, A. (2023). Getting into Fights with Data Centres: Or, a Modest Proposal for Reframing the Climate Politics of ICT. Zine. https://emmlab.info/Resources_page/Data%20Center%20Fights_digital.pdf
Perrigo, B. (2023). Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic. Time. https://time.com/6247678/openai-chatgpt-kenya-workers/
Phirangee, K., & Foster, L. (2024). Decolonizing digital learning: equity through intentional course design. Distance Education, 45(3), 357-364.
Rand, R. & Hendrix, J. (2024). Data Workers, In Their Own Voices. Tech Policy Press. https://www.techpolicy.press/data-workers-in-their-own-voices/
Resources on refusing, rejecting, and rethinking GenAI: https://docs.google.com/document/d/1btDwVuGb0xvAWbM8hlEz0rkiWUZXi9a9esJb7FmthDU/edit?tab=t.0#heading=h.hz2vk0miafgt
Rowe, N. (2023). Underage Workers Are Training AI. Wired. https://www.wired.com/story/artificial-intelligence-data-labeling-children/
Rowe, N. (2023). ‘It’s destroyed me completely’: Kenyan moderators decry toll of training of AI models. The Guardian. https://www.theguardian.com/technology/2023/aug/02/ai-chatbot-training-human-toll-content-moderator-meta-openai
Saul, J., Nicoletti, L., Rai, S., Bass, D., King, I., & Duggan, J. (2024). AI is Already Wreaking Havoc on Global Power Systems. Bloomberg. https://www.bloomberg.com/graphics/2024-ai-data-centers-power-grids/
Selwyn, N. (2022). The future of AI and education: Some cautionary notes. European Journal of Education, 57(4), 620–631.
Screening Surveillance: https://www.screeningsurveillance.com/
The Data Fix Podcast with Mél Hogan: https://shows.acast.com/the-data-fix
Adams, R. The New Empire of AI: The Future of Global Inequality. Polity. https://www.politybooks.com/bookdetail?book_slug=the-new-empire-of-ai-the-future-of-global-inequality--9781509553099
Kanungo, A. (2023, July 18). The green dilemma: Can AI fulfil its potential without harming the environment?. Earth.Org. https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/
Unmasking AI: https://www.unmasking.ai/
Wemigwans, J. (2018). A digital bundle: Protecting and promoting Indigenous knowledge online. University of Regina Press.
Williams, A., Miceli, M., & Gebru, T. (2022). The Exploited Labor Behind Artificial Intelligence. Noema. https://www.noemamag.com/the-exploited-labor-behind-artificial-intelligence/
Williamson, B. (2023). Degenerative AI in Education. Code Acts in Education. https://codeactsineducation.wordpress.com/2023/06/30/degenerative-ai-in-education/
Panelist Bios
Dr. sava saheli singh (she/her) is an Assistant Professor of Digital Futures in Education with the Faculty of Education at York University. She is an interdisciplinary scholar and filmmaker working at the nexus of education, technology, surveillance, speculative futures, critical digital literacy, labour, abolition, and research creation, with a strong commitment to community-based public scholarship. Supported by funding from the Office of the Privacy Commissioner of Canada and the Social Sciences and Humanities Research Council of Canada, she conceptualized and co-produced the award-winning Screening Surveillance series of short, near-future fiction films. These films speculate about surveillance futures and the potentially harmful impacts of technologically mediated surveillance on everyday life, and are available online as a free educational resource. The films have been screened at film festivals, international conferences, workshops, global public events, and in classrooms across the world.
Lisa Cole is the Director of the k2i (kindergarten to industry) academy at the Lassonde School of Engineering, York University. She is a passionate educator and system leader in STEM (Science, Technology, Engineering and Mathematics) education, committed to building equitable opportunities for students. The k2i academy is an award-winning, innovative ecosystem committed to dismantling systemic barriers to opportunity for underrepresented students in STEM. It works alongside multi-sector partners and collaborators to co-create strategic directions for system change work in STEM education. Since June 2020, the k2i academy has reached 80,000+ youth, educators, families, and community members through 340,000+ hours of STEM learning experiences. It has also employed 525 high school students in work-integrated learning programs and empowered 175 undergraduate STEM students to engage in this work as mentors and leaders.
Dr. Kishonna L. Gray is a Professor of Racial Justice and Technology in the School of Information at the University of Michigan. She is the Director of the Intersectional Tech Lab, a Mellon-funded initiative. She is the author of Intersectional Tech: Black Users in Digital Gaming (LSU Press, 2020) and Race, Gender, & Deviance in Xbox Live (Routledge, 2014), and is the co-editor of two volumes on culture and gaming: Feminism in Play (Palgrave-Macmillan, 2018) and Woke Gaming (University of Washington Press, 2018). Dr. Gray’s scholarship is intersectional, grounded in transdisciplinary theories and methods from feminism, digital studies, platform studies, game studies, criminology, sociology, and critical race scholarship. She interrogates the impact that technology has on culture and how minoritized users, in particular, influence the creation of technological products and the dissemination of digital artifacts. Her work is based on analyses of game play, platform design, and digital infrastructures.
Andrew McConnell, a member of Nipissing First Nation and a Toronto resident, has nearly 20 years of experience in education, merging technology and Indigenous teaching. He currently holds the IBET Fellowship at the Lassonde School of Engineering at York University and is seconded faculty to the Faculty of Education at York University, where he teaches in the Waaban program for Indigenous teacher candidates. Prior to his appointment at York University, he was the board lead for Indigenous education at the York Region District School Board for five years and directly supported the work of the Indigenous Education team. While there, he liaised with education staff from Georgina Island First Nation and the Anishinaabek Education System, and constantly advocated for the inclusion of Indigenous content across the system. He is currently working to reimagine STEM education for Indigenous youth, integrating traditional knowledge and experiences to challenge the conventional STEM approach in secondary schools that does not connect to Indigenous realities. His work looks to connect students with community knowledge keepers, preserving local languages and historical knowledge of the land. By connecting these stories with immersive technology, digital mapping, and data visualization, he is looking to preserve knowledge in the community that can be shared with others to promote a better way of living with the world.
Michael Kwet is a Postdoctoral Researcher at the Centre for Social Change at the University of Johannesburg and a Visiting Fellow at Yale Law School. His research focuses on digital colonialism, social media, surveillance, and the environment. Michael has been published in numerous media outlets, including The New York Times, Al Jazeera, and VICE News. In August 2024, he published Digital Degrowth: Technology in the Age of Survival with Pluto Press.