RGS-IBG Annual International Conference 2019



268 Geographies of/with Artificial intelligence (2): Working
Affiliation Digital Geographies Research Group
Convenor(s) Sam Kinsley (University of Exeter, UK)
Chair(s) Sam Kinsley (University of Exeter, UK)
Timetable Thursday 29 August 2019, Session 4 (16:50 - 18:30)
Room Sherfield/SALC Building, Room 7
Session abstract We are variously being invited to believe that (mostly Global North, Western) societies are on the cusp, or in the early stages, of another industrial revolution led by “Artificial Intelligence”, as many popular books (e.g. Brynjolfsson and McAfee 2014) and reports from governments and management consultancies alike attest (e.g. PWC 2018, UK POST 2016). The goal of this session is to bring together a discussion explicitly focused on the ways in which geographers already study (with) ‘Artificial Intelligence’ and, perhaps, to outline ways in which we might contribute to wider debates concerning ‘AI’. There is widespread, inter-disciplinary analysis of ‘AI’ from a variety of perspectives, from embedded systematic bias (Eubanks 2017; Noble 2018), to the kinds of under-examined rationales and work through which such systems emerge (Adam 1998; Collins 1993), to the sorts of ethical-moral frameworks that we should apply to such technologies (Gunkel 2012; Vallor 2016). In similar, if somewhat divergent, ways, geographers have variously been interested in the ways in which (apparently) autonomous algorithms or sociotechnical systems are integrated into decision-making processes (Amoore 2013); encounters with apparently autonomous ‘bots’ (Cockayne et al. 2017); the integration of AI techniques into spatial analysis (Openshaw & Openshaw 1997); and the processing of ‘big’ data in order to discern things about, or control, people (Leszczynski 2015). These conversations rarely converge in conference proceedings and academic outputs; nevertheless, there are many ways in which geographical research does and can continue to contribute to these contemporary concerns. This session aims to make explicit the ways in which geographers are (already) contributing to research on and with ‘AI’, to identify research questions that are (perhaps) uniquely geographical in relation to AI, and thereby to advance wider inter-disciplinary debates concerning ‘AI’.
Linked Sessions Geographies of/with Artificial intelligence (1): Spacings
Contact the conference organisers to request a change to session or paper details: ac2019@rgs.org
“Hey Alexa, why are you gendered?” Automation in the home and emotional labour
Sam Kinsley (University of Exeter, UK)
The recent and rapid rise of voice-controlled in-home A.I. ‘assistant’ devices that are mostly gendered female in name and ‘voice’ has attracted critical responses. It has variously been argued by academics, journalists and satirical comedians that these devices sediment stereotypes of female subservience and facilitate, if not encourage, derogatory behaviour that reinforces such stereotypes. There is a range of thoughtful interdisciplinary criticism of the gendered norms of A.I. devices and the practices that (re)produce them. This paper contributes to these discussions from a different angle, drawing on long-standing arguments concerning the gendering of emotional labour and body work. The paper thus locates these home A.I. ‘assistants’ in the historical context of home automation and the intensification of inequalities in domestic labour it has brought about. The intense focus of these technologies, and their developers, on intimate domestic space is no accident. This paper argues that stereotypes of gendered, subservient labour are exploited as a conduit for technology developers to surveil intimate spaces, and that in the process these gendered stereotypes are arguably exacerbated and intensified.
Cooked with care or a raw deal?: One geographer’s explorations of AI and machine learning from below in London’s gig-economy
Adam Badger (Royal Holloway, University of London, UK)
Advancing Gitelman’s (2013) assertion that all data is ‘cooked with care’, this paper investigates the role of the worker’s body and actions (and, by extension, those of the geographer undertaking covert research as a gig-worker) in the processes of AI and machine learning in the contemporary gig-economy. As a methodological underpinning, Lefebvre’s rhythmanalysis (2004) is explored both as a means for researchers to enter the field of work and as a way of reflecting on how workers make sense of their own environments. Much of what constitutes success at gig-work of this kind is the worker’s ability to pick up on its inherent eurhythmia and arrhythmia: the sense of when it will be busy. The primary focus is on the flash point between machinic and worker experiences of AI and the gig-economy. For humans, the nature of the work is something that can be felt but not fathomed; for the AI, it is something that can, with varying degrees of success, be fathomed but not felt. The platform under investigation recently released a ‘new algorithm’ that promises to revolutionise riders’ workflow. This paper explores the careful cooking of data that food delivery riders must participate in, something that, as the rates for each job drop, seems to be creating a raw deal.
Workplace Surveillance by AI: it’s for your own good?
Philip Garnett (University of York, UK)
The deployment in organisations of what are, perhaps wrongly, described as Artificially Intelligent surveillance systems for threat detection is growing to the point of becoming part of the everyday. These AI surveillance systems increasingly take the form of assemblages of machine learning algorithms situated at the heart of an organisation’s network, where they occupy a position of privileged access to perhaps all the network traffic generated by, and flowing through, the organisation. They are trained to recognise the everyday traffic of the organisation’s normal devices, processes, and people in order to detect anomalous behaviour, which can then be flagged as a potential risk worthy of enhanced scrutiny. So frequently do we hear that aspects of our lives are subject to measurement and analysis by analytical processes and algorithms that, when asked for an opinion, a frequent response is little more than an indifferent shrug, perhaps followed by a statement akin to “if you have nothing to hide you have nothing to fear” (if you aren’t an anomaly you have nothing to fear). Perhaps, then, it is not surprising that when we discover that our AI colleague might be analysing our everyday working patterns, or even the substance of our digital interactions, in order to detect anomalous behaviours in our own lives, this is met with a similar degree of indifference. It may likewise not surprise you to learn that the ability of a manager to measure the sentiment of a team has been marketed as a route to enhanced productivity and a tool for assessing employee well-being. AI surveillance is great; it’s looking out for us.
LinkNYC and the performative geopolitics of ‘automation’
Nathaniel O'Grady (University of Manchester, UK)
The proliferation of technologies that supposedly operate with a degree of autonomy from human control has for some time proven a locus of attention for work that probes the complex intersections between data-based devices and security practices. This literature has developed important critical accounts of how such technologies bear on a broader array of actions through which governments imagine future disruptive events, surveil populations and organise various forms of intervention. The paper extends these debates further still by presenting and reflecting upon how (at least the notion of) automation figures as an important hope in the ongoing development of a new emergency warning mechanism within New York’s burgeoning ‘free’ wifi infrastructure, LinkNYC. Specifically, it focuses on automation both as a promise continually alluded to within the set of agreements brokered between the assemblage of public and private organisations that coordinate to deploy this infrastructure for emergency warning, and as a term thought appropriate for characterising the processes by which emergency warning is itself brought into effect. Rather than designating a set of computational procedures black-boxed within software, however, I draw on this case to argue that automation needs to be conceptualised further through its performative effects on the geopolitics of security more generally: how it shapes the distribution of authority amongst the agencies overseeing LinkNYC and thus infuses myriad interests into government operations; how it influences the functioning of new forms of infrastructure; and, lastly, how it shapes citizens as subjects of new modes of governance made possible by that new infrastructure.