RGS-IBG Annual International Conference 2016


38 GeoComputation – The Next 20 Years (1)
Affiliation: Geographical Information Science Research Group; Quantitative Methods Research Group
Convenor(s): Ed Manley (University College London, UK); Nick Malleson (University of Leeds, UK); Alison Heppenstall (University of Leeds, UK); Andrew Evans (University of Leeds, UK)
Chair(s): Nick Malleson (University of Leeds, UK)
Timetable: Wednesday 31 August 2016, Session 2 (11:10 - 12:50)
Session abstract: The use of fully programmable computers to construct spatial models and run spatial analyses stretches back to the use of ENIAC to calculate ballistic trajectories during the Second World War. As ENIAC was announced to the public in 1946, 2016 marks the 70th year of the public use of computers in geography. Perhaps more happily, it is also 20 years since the term “GeoComputation” was coined, at the 1996 “1st International Conference on GeoComputation” in Leeds, UK, to draw together a disparate set of geographers who had been doing computing in the 70s, 80s, and 90s. In 2017, the community built around this conference will celebrate its 21st birthday, reflecting on its successes and on future directions. As part of this celebration, we invite presentations for this session speculating on the future of computing in geography: potentials, problems, and predictions. What is the future? The Internet of Things? Group cognition modelling? Solar-system-scale geomorphological modelling? Speculative discussions encouraged!
Linked Sessions: GeoComputation – The Next 20 Years (2)
GeoComputation – The Next 20 Years (3): Panel Discussion
For a Cautious Use of Big Data and Computation
Juste Raimbault (Université Paris 7, France)
The so-called big data revolution resides as much in the availability of large datasets of novel and varied types as in ever-increasing computational power. Although the computational shift (Arthur (2015)) is central to a science aware of complexity and is undeniably the basis of future modeling practices in geography, as Banos (2013) points out, we argue that both the data deluge and computational potentialities are dangerous if not framed within a proper theoretical and formal framework. The first may bias research directions towards available datasets (as in, e.g., the numerous Twitter mobility studies), with the risk of disconnecting from a theoretical background, whereas the second may overshadow the preliminary analytical resolutions essential for a consistent use of simulations. We illustrate this idea with an example concerning well-studied geographical objects: the interactions between networks and territories. We compute static correlations between indicators of urban form and indicators of road network topology, using open datasets of European population density (EUROSTAT (2014)) and OpenStreetMap. A mathematical derivation of the link between spatial covariance at fixed time and dynamical covariance for spatio-temporal stochastic processes, combined with a theory of city-transportation interactions within evolving urban systems over long time scales (Bretagnolle (2009)), allows us to infer knowledge about the geographical processes involved from empirical static correlations. In particular, we show the regional nature of dynamic interactions, confirming the non-ergodicity of urban systems (Pumain (2012)). We argue that the two conditions for this result are indeed the ones endangered by incautious big-data enthusiasm, concluding that a main challenge for future Geocomputation is a wise integration of novel practices within the existing body of knowledge.
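To make the kind of static analysis described above concrete, the following Python sketch (illustrative only, not the authors' implementation) correlates a per-cell urban-form indicator with a per-cell network-topology indicator; the arrays are synthetic stand-ins for quantities that would in practice be derived from the EUROSTAT population grid and the OpenStreetMap road graph.

    # Illustrative sketch (not the authors' code): correlate an urban-form
    # indicator with a road-network indicator across grid cells. The indicator
    # arrays below are synthetic placeholders for values extracted from the
    # EUROSTAT density grid and the OpenStreetMap road graph.
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(42)
    n_cells = 500

    # Hypothetical per-cell indicators
    population_density = rng.lognormal(mean=5.0, sigma=1.0, size=n_cells)  # urban form
    mean_betweenness = rng.beta(a=2.0, b=8.0, size=n_cells)                # network topology

    # Static (cross-sectional) correlation between form and network structure
    r, p = pearsonr(np.log(population_density), mean_betweenness)
    print(f"pooled Pearson r = {r:.3f} (p = {p:.3g})")

    # Re-estimating the correlation per region (labels here are synthetic) is one
    # simple way to probe the regional heterogeneity, i.e. the non-ergodicity,
    # that the abstract discusses.
    regions = rng.integers(0, 5, size=n_cells)
    for reg in np.unique(regions):
        mask = regions == reg
        r_reg, _ = pearsonr(np.log(population_density[mask]), mean_betweenness[mask])
        print(f"region {reg}: r = {r_reg:.3f}")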
The Future of Geocomputation: Training, Tools and an Agenda
James Millington (King's College London, UK)
Jon Reades (King's College London, UK)
Narushige Shiode (King's College London, UK)
At a workshop at King's College London in December 2015, scientists and scholars with interests in geocomputation met to discuss developments in this area of geographical inquiry. With a view to the future, here we summarise and synthesise the discussions, which were broadly split into three topics: Training, Tools and an Agenda for the next decade. On training, a key view was the need to ensure curricula are underpinned by (the practice and process of) problem-solving. This perspective emphasises the importance of 'computational thinking' that is not tied to any particular language or framework, and also raises the question of how university training will need to adapt as incoming students' computational backgrounds improve. This issue needs to be negotiated in tandem with changing data and computational tools: current challenges of reproducibility, openness, and consolidation of knowledge will continue into the future. Overcoming these challenges will be important to ensure the outputs of geocomputational research – models, code, and data – can be adequately compared, synthesised and documented for continuity of knowledge development between projects and investigators. There is a sense that the current policies of funders and other academic incentive structures neither encourage nor reward activity that ensures good practice in code and data development and maintenance. Such practice will be needed in future, since big multi-scale and multi-dimensional data will require new ways of understanding supported by geographical theory, including spatially contextual methods to address non-trivial, qualitative questions. Whatever data, methods and tools are produced in future, as we enter increasingly cross-disciplinary and collaborative inquiry, our core value and responsibility as geographers, to deliver sound geographical interpretation, must persist.
Comparative Urban Analytics and Geocomputation: a Framework for Global Urban Modelling?
Duncan Smith (University College London, UK)
Recent advances in global geospatial data (open data, 'big' data, remotely sensed data…), spatial analysis software and interactive cartography are opening up many possibilities for comparative urban modelling at international and potentially global scales. International city comparison across economic, social and environmental indicators is increasingly feasible, creating new perspectives for urban analysis in terms of benchmarking city performance against global peers. Increasingly, city models can be 'plugged in' to data from any city in the world to investigate shared forms and processes, and to pursue a global urban science agenda. This approach enables planning policies and innovations to be quickly shared between cities to tackle common urban problems. There are, however, a number of challenges facing this emerging field. In addition to well-known data quality issues in developing countries, there are limits to the degree to which urban models developed in advanced economies are applicable in radically different cultural and economic contexts. Issues such as informal housing, extreme inequality and segregation can be incorporated within urban modelling, but have typically been marginal. Furthermore, the strong commercial interests in the evolving Smart Cities field are creating challenges in terms of local ownership and representation of urban models. These issues are discussed in relation to urban analytics projects mapping city indicators across the globe, and the current ESRC RESOLUTION project, which compares transport accessibility and segregation models of São Paulo and London and presents results through a comparative interactive mapping platform.
Tangible Geographies
Brendan Harmon (North Carolina State University, USA)
Anna Petrasova (North Carolina State University, USA)
Helena Mitasova (North Carolina State University, USA)
Vaclav Petras (North Carolina State University, USA)
Ross Meentemeyer (North Carolina State University, USA)
Emerging technologies are enabling us to ever more closely couple the physical and the digital, bridging real and virtual geographies. With advances in networking such as smart cities and the internet of things, we can digitally track, sense, and analyze physical things – giving the physical a digital presence. With advances in sensing technologies we can 3D scan forms at scales ranging from millimeters to entire landscapes. With technologies such as 3D printing and robotics we can digitally fabricate data. With technologies such as machine control for construction, earthmoving, and precision agriculture we can digitally fabricate landscapes and the built environment. Tangible interfaces – user interfaces that physically, interactively manifest data so that we can naturally feel it, see it, and shape it – have the potential to tie these emerging technologies together. Imagine being able to hold a GIS in your hands, feeling the shape of the earth, sculpting its topography, and directing the flow of water. Imagine this GIS changing as the real geographies change – a digital terrain evolving as a real terrain erodes and deposits – and not only seeing these changes, but also feeling them. And imagine being able to change this GIS with your bare hands and having the change automatically built in the real landscape. By making geospatial computation tangible – by seamlessly linking physical models, digital models, and real geographies – we will be able to collaboratively understand and interact with space in novel ways, with our bodies.
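As a rough illustration of the coupling loop such a tangible interface implies (a sketch under assumptions, not a description of the authors' system), the Python pseudostructure below cycles between scanning the physical model, updating the digital terrain, running an analysis, and projecting the result back; scan_surface, compute_flow and project are hypothetical placeholders for the scanner, GIS analysis and projector interfaces.

    # Hedged sketch of a scan-analyse-project loop; scan_surface(), compute_flow()
    # and project() are hypothetical stand-ins for scanner, GIS and projector APIs.
    import time
    import numpy as np

    def scan_surface() -> np.ndarray:
        """Placeholder: return the current physical model as a digital elevation grid."""
        return np.random.default_rng().random((200, 200))

    def compute_flow(dem: np.ndarray) -> np.ndarray:
        """Placeholder analysis: a crude wetness proxy (lower cells collect more water)."""
        return (dem.max() - dem) / (dem.max() - dem.min() + 1e-9)

    def project(layer: np.ndarray) -> None:
        """Placeholder: send the analysis layer to a projector aimed at the model."""
        print(f"projecting layer with mean value {layer.mean():.2f}")

    previous = None
    for _ in range(3):                       # a real installation would loop continuously
        dem = scan_surface()                 # physical -> digital
        if previous is None or not np.allclose(dem, previous, atol=0.01):
            project(compute_flow(dem))       # digital analysis -> back onto the physical model
            previous = dem
        time.sleep(0.1)                      # pace the scan-analyse-project cycle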
GeoComputation and the exploration of population change: more data and more information?
Christopher Lloyd (University of Liverpool, UK)
This paper considers the current roles and potential of GeoComputation in exploring long-term social and economic change. Reflecting on recently conducted research assessing small-area population change in the UK and in South Africa, it considers some ways in which value can be added to analyses of 'conventional' data sources (for example, data from the Census of Population) using other data sources such as postcodes, land use and remotely sensed imagery. In addition, the paper considers possibilities for incorporating other diverse sources of information to allow an enhanced understanding of population change and of social and spatial interactions between members of different population groups. Potential strengths and weaknesses of alternative data sources in such contexts are considered, and the use of high-quality (but often temporally restricted) 'official' sources like the census is contrasted with discussion around the use of often less 'formal' data sets, such as those released by retailers or social media data. It is suggested that the development of new geocomputational approaches, combined with a growing ease of access to a diverse array of rich data sources about human populations, will lead to enhanced understandings of social and economic relationships, but that caution is required if we are to avoid entering a period of intensive data generation and computationally intensive analyses of questionable social value.