RGS-IBG Annual International Conference 2017

387 Workshop: Spatial Urban Analytics and Crowdsourced Geographic Information for Smarter Cities (2)
Affiliation Geographical Information Science Research Group
Convenor(s) João Porto de Albuquerque (University of Warwick, UK)
René Westerholt (Heidelberg University, Germany)
Chair(s) René Westerholt (Heidelberg University, Germany)
Timetable Friday 01 September 2017, Session 4 (16:50 - 18:30)
Room Sherfield/SALC Building, Room 9
Session abstract The recent emergence and availability of ever more data reflecting everyday human behavior within urban areas opens up opportunities for geographers and strengthens the geospatial viewpoint in the interdisciplinary field of urban science. This session focuses on methods and applications based on crowdsourcing, social media data, collaborative maps (e.g. OpenStreetMap) and mobile crowd sensing/citizen science approaches, with a particular emphasis on the contribution of explicitly geospatial concepts and methods to the interdisciplinary field of urban analytics. The talks span a broad range of topics dealing with conceptual innovations, the extension of existing spatial data analysis techniques and the development of new methods that explicitly consider spatial issues (in contrast with more general, non-geographic computational methods). Aside from concepts and methods, some talks consider scenarios related to smart cities, human mobility, urban planning and other applications.
Linked Sessions Workshop: Spatial Urban Analytics and Crowdsourced Geographic Information for Smarter Cities (1)
The paths to knowledge
Danny Edwards (Edwards Stadsontwerp, Amsterdam)
Richard Van de Werken (Hastig, Woerden)
Balázs Dukai (Technical University Delft, The Netherlands)
Big data is the world’s current buzzword. Immense quantities of data are produced non-stop by companies, by institutions and by people themselves, who use smartphones to self-track their movements, locations and interactions. Tracking apps such as Human, Strava and Runkeeper show beautiful visualisations of user-generated mobility data on their websites. However, using this data meaningfully in city planning still faces significant hurdles. Firstly, although most of the data is community-produced, ownership lies with companies, which have not yet found a modus operandi for dealing fluidly with data requests. The second issue lies in reliability and relevance: user-generated datasets are often biased and have to be reduced substantially before they can yield meaningful insights. And last but not least, decision makers do not see the pitfalls and equate data with knowledge.

Using the example of the Dutch Drechtsteden region, we show how big data can be used effectively when combined with a state-of-the-art modelling approach. We analyse a Strava dataset of over 200,000 trips with an origin and/or destination within the Drechtsteden. Strava was originally an app for self-tracking recreational running and cycling activity, but users have long been using it to record their personal mobility behaviour in general. We show that Strava data can be used not only to analyse recreational use, but also to find detailed commuting patterns; cross-comparisons with other cyclist-generated datasets are helpful in doing this. We also show how a modelling approach such as the space syntax method of analysing urban networks is crucial for understanding the implications of mobility data and for judging a concrete course of action for city government. Visualising datasets in beautiful 3D cityscapes is a crucial step towards achieving this. Only the combination of the data approach and the modelling approach yields deep insights that are usable in reshaping the complex emergent systems that are our contemporary cities.
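
To make the distinction between recreational and commuting trips concrete, the following is a minimal, hypothetical sketch in Python (pandas is assumed, and all file and column names such as start_time, distance_km and commute_like are illustrative placeholders rather than the actual Strava export schema) of how recorded trips might be heuristically split into commute-like and recreational rides:

    import pandas as pd

    # Hypothetical trip table: one row per recorded Strava activity.
    # File name and column names are illustrative only.
    trips = pd.read_csv("drechtsteden_strava_trips.csv",
                        parse_dates=["start_time"])

    # Heuristic: commute-like trips start on weekdays during peak hours
    # and are comparatively short, point-to-point rides.
    weekday = trips["start_time"].dt.dayofweek < 5
    peak = trips["start_time"].dt.hour.isin([7, 8, 9, 16, 17, 18])
    short = trips["distance_km"] < 15

    trips["commute_like"] = weekday & peak & short
    print(trips["commute_like"].value_counts())
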
The use of Nature-inspired paradigms to strengthen Urban Resilience: Systematic Literature Review and Future Trends
Francisco Rivas (University of Granada, Spain)
Nowadays, around half of the world’s population – roughly 3.5 billion people – lives in cities, and it is projected that by 2030 six out of ten people will live in urban environments. This is why, in recent years, a multidisciplinary research field has flourished around the complex challenge of sustainable urban development.

One of the most popular dimensions of urban sustainability is resilience, that is, in general, the capacity of an urban settlement to deal with, and adapt to, unexpected events, whether natural or man-made hazards. That capacity is closely related to the ability to accurately model a city as a dynamic, uncertain, complex and adaptive system of ‘data flows’.

Some valuable sources of urban data are well known: geo-sensor data and crowdsourced geospatial data. The amount of data delivered by this heterogeneous collection of sensors is huge, and traditional methods for extracting meaningful knowledge from it in order to make optimal, near real-time decisions are no longer suitable; more computationally sophisticated techniques are therefore required.

In this paper, several innovative computational breakthroughs, mainly inspired by Nature and some of them already used in resilient urban management, are reviewed in order to suggest potential improvements and further developments. Several of these paradigms fall within the areas of big data mining, advanced machine learning (e.g. deep learning), metaheuristics and fuzzy logic, among others. Moreover, the existence of techniques not yet applied to urban resilience will also be shown; the review is therefore extended to other fields (e.g. health, image processing, environment, robotics) where these methods have been successfully implemented, in order to explore future trends.
The final section of the paper is focused on open research questions.
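
Purely as an illustration of the nature-inspired paradigms surveyed here, the following is a minimal particle swarm optimisation sketch in Python (the objective function and all parameters are hypothetical placeholders and are not taken from the paper), of the kind that could, for example, tune sensor-placement or resource-allocation decisions in a resilience model:

    import numpy as np

    rng = np.random.default_rng(42)

    def objective(x):
        # Placeholder cost surface standing in for a resilience objective.
        return np.sum((x - 3.0) ** 2, axis=-1)

    # Particle swarm: positions, velocities, personal and global bests.
    n_particles, n_dims = 30, 2
    pos = rng.uniform(-10, 10, (n_particles, n_dims))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = objective(pbest)
    gbest = pbest[np.argmin(pbest_val)].copy()

    w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
    for _ in range(100):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        val = objective(pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()

    print("best solution:", gbest, "cost:", objective(gbest))
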
Characterization of urban blocks and sidewalks based on Volunteered Geographic Information and image-based social media
Tessio Novack (Heidelberg University, Germany)
Many urban planning tasks and studies are undertaken based on detailed geometric information about individual urban objects and their changes. However, several such tasks and studies rely not directly on the geometry of single objects, but rather on the semantic interpretation and quantitative characterization of larger aggregated areas, which usually correspond to the city's administrative subdivisions. In particular, for traffic dynamics analysis, pedestrian routing and urban stress studies, as well as for land-use mapping and the design of zoning laws, the city’s blocks are particularly relevant analysis units.

Traditionally, urban block characterization has been conducted chiefly with authoritative cadastral data and remote sensing imagery. Presently, at least for some parts of the world, Volunteered Geographic Information (VGI) projects and image-based social media provide a solid data basis for the studies mentioned above and others. VGI projects provide detailed and mostly accurate information, such as building footprints and their semantics, points of interest, urban equipment and even public green areas, as well as the street network. Image-based social media may be used to confirm, update and complete VGI data based on the position and content of the photos.
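
As an illustration of how such VGI attributes can be retrieved, here is a minimal sketch in Python that queries the public Overpass API for building footprints within a bounding box (the coordinates are arbitrary placeholders; in practice one would query per block polygon and also collect points of interest and green areas):

    import requests

    # Arbitrary placeholder bounding box (south, west, north, east).
    bbox = "49.40,8.67,49.42,8.70"

    # Overpass QL: all building ways inside the bounding box, with geometry.
    query = f"""
    [out:json][timeout:60];
    way["building"]({bbox});
    out geom;
    """

    response = requests.post("https://overpass-api.de/api/interpreter",
                             data={"data": query})
    response.raise_for_status()
    buildings = response.json()["elements"]

    # Each element carries its tags (e.g. building type) and node geometry,
    # which can then be aggregated per block for characterization.
    print(f"{len(buildings)} building footprints returned")
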

We intend to present how VGI and image-based social media are used as data sources for mapping urban land use at the block level and for defining pleasant paths for pedestrians. More specifically, we will show which attributes of blocks and sidewalks can be extracted from these two data sources, how they are extracted, and how they are used (1) in a powerful per-block land-use classification approach and (2) in different strategies for assigning additional weights (besides distance) to the graph’s edges in order to define pleasant (i.e. healthier) pedestrian routes.
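
To make the edge-weighting idea concrete, here is a minimal, hypothetical sketch in Python using networkx (the graph, the distances and the "pleasantness" penalties are invented for illustration and are not the authors' actual weighting strategy): each edge cost combines walking distance with a penalty derived from block and sidewalk attributes, and routes are found with a standard shortest-path query.

    import networkx as nx

    G = nx.Graph()

    # Hypothetical sidewalk segments: (from, to, length in metres,
    # pleasantness penalty in [0, 1] derived from block/sidewalk attributes).
    segments = [
        ("A", "B", 120, 0.1),   # green, quiet street
        ("B", "C", 100, 0.1),
        ("A", "D", 150, 0.8),   # busy arterial road
        ("D", "C",  60, 0.8),
    ]
    for u, v, length, penalty in segments:
        # Combined cost: distance inflated by the unpleasantness penalty.
        G.add_edge(u, v, length=length, cost=length * (1 + penalty))

    shortest = nx.shortest_path(G, "A", "C", weight="length")
    pleasant = nx.shortest_path(G, "A", "C", weight="cost")
    print("shortest route:", shortest)
    print("pleasant route:", pleasant)

In this toy example the shortest route by distance runs along the busy road, while the penalised cost diverts the pedestrian onto the quieter, slightly longer path.
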
Kriging algorithm Optimisation for impactful integration into industry utilising a new data source to introduce ‘road distance’ and ‘travel time’ matrices
Henry Crosby (University of Warwick, UK)
The accessibility of spatio-temporal methodologies has increased in a number of domains in recent years, primarily due to improved functionality, the easy-to-use interfaces offered by GIS software, and the increased availability of openly available data sources. One result of such improved accessibility is the requirement to ensure that the functions in this software are assumption-optimal, with the intent of minimising uncertainty for industrial and academic users whose aim is to use, but not extensively challenge, these functions.

A key example of such a spatio-temporal methodology is the kernel-based ‘Kriging’ algorithm, which comes in many forms (Simple, Universal, Bayesian, ...), all of which are distance and kernel dependent. Until now, optimal kernel selection has been the sole focus of kriging optimisation, specifically with regard to promoting domain-specific applications [1]; however, very little work has been put forward on optimising the algorithm’s distance metric. Notably, the primary metric used is the ‘Euclidean’ metric.
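
For context, the following is a minimal sketch in Python of fitting ordinary kriging with several candidate kernels (the pykrige library is assumed and the data are synthetic placeholders; this is not the authors' pipeline). It illustrates the kernel-selection step that the abstract contrasts with distance-metric selection:

    import numpy as np
    from pykrige.ok import OrdinaryKriging

    rng = np.random.default_rng(0)

    # Synthetic placeholder data: coordinates and a value to interpolate
    # (standing in for property prices at known locations).
    x = rng.uniform(0, 10, 50)
    y = rng.uniform(0, 10, 50)
    z = np.sin(x) + 0.1 * rng.standard_normal(50)

    # Compare several variogram models (kernels) at a few target points.
    gx = np.array([2.5, 5.0, 7.5])
    gy = np.array([2.5, 5.0, 7.5])
    for model in ["linear", "spherical", "exponential", "gaussian"]:
        ok = OrdinaryKriging(x, y, z, variogram_model=model)
        pred, var = ok.execute("points", gx, gy)
        print(model, np.round(pred, 3), np.round(var, 3))
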

This paper introduces a set of hitherto unused and, until recently, not openly available distance metrics (‘road distance’ and ‘travel time’) into a combination of kriging algorithms with a multitude of the most popular kernels. The paper will then compare the results of a residential property price predictor for Coventry (UK) for each metric, model and kernel, using 10-fold stratified cross-validation. The ‘road distance’ and ‘travel time’ matrices will be compiled with the Open Source Routing Machine (OSRM), which is built on OpenStreetMap (OSM) data.
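
As an illustration of how such matrices can be compiled, here is a minimal sketch in Python querying OSRM's table service for pairwise travel times and road distances between a few placeholder coordinates (the public demo server URL, the driving profile and the coordinates are illustrative; a production setup would typically run a local OSRM instance with an appropriate routing profile):

    import requests

    # Placeholder (lon, lat) pairs, e.g. property locations in Coventry.
    points = [(-1.5121, 52.4068), (-1.4950, 52.4100), (-1.5300, 52.3990)]
    coords = ";".join(f"{lon},{lat}" for lon, lat in points)

    # OSRM table service: pairwise durations (seconds) and distances (metres).
    url = f"https://router.project-osrm.org/table/v1/driving/{coords}"
    resp = requests.get(url, params={"annotations": "duration,distance"})
    resp.raise_for_status()
    table = resp.json()

    travel_time_matrix = table["durations"]
    road_distance_matrix = table["distances"]
    print(travel_time_matrix)
    print(road_distance_matrix)
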

The application to residential property prices (sourced openly from the UK’s Land Registry) will demonstrate just one of many impactful applications to urban problems in which road distance or travel time is much more relevant than, say, Euclidean or Manhattan distance. In fact, it is intuitive that house prices along a single stretch of road are more likely to be related over time than those of two properties whose physical locations are closer together (for example, two properties that lie on either side of a motorway, or two properties with back gardens facing each other). It is important to note that academics have previously avoided this problem because of (1) the potential for no significant improvement, (2) data availability, and (3) the non-symmetrical nature of road distance and travel time matrices, and hence the challenge of specifying a distance metric that still yields a valid covariance function; the last of these is the most likely reason.
Gotway and Young (2008) put forward two potential solutions to this final issue: isometric embedding and kernel convolution. The first of the two has been used once, by Zhou et al. (2010), with a road distance (but not travel time) metric in Nanchang, China. The latter has had no uptake with our proposed distance metrics. We will attempt both and compare their success. The final trained kriging model with a road distance or travel time metric is then integrated into a real estate decision engine for commercial use and workshop demonstration.
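
To illustrate the isometric embedding route, here is a minimal sketch in Python (scikit-learn is assumed, and the distance matrix is a tiny symmetrised placeholder rather than a real road distance matrix): metric multidimensional scaling finds Euclidean coordinates whose pairwise distances approximate the road distances, after which standard kriging covariance functions can be applied to the embedded coordinates.

    import numpy as np
    from sklearn.manifold import MDS

    # Tiny placeholder road-distance matrix (symmetrised, in metres).
    D = np.array([
        [0.0, 900.0, 1500.0],
        [900.0, 0.0, 700.0],
        [1500.0, 700.0, 0.0],
    ])

    # Metric MDS: an approximate isometric embedding of the locations into
    # 2-D Euclidean space, preserving validity of standard covariances.
    embedding = MDS(n_components=2, dissimilarity="precomputed",
                    random_state=0).fit_transform(D)
    print(embedding)
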