
Highlights from 2nd International Conference on Urban Informatics, Hong Kong

Hong Kong Polytechnic University (PolyU) Campus (Background: Jockey Club Innovation Tower, Zaha Hadid Architects)

This week I had the pleasure of attending the 2nd International Conference on Urban Informatics, hosted by the Hong Kong Polytechnic University (PolyU). The conference had an impressive roster of keynotes from universities including MIT, UCL and Shenzhen University, alongside presentations of research in conference streams. It was also an ambitious event – further announcing a new international urban informatics society, a book series, a set of new courses at PolyU and a new journal to follow up the week’s activities.

One of the conference keynotes (Carlo Ratti, MIT Senseable City Lab)

The conference stream presenters ranged from Masters students to experienced professors and professionals, with a very large breadth of topics in ‘Urban Informatics’ spanning themes of AI, urban models, emerging technologies, environmental monitoring and remote sensing techniques. In particular, the sessions I attended often focused on using both supervised machine learning and unsupervised clustering methods to understand a variety of urban systems – from mobility to health.

Some impressive LiDAR technology on show – creating extremely detailed live 3D models… very science fiction! (SkyNet anyone?)

The following presentations were my personal highlights.

Take a Look Around: Geographic Knowledge Discovery for the Case Study of London Streetviews by Dr. Stephen LAW, The Alan Turing Institute and University College London, UK

Stephen presented a comprehensive suite of research using Google Streetview data and AI (convolutional neural networks) to quantify urban design elements automatically, at large scale and in fine detail, across cities. In particular, it was really impressive to see aesthetic, experiential notions such as active and passive frontages identified en masse across cities – something that could have great impact as a tool for designers and planners seeking to create more vibrant, happy and healthy environments.

“Quantifying aspects of urban design on a massive scale is crucial to help develop a deeper understanding of urban design elements that contribute to the success of a public space. In this study, we further develop the Street-Frontage-Net (SFN), a convolutional neural network (CNN) that can successfully evaluate the quality of street frontage as either being active (frontage containing windows and doors) or blank (frontage containing walls, fences and garages).”

More articles by this author:

Law, S., Seresinhe, C. I., Shen, Y., & Gutierrez-Roig, M. (2018). Street-Frontage-Net: urban image classification using deep convolutional neural networks. International Journal of Geographical Information Science, 1-27.

Law, S., Shen, Y., & Seresinhe, C. (2017, November). An application of convolutional neural network in street image classification: the case study of London. In Proceedings of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery (pp. 5-9). ACM.
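For readers curious about the general technique (not the authors’ actual SFN code, which isn’t reproduced here), a minimal sketch of a binary street-frontage classifier in PyTorch might look like the following. The network layout, class names and image size are all illustrative assumptions of mine:

import torch
import torch.nn as nn

class FrontageNet(nn.Module):  # hypothetical stand-in, not the published SFN
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)  # two classes: active vs blank frontage

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

model = FrontageNet()
image = torch.randn(1, 3, 224, 224)  # a dummy street-view image tensor
print(model(image).softmax(dim=1))   # [p(active), p(blank)]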

A 4D Spatio-Temporal Approach to Modelling Land Value Uplift from Rapid Transit in High Density and Topographically-Rich Cities by Dr. Christopher Donald HIGGINS, The Hong Kong Polytechnic University, HK

Christopher proposed new approaches to accessibility analysis and to predicting land-value uplift that are more sensitive to volumetric, topographically complex cities such as Hong Kong. As well as showing relationships between station access and property values, this work provides an innovative toolbox for understanding travel time in settings where conventional methods fail to consider the slope and building height involved in first- and last-mile travel to stations.

Read more:
Higgins, C. D. (2019). A 4D spatio-temporal approach to modelling land value uplift from rapid transit in high density and topographically-rich cities. Landscape and Urban Planning, 185, 68-82.

“The present research utilizes spatio-temporal methods in a quasi-experimental research design to examine changes in property values associated with pedestrian accessibility to the West Island Line heavy rail extension in Hong Kong. Several innovations in methods and techniques are proposed that respond to the econometric challenges involved in conducting research in high density, topographically-rich cities. Of these, the paper incorporates landscape topography throughout its estimation process, including the calculation of slope-aware measures of walkable accessibility on a 3D pedestrian network and proposes a new Spherical Distance Weights method for capturing horizontal and vertical spatial association among observations in 3D space. Finally, these weights are combined with measures of temporal distance for a 4D approach that accounts for relations among observations in space and time.”

See the author’s full article here.

See also: Higgins, C. D., & Kanaroglou, P. S. (2016). A latent class method for classifying and evaluating the performance of station area transit-oriented development in the Toronto region. Journal of Transport Geography, 52, 61-72. DOI: 10.1016/j.jtrangeo.2016.02.012
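The slope-aware walking measures are a nice idea to experiment with. As a rough illustration (my own sketch, not the paper’s method), a common way to make walking time sensitive to slope is Tobler’s hiking function, which slows walkers down on steep grades:

import math

def tobler_speed_kmh(slope):
    # Tobler's hiking function: walking speed in km/h, where slope = rise / run
    return 6.0 * math.exp(-3.5 * abs(slope + 0.05))

def walk_minutes(length_m, rise_m):
    # Slope-aware traversal time for one network edge
    speed_ms = tobler_speed_kmh(rise_m / length_m) * 1000 / 3600
    return length_m / speed_ms / 60

# A 200 m segment: climbing 20 m takes noticeably longer than walking it flat
print(walk_minutes(200, 20))   # uphill
print(walk_minutes(200, 0))    # flat
print(walk_minutes(200, -20))  # downhill

Summing such edge times over a 3D pedestrian network, instead of assuming a flat constant speed, changes which stations count as ‘walkable’ in a hilly city like Hong Kong.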

A Systematic Tracing of Temporal and Spatial Boundaries of Places with High Concentration of Human Activity to Identify Popular Non-Work Destinations by Prof. Roberto Ponce LOPEZ, Tecnologico de Monterrey, Mexico

Roberto presented an impressive suite of analysis, forming part of his doctoral work, on ways to mine and classify large social datasets to more meaningfully understand what people do in cities outside of work (which still remains a primary function of many urban models!). Singapore formed an interesting case study.



“Research uses data from Google Place API to characterize the built environment of Singapore, building a typology of place that differentiates between shopping malls, strips of retail, food markets and large commercial areas. The method of analysis relies on spatial clustering and k-medoids clustering.”

See Roberto’s personal blog for more detail on his work.
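For anyone unfamiliar with k-medoids: unlike k-means, it picks actual observations as cluster centres, which makes it work with any precomputed distance matrix. A small self-contained sketch (my own illustration, not Roberto’s pipeline):

import numpy as np

def k_medoids(D, k, n_iter=100, seed=0):
    # Basic alternating k-medoids on a precomputed n x n distance matrix D
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.where(labels == j)[0]
            if len(members):
                # the member minimising total distance to its cluster becomes the medoid
                costs = D[np.ix_(members, members)].sum(axis=1)
                new_medoids[j] = members[np.argmin(costs)]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

# Example: cluster 50 random venue locations (lon, lat) into 3 groups
pts = np.random.default_rng(1).random((50, 2))
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
medoids, labels = k_medoids(D, k=3)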

The City

Hong Kong itself was a highlight of the conference. For any urbanist, Hong Kong is one of the most distinctive vertical cities in the world. Its form and structure become even more interesting the more one walks around and attempts to comprehend how it has all been stacked together in such a small space. The unique mix of cultures living in the city over time makes for a fascinating exploration of art, food, museums and architecture, complementing the many physical levels the city has on offer.

Yick Cheong Building – five incredibly dense residential buildings stacked closely to one another

Duk Ling – traditional Chinese Junk boat ride on the Harbour

View from Victoria Peak, highest hill on HK Island

Murakami vs Murakami Exhibit, Tai Kwun Contemporary

Didier Jaba Mathieu’s work at the Tung Wah Centenary Garden, Queen’s Road East in Wan Chai

Vegetarian restaurant, Nan Lian Garden

Pavilion of Absolute Perfection, Nan Lian Garden

Thanks for having us Hong Kong PolyU – looking forward to seeing where you can take Urban Informatics in future… 🙂

ICUI 2019 Conference Banner

ICUI 2019 Conference Pack


Creating spatial data analytics dashboards in Cartoframes

With the strength of Carto in terms of spatial science and location intelligence, and the easy access to data science packages in Python, Carto’s new project ‘Cartoframes’ has a lot of potential to provide excellent mapping dashboards for data-hungry workflows.

Below is a quick tutorial I have made which will hopefully help new users figure out how to use it. It is in no way comprehensive, and there are probably some pieces missing, but it should be enough to get started! The tutorial covers some of the elements of creating a ‘live’ weather data dashboard for New South Wales in Australia.

What is Cartoframes? (from https://github.com/CartoDB/cartoframes)
A Python package for integrating CARTO maps, analysis, and data services into data science workflows.

Python data analysis workflows often rely on the de facto standards pandas and Jupyter notebooks. Integrating CARTO into this workflow saves data scientists time and energy by not having to export datasets as files or retain multiple copies of the data. Instead, CARTOframes give the ability to communicate reproducible analysis while providing the ability to gain from CARTO’s services like hosted, dynamic or static maps and Data Observatory augmentation.

Features

Write pandas DataFrames to CARTO tables
Read CARTO tables and queries into pandas DataFrames
Create customizable, interactive CARTO maps in a Jupyter notebook
Interact with CARTO’s Data Observatory
Use CARTO’s spatially-enabled database for analysis

Step 1 – Install libraries
Install all of the relevant libraries. I’m using Canopy, which provides Python 2.7 and 3.5, with easy installation and updates via a graphical package manager for over 450 pre-built and tested scientific and analytic Python packages from the Enthought Python Distribution. These include NumPy, Pandas, SciPy, matplotlib, scikit-learn and Jupyter / IPython. You can get Canopy for free here.

Once installed, open the console and install the packages:
pip install cartoframes
pip install pandas

Step 2 – Import libraries

In a new Jupyter notebook, start by importing the libraries in the first block. These are the ones you’ll generally need (though you can go to town with other numerical / statistical packages here!):

import cartoframes
import pandas as pd
import numpy as np

Step 3 – Set up a Carto account and register for an API key

Start by going to Carto.com and signing up through the prompts.

Once you have signed up, in the top-right of your home page there should be a settings toggle which shows you:

View your public profile
Your account
Your API keys
Close session

Click on ‘Your API keys’ and copy what shows up on the next page. It should be a long string of text, looking something like this:

31b453f27c085747acc6a51a9e5717beae254ced

Step 4 – Connecting to your Carto account in Python
Try the following line of code in your next Jupyter code block, where xxxxxxxxxxxx is your new API key and ‘oclock’ is replaced by your own username. This key allows Cartoframes to communicate directly with the data in your Carto account.

cc = cartoframes.CartoContext(base_url='https://oclock.carto.com',api_key='xxxxxxxxxxxx')

When you run this code and call ‘cc’ it should provide you with a message such as this:
<cartoframes.context.CartoContext at 0x1ea3fa2c518>

This means that cartoframes has successfully accessed your Carto account, and you can use ‘cc’ to refer to this connection from now on. Make sure you keep your API key safe!
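As an aside, one way to keep the key out of your notebooks is to read it from an environment variable. This is a small sketch of my own; the CARTO_API_KEY variable name is just a convention I’ve chosen:

import os
import cartoframes

# Set the key in your shell first, e.g. export CARTO_API_KEY=31b453f2...
api_key = os.environ['CARTO_API_KEY']
cc = cartoframes.CartoContext(base_url='https://oclock.carto.com', api_key=api_key)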

Step 5 – Upload some data to Carto
For this task, I downloaded the shapefile components of weather locations from the Australian Bureau of Meteorology. This is all of the spatial files (.shp, .shx, .dbf etc.) for IDM00013 from:
ftp://ftp.bom.gov.au/anon/home/adfd/spatial/

These are all the files prefixed by IDM00013 and suffixed by .dbf, .prj, .sbn, .sbx, .shp, .shx and .shp.xml. Carto will need these all in a single .zip file before you upload them.
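If you’d rather not zip them by hand, a couple of lines of Python will bundle them (assuming the downloaded files sit in your working directory and keep the IDM00013 prefix):

import glob
import zipfile

# Bundle all the IDM00013 component files into one archive for Carto
with zipfile.ZipFile('idm00013.zip', 'w') as zf:
    for path in glob.glob('IDM00013.*'):
        zf.write(path)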

The metadata for this dataset can be found here:

IDM00013 – point places (precis, fire, marine)
http://reg.bom.gov.au/catalogue/spatialdata.pdf

Once you have downloaded these, you can upload the zipped shapefile to your Carto account. It should give you a series of geolocated dots covering all of Australia, with many attributes as described in the metadata above. For this I called the dataset ‘idm00013’.

Step 6 – Read the data in Jupyter
Let’s test if everything is working. The following should display a dataframe of all of the aspatial information stored in each weather location:

carto_df = cc.read('idm00013')
carto_df

The following should give you a list of all of the variables available to you to access and change:
list(carto_df.columns.values)

Step 7 – Making a map
Now for the exciting bit – creating a Carto map inside the Jupyter notebook.
Here I’ve picked the elevation column with a brown colour scheme. Try:


from cartoframes import Layer, BaseMap, styling
cc.map(layers=[BaseMap('light'),Layer('idm00013',color={'column': 'elevation','scheme': styling.brwnYl(7)},size=5)],
interactive=True)

The following map should display, with light brown showing weather points at low elevation and darker brown showing those at higher elevation.

Extension – Accessing and parsing a live data feed

The code below retrieves the latest weather forecasts for the weekend ahead from the Bureau of Meteorology’s API. It is stored in a dataframe ‘df’.

import xml.etree.ElementTree as ET
import urllib.request
import pandas as pd

# Pull the latest NSW forecast XML from the BOM FTP server
req = urllib.request.Request('ftp://ftp.bom.gov.au/anon/gen/fwo/IDN11060.xml')
with urllib.request.urlopen(req) as response:
    xml_data = response.read()

# Walk the tree: forecast -> area -> forecast-period -> element
list_dict = []
root = ET.XML(xml_data)
for block in root.findall('forecast'):
    for area in block:
        for forecast in area:
            min_temp = ''
            max_temp = ''
            aac_id = area.get('aac')
            forecast_date = forecast.get('start-time-local')
            for element in forecast:
                if element.get('type') == 'air_temperature_minimum':
                    min_temp = element.text
                elif element.get('type') == 'air_temperature_maximum':
                    max_temp = element.text
            list_dict.append({'aac': aac_id, 'forecast_date': forecast_date,
                              'low_temp': min_temp, 'max_temp': max_temp})

df = pd.DataFrame(list_dict)
df

Extension Part 1 – Joining in a live data source

We now want to join the geographical data from the first exercise with this live data feed.
This is done with a ‘left’ join, so we keep all of the weather forecast records and add the geographic data to them.

merged_data = pd.merge(df, carto_df, on='aac', how='left')
merged_data

Extension Part 2 – Selecting some data

Now we filter all records to get one particular day’s forecast (you will need to change the date here to the current date – a way to compute it automatically is sketched after the code below).
The filtered data is then written to a new dataset in Carto called ‘merged_weathermap’.

one_forecast = merged_data[merged_data['forecast_date']=='2018-01-16T00:00:00+11:00']
cc.write(one_forecast, 'merged_weathermap',overwrite=True)
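Rather than hard-coding the date, you can compute today’s local-midnight timestamp in the same format the feed uses (assuming, as in the example above, that the forecast_date strings are Sydney local times):

import pandas as pd

# Today at midnight, Sydney time, in the feed's ISO format, e.g. '2019-07-01T00:00:00+10:00'
today = pd.Timestamp.now(tz='Australia/Sydney').normalize().isoformat()
one_forecast = merged_data[merged_data['forecast_date'] == today]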

Extension Part 3 – Putting it all together

Now let’s add the data from the weather feed API to a Cartoframes map. The following reads in the merged_weathermap dataset we just created and colours each weather point in New South Wales by its forecast maximum temperature, with pink showing high temperatures and blue showing lower temperatures.

from cartoframes import Layer, BaseMap, styling
cc.map(layers=[BaseMap('light'),Layer('merged_weathermap',color={'column': 'max_temp','scheme': styling.tropic(10)},size=10)],
interactive=True)

That’s it! From here, with a bit of extra work – for example, some scripts that continuously poll the APIs – we are only a few steps away from creating live dashboards that integrate other statistical and mathematical packages, including machine learning.
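A minimal sketch of such a polling loop (my own illustration; fetch_forecasts is a hypothetical helper wrapping the XML-parsing code from the extension above, and carto_df and cc come from the earlier steps):

import time
import pandas as pd

def refresh_dashboard():
    df = fetch_forecasts()  # hypothetical wrapper around the XML-parsing code above
    merged = pd.merge(df, carto_df, on='aac', how='left')
    cc.write(merged, 'merged_weathermap', overwrite=True)

while True:
    refresh_dashboard()
    time.sleep(60 * 60)  # re-pull the feed and refresh the Carto table every hour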

Looking forward to seeing developments in this space and if you have any feedback or ideas let me know!

For more information on Cartoframes have a look at their documentation.


Measuring accessibility – on the 30 minute city


One of the recent projects I’ve been involved in at Arup has been developing spatial, analytical tools to understand transport accessibility. In particular, this concerns destination-based accessibility – rather than assessing how well a city delivers transport at particular points (which could go anywhere), we looked at how well it delivers people to all other places in the city. We focused on places that are important to creating liveable environments, such as education, parks, healthcare and jobs.

For me, this topic built well on research I had done in 2015 (see ‘Where to From Here? A Modelling Methodology for Measuring Land-Use and Public Transport Accessibility in Melbourne’), which assessed destination-based accessibility within transport modelling software, restricted to travel zones. This time there were some major improvements to the method, mostly from moving out of a software shell to raw code, and from using much more disaggregate units of analysis.

We assessed Greater Sydney at a 300m x 300m grid level, producing over a million travel time isochrones for driving (including traffic), public transport and walking to assign accessibility values for liveability variables to approximately 120,000 small cells in the city. In a nutshell, our toolkit involved a bit of OpenTripPlanner, Python, Amazon Web Services and FME – all using Open Data sources. This means the method is highly reproducible for other cities, and applicable to the same city with a different network (which could be used to evaluate transport network changes, or alternate land use scenarios). A web map has been produced to showcase some of the work done in this space so far, exploring what the ’30 minute city’ means for Sydney:

30minutecity.arup.digital
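At the heart of the ’30 minute city’ idea is a cumulative-opportunities measure: from each grid cell, how many opportunities (jobs, school places, park hectares) are reachable within the travel time budget. A toy sketch with made-up arrays (not our production pipeline):

import numpy as np

# travel_time[i, j]: minutes from origin cell i to destination zone j (hypothetical data)
rng = np.random.default_rng(0)
travel_time = rng.uniform(5, 90, size=(1000, 300))
jobs = rng.integers(0, 5000, size=300)  # jobs in each destination zone

# Cumulative opportunities: jobs reachable within 30 minutes of each cell
reachable = (travel_time <= 30).astype(int)
jobs_within_30 = reachable @ jobs
print(jobs_within_30[:5])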


It is certainly exciting to see the potential of this thinking and method being applied to both Sydney and other cities. Accessibility, and its impact on individual opportunities, is often overlooked and undervalued in many forms of transport analysis. With the increasing richness of data becoming available from governments and other Open Data sources, combined with open analytical and visual methods like these, it is clear that such analyses can produce insight towards tackling some of the growing issues in Australian cities, such as housing affordability, transport disadvantage and sustainability.

