
Experiments with DeckGL – Population and Transport in Sydney

Deck.gl is a WebGL-powered framework for visual exploratory data analysis of large datasets.

Below are some works in progress using Deck.gl to analyse GTFS-Realtime data for Sydney (the feed reporting where every vehicle in the network is at any one time), as well as population and employment projections for Sydney. While the framework itself is reasonably straightforward to use, getting the data out of the feeds and into a form the framework could read was quite a burden (I will try to push these scripts to GitHub if anyone asks). Now to get them online, and with more buttons! 🙂

DeckGL Flows – Sydney GTFS Test – Oliver Lock May 2018 from Oliver Lock on Vimeo.

An example including buildings generated by cutting mesh blocks out of the road network, with heights based on population density. This gives us images that look more like the real city, rather than flows running through empty space.

Zoomed out, with buildings, you can see the incredible organic development patterns of Sydney and how transport supports fringe areas.

Population density explorer – there is so much potential in using this hex bin / pipe method to show information. Here we get very fast renderings of population density for the whole country. With toggles/buttons you could switch between variables (population/employment) as well as between past data and future forecasts.
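Under the hood, the hex-bin method is just spatial aggregation: snap each point to a cell, count the points per cell, and map the count to bar height or colour. A minimal sketch of that idea, using a plain square grid instead of hexagons and made-up coordinates (the points below are illustrative stand-ins, not real data):

```python
from collections import Counter
import math

# hypothetical sample points (longitude, latitude) – stand-ins for dwelling locations
points = [(151.215, -33.874), (151.215, -33.874), (151.226, -33.874), (150.894, -33.752)]

def bin_key(lon, lat, cell_size=0.01):
    # snap each coordinate to an integer grid-cell index
    return (math.floor(lon / cell_size), math.floor(lat / cell_size))

# count how many points fall in each cell – this count drives the bar height
density = Counter(bin_key(lon, lat) for lon, lat in points)
for cell, count in density.most_common():
    print(cell, count)
```

A real hex-bin layer does the same thing with hexagonal cells, which avoids the visual grid bias of squares.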

Perspectives on the Planning Institute of Australia National Congress

This year I had the pleasure of attending the Planning Institute of Australia (PIA)’s National Congress in Perth. Perth really put on a great show, highlighting the innovative thinking and groups of professionals emerging in Western Australia. The key themes for me were public engagement, what planners can do to prepare for an ‘automated’ world, and human-centred design.

Engagement & YP Connect Session
Young Planners worked with ‘new school’ tools (Dr Claire Boulange, Dr Paula Hooper) and ‘old school’ tools (Anthony Duckworth-Smith) to design a suburb. This consisted of playful group interactions with a hand-made, board-game-style city model. Groups then combined these designs with metrics generated from a GIS platform, which calculated yield, health benefits and other indicators. This was an engaging, iterative approach: bringing people together, generating understanding and moving projects forward in an inclusive manner. The City of Perth’s 3D model, and the different tools planners can use to communicate and streamline Development Assessment procedures, was another innovative piece in this space.

Driverless future & riding an autonomous bus
Perth is one of the first cities globally to test automated vehicles in real-life traffic environments, and hosts Australia’s very first such trial. The RAC in particular emphasise that their main concern is people safely interacting with these vehicles, inside and out. Lessons learned from these trials will let us design for their use while the technology is still largely under development. It was great to actually ride in one of these vehicles rather than just hear talk about them!

Human-centred design & Copenhagenize Design Co.
Bringing in the keynote speakers from Copenhagenize Design Co. was a big highlight. While talking to a group of planners about the benefits of cycling is preaching to the converted, it was inspirational to see the change this small group is able to make. Many of the talks at the conference touched on human-centred design and thinking, and these speakers are really an embodiment of that, and of the joy and health it can bring cities.

How to scrape data from a website in 10 lines using Beautiful Soup and Python

Have you ever wanted to scrape data from a webpage whose data isn’t published as open data? With a little coding knowledge, it is actually quite straightforward to retrieve a lot of data using Python and libraries such as Beautiful Soup.

Beautiful Soup is a Python package for parsing HTML and XML documents. It creates a parse tree for parsed pages that can be used to extract data from HTML, which is useful for web scraping. It is available for Python 2.6+ and Python 3.
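To make the ‘parse tree’ idea concrete, here is a minimal, self-contained example run on a hard-coded HTML snippet (no network needed) – the class name matches the real-estate example further down:

```python
from bs4 import BeautifulSoup

# a tiny hard-coded HTML document standing in for a downloaded page
html = """
<div class="listing">
  <span class="property-price">$750,000</span>
  <span class="property-price">$1,200,000</span>
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# the parse tree lets us search by tag name and attributes
prices = [span.text for span in soup.find_all("span", {"class": "property-price"})]
print(prices)  # ['$750,000', '$1,200,000']
```

Once you can do this on a string, doing it on a downloaded page is just a matter of feeding in `r.text` from `requests` instead.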

You can perform research with data scraped over time, or simply keep it for personal use.

Below are two common data sets that everyday people would find useful – Property and Jobs!

Real Estate – Finding sold history of properties

1. Navigate to the webpage – for example http://realestate.com.au/sold

2. Perform a search – for example, here we have searched for all properties sold in Newcastle (you can do this for big areas or for specific streets – it’s up to you).

3. Extract the URL of the search results to see if you can loop over the results:


For example, as above, the value ‘1’ returns the first page; if we replace it with ‘2’ we get the second page.


This means we can perform a simple loop over the data.

4. Right-click the element you want to retrieve and click Inspect. This shows which part of the site’s HTML you need. For example, by right-clicking a price we can see that prices are stored in a <span> tag with the class ‘property-price’.

Run this script:

from bs4 import BeautifulSoup
import requests

# loop over the first 20 pages of results
for num in range(1, 21):
    url = "www.realestate.com.au/sold/in-olivers+hill,+vic+3199%3b/list-" + str(num)
    r = requests.get("http://" + url)
    soup = BeautifulSoup(r.text, "html.parser")
    prices = soup.find_all("span", {"class": "property-price"})
    for line in prices:
        print(line.text)

This will print the price of every property across the first 20 pages of results. You can extract further features relevant to your property search – such as the number of bedrooms, parking spaces and bathrooms – and modify the script to save results to a text file or CSV, or even collect them over time to build a history of property sales by type.
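Saving the scraped values to a CSV, as suggested, only needs the standard csv module. A sketch assuming you have already collected the price strings into a list (the values below are made up):

```python
import csv

# hypothetical scraped values – in practice these come from soup.find_all(...)
prices = ["$750,000", "$1,200,000", "$640,500"]

with open("sold_prices.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["price"])  # header row
    for price in prices:
        writer.writerow([price])
```

The csv module automatically quotes values that contain commas, so prices like “$750,000” round-trip cleanly.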

Sites like ‘Inside Airbnb’ do this kind of scraping exercise at scale (note that, much like Airbnb itself, whether this work is permitted is a legal grey area):



Jobs – Searching job listings on Indeed

We’ve all been there – looking for a new job, or seeing how much your skills are worth in the market.

Here’s an example of scraping Indeed.com for jobs data:

from bs4 import BeautifulSoup
import requests

# Indeed paginates results in steps of 10, so step the start parameter by 10
for num in range(0, 2000, 10):
    url = "au.indeed.com/jobs?q=python+data+analytics&l=australia&start=" + str(num)
    r = requests.get("http://" + url)
    soup = BeautifulSoup(r.text, "html.parser")
    jobs = soup.find_all("a", {"data-tn-element": "jobTitle"})
    for line in jobs:
        print(line.text)

This will return all jobs related to ‘python data analytics’ in Australia. Have a go at changing the search terms, or at retrieving additional information when printing.
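Changing the search is just a matter of editing the query string, and the standard urllib.parse module builds it safely (handling spaces and special characters for you). A sketch using only the parameters visible in the URL above – q, l and start; any other parameter names would be assumptions:

```python
from urllib.parse import urlencode

def indeed_url(query, location, start=0):
    # q, l and start are the query parameters used in the URL above
    params = {"q": query, "l": location, "start": start}
    return "http://au.indeed.com/jobs?" + urlencode(params)

print(indeed_url("gis analyst", "sydney", 10))
# http://au.indeed.com/jobs?q=gis+analyst&l=sydney&start=10
```

This keeps the pagination loop unchanged – you only swap in the generated URL.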

Happy scraping! Always read a website’s terms of service before trying any of the above – and do so at your own risk!