
3D environments for agent-based models


As part of my coursework at CASA we are being introduced to some of the latest 3D visualisation technologies and experimenting with how they might be used in a cities research context. Below are some examples of what can be achieved in a short period of time with these software packages.

The moving parts of these visualisations can be defined as agents. Agents with programmed behaviours and decision trees can, in part, re-create and predict the appearance of complex real-world phenomena. This process can be described as emergence – complex systems arising from relatively simple interactions.
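To make the idea concrete, here is a minimal toy sketch of emergence (my own illustration in Processing, not taken from the coursework packages below): each agent follows just two local rules – drift toward nearby agents and step away from any that get too close – and clusters appear that no single agent was explicitly programmed to form.

// Toy emergence sketch (hypothetical example): two simple local rules per agent,
// yet clustering emerges at the global level.
int N = 150;
Agent[] agents = new Agent[N];

void setup() {
  size(600, 600);
  for (int i = 0; i < N; i++) {
    agents[i] = new Agent(random(width), random(height));
  }
}

void draw() {
  background(255);
  for (Agent a : agents) {
    a.update(agents);
    a.display();
  }
}

class Agent {
  PVector pos, vel;

  Agent(float x, float y) {
    pos = new PVector(x, y);
    vel = PVector.random2D();
  }

  void update(Agent[] others) {
    PVector cohesion = new PVector();
    PVector separation = new PVector();
    int neighbours = 0;
    for (Agent o : others) {
      if (o == this) continue;
      float d = PVector.dist(pos, o.pos);
      if (d < 60) {            // rule 1: drift toward the average position of neighbours
        cohesion.add(o.pos);
        neighbours++;
      }
      if (d > 0 && d < 15) {   // rule 2: push away from agents that are too close
        PVector away = PVector.sub(pos, o.pos);
        separation.add(away.div(d));  // stronger push when closer
      }
    }
    if (neighbours > 0) {
      cohesion.div(neighbours);
      vel.add(PVector.sub(cohesion, pos).mult(0.002));
    }
    vel.add(separation.mult(0.3));
    vel.limit(2);
    pos.add(vel);
    pos.x = constrain(pos.x, 0, width);
    pos.y = constrain(pos.y, 0, height);
  }

  void display() {
    noStroke();
    fill(50, 50, 200);
    ellipse(pos.x, pos.y, 6, 6);
  }
}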

My very first thought when experimenting with these programs was: what is the difference between this and the complex behaviours we experience in video games or in movies (such as the large, generated crowds in Lord of the Rings)?

The first and most obvious of these differences is the purpose. Agent-based and other models are designed to provide scientific prediction of future, real-world events. Computer games are constructed primarily for enjoyment / entertainment (though they can sometimes seem very realistic).

In video games, the human player takes control of the model and decides what effects occur, whereas in ABMs the input is largely derived from data and defined conditions. In real-world simulations, agent behaviours are based on theories of human (or other agents’) behaviour, while the behaviours of video game agents are based on plot points, are largely fictional, and are generally better looking.

Where these might collide is in placing humans within the agent-based simulation – such as through immersive gaming experiences offered by the Oculus Rift. It is also interesting to think about the emergent behaviour of humans interacting with each other virtually within video game environments – such as in massively-multiplayer games.

1) Blocks following a terrain and avoiding a teapot
Uses: 3D Studio Max

Simple Agents in 3D Studio Max from Oliver Lock on Vimeo.

2) Gravity simulation of particles on a generated city-scape
Uses: Greeble, 3D Studio Max

Simple Agents in 3D Studio Max II from Oliver Lock on Vimeo.

3) Pedestrian movements in a built environment
Uses: CityEngine, 3D Studio Max

4) Pedestrians walking through a building
Uses: CityEngine (3D Model), 3D Studio Max

CityEngine model with 3DS Max Pedestrian Flows from Oliver Lock on Vimeo.

A quick example of these techniques put into more complex, real-world practice is this stadium evacuation produced by Redfish.

5) Agent-Based Model of Crowd Dynamics During Disaster Evacuation

Stadium Evacuation from stephen guerin on Vimeo.

In terms of sharing these models, I recently discovered P3D, which allows you to share very clean 3D models in your browser. Integrating simple ABMs into these would be a great way to communicate their results.


Bus flows in Processing


This visualisation was created as part of a short exercise for a digital visualisation module of my current studies.

It was coded in the software/language Processing. The challenge of this task was making sure each bus was treated as an individual object/agent, and creating a global timer that dictates not only where but also when each object appears.
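Roughly, the structure looked like the sketch below – a reconstruction rather than the actual code, with made-up stop positions and times standing in for the real bus data. A global clock advances each frame, and each Bus object interpolates its own position and only draws itself when the clock falls within its scheduled times.

// Object/timer sketch (illustrative reconstruction, not the original code).
float clock = 0;   // global timer, in seconds of simulated time
Bus[] buses;

void setup() {
  size(800, 600);
  // two toy buses; in practice these would be built from recorded bus data
  buses = new Bus[] {
    new Bus(new float[] {0, 60, 120},
            new PVector[] {new PVector(50, 50), new PVector(400, 100), new PVector(750, 300)}),
    new Bus(new float[] {30, 90, 150},
            new PVector[] {new PVector(700, 550), new PVector(400, 400), new PVector(100, 500)})
  };
}

void draw() {
  background(20);
  clock += 0.5;    // advance simulated time each frame
  for (Bus b : buses) b.display(clock);
}

class Bus {
  float[] times;    // when the bus reaches each stop
  PVector[] stops;  // where each stop is

  Bus(float[] times, PVector[] stops) {
    this.times = times;
    this.stops = stops;
  }

  // interpolate the bus position for the current clock time;
  // draw nothing before the first stop or after the last
  void display(float t) {
    if (t < times[0] || t > times[times.length - 1]) return;
    for (int i = 0; i < times.length - 1; i++) {
      if (t >= times[i] && t <= times[i + 1]) {
        float f = map(t, times[i], times[i + 1], 0, 1);
        float x = lerp(stops[i].x, stops[i + 1].x, f);
        float y = lerp(stops[i].y, stops[i + 1].y, f);
        noStroke();
        fill(100, 150, 255);
        ellipse(x, y, 8, 8);
        return;
      }
    }
  }
}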

Visualising this movement was also a challenge – without trails of the previous bus movement, it just looked like a twinkling display of unrelated dots. Using unique colours to identify buses was an alternative idea, but resulted in a bit of cognitive overload. The version here ended up using very slight differences in shade of blue for each bus. This technique seems particularly effective on the edges of the city, and becomes less effective where there is a high density of buses clustered near each other.

Another interesting experiment was drawing uniquely coloured lines as each bus progressed along its route.
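Both display ideas – a slight per-bus shade of blue and coloured lines building up along routes – come down to a couple of Processing tricks, sketched below. This is an illustrative reconstruction, with random walkers standing in for real bus routes: each agent is given its own shade at construction, and instead of clearing the screen a translucent rectangle is drawn every frame, so older segments fade into trails.

// Trails + per-agent shading sketch (illustrative, random walkers stand in for buses).
int N = 12;
PVector[] pos = new PVector[N];
PVector[] prev = new PVector[N];
color[] shade = new color[N];

void setup() {
  size(800, 600);
  background(10, 10, 20);
  for (int i = 0; i < N; i++) {
    pos[i] = new PVector(random(width), random(height));
    prev[i] = pos[i].copy();
    // each "bus" gets its own slight shade of blue, fixed at construction
    shade[i] = color(60 + random(80), 100 + random(80), 255);
  }
}

void draw() {
  // translucent overlay instead of a full clear, so past segments fade into trails
  noStroke();
  fill(10, 10, 20, 12);
  rect(0, 0, width, height);

  for (int i = 0; i < N; i++) {
    prev[i].set(pos[i]);
    pos[i].add(random(-4, 4), random(-4, 4));        // stand-in for real route movement
    pos[i].x = constrain(pos[i].x, 0, width);
    pos[i].y = constrain(pos[i].y, 0, height);
    stroke(shade[i]);
    strokeWeight(2);
    line(prev[i].x, prev[i].y, pos[i].x, pos[i].y);  // uniquely coloured route line
  }
}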


Quantified Self & Mapping

The quantified self (QS) movement is intrinsically about knowing more about your most important asset – yourself – in order to make better decisions. There are many applications out there that allow you to record daily activities – be it mood, food consumed or movement. In truth, the great majority of these probably provide little more actual utility than simply writing things down with pen and paper would.

When can these quantified self applications be effective? Some applications, such as ‘OptimizeMe’, take an integrated approach to these measurements and give intelligent suggestions such as ‘If you get more sleep on Sunday you will have a better mood on Monday’. With these technologies emerging, it feels as if the next step will be integrating self-data with real-world data – such as the application suggesting: ‘Today is the sunniest morning of the week – maybe you should walk to work’. Can having these kinds of suggestions available make people, en masse, happier, healthier or more active than they already are?
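As a purely hypothetical illustration of that integration, the logic behind such a suggestion could be as simple as a rule combining one self-tracked variable with one external data feed. The function, thresholds and wording below are all invented for the example:

// Toy suggestion rule (entirely hypothetical): combines a self-tracked
// variable (sleep) with an external feed (weather).
String suggest(float hoursSlept, boolean sunnyMorning) {
  if (hoursSlept < 7) return "If you get more sleep tonight you may have a better mood tomorrow.";
  if (sunnyMorning)   return "Today is the sunniest morning of the week - maybe you should walk to work.";
  return "Keep it up!";
}

void setup() {
  println(suggest(6.5, true));  // prints the sleep suggestion
}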

As part of our MRes coursework we are working on creating visual representations of ‘the active city’. One element my group identified was physical activity, and we are currently exploring different QS datasets and how these might be incorporated.

At the moment I am testing out the ‘Moves’ application, which seamlessly records your movement throughout the day; the data can then be extracted and visualised using the Moves API. So far it has been impressive in its ability to deduce when I am travelling on foot, when I am using transport, and at which venues I spend hours on the computer.
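A sketch of reading that data back in Processing is below. It assumes a file "storyline.json" exported from the Moves API storyline endpoint and saved into the sketch's data folder; the field names (segments, activities, trackPoints, lat, lon) follow my reading of the storyline format and may need adjusting against the actual API response.

// Plot recorded Moves tracks (assumed storyline format; see caveats above).
JSONArray days;

void setup() {
  size(800, 800);
  background(255);
  stroke(30, 30, 200, 120);
  days = loadJSONArray("storyline.json");  // one object per recorded day
  for (int d = 0; d < days.size(); d++) {
    JSONObject day = days.getJSONObject(d);
    if (!day.hasKey("segments")) continue;
    JSONArray segments = day.getJSONArray("segments");
    for (int s = 0; s < segments.size(); s++) {
      JSONObject seg = segments.getJSONObject(s);
      if (!seg.hasKey("activities")) continue;  // "place" segments carry no track
      JSONArray acts = seg.getJSONArray("activities");
      for (int a = 0; a < acts.size(); a++) {
        JSONObject act = acts.getJSONObject(a);
        if (act.hasKey("trackPoints")) drawTrack(act.getJSONArray("trackPoints"));
      }
    }
  }
}

// crudely map lat/lon to screen space and draw the recorded track
void drawTrack(JSONArray pts) {
  for (int i = 1; i < pts.size(); i++) {
    JSONObject p0 = pts.getJSONObject(i - 1);
    JSONObject p1 = pts.getJSONObject(i);
    // the bounding box values below are placeholders - set them to your data's extent
    float x0 = map(p0.getFloat("lon"), -0.20, -0.05, 0, width);
    float y0 = map(p0.getFloat("lat"), 51.45, 51.55, height, 0);
    float x1 = map(p1.getFloat("lon"), -0.20, -0.05, 0, width);
    float y1 = map(p1.getFloat("lat"), 51.45, 51.55, height, 0);
    line(x0, y0, x1, y1);
  }
}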


There is also a GitHub example of visualising this data through Processing available!

More information about this viz.

