A Computational Implementation of Jane Jacobs’ Generators of Diversity

Part of the reason for the frequent recent mentions of Jane Jacobs on this blog is that I’ve been developing a ‘Computational Implementation of Jane Jacobs’ Generators of Diversity’ as part of my recently completed dissertation at CASA UCL.

Jacobs posits that diversity fuels webs of social and economic activity that energise the autocatalytic processes that are both the cause and consequence of the agglomeration of people in cities. These networks facilitate the emergence of varied and complex relationships that evolve in response to emerging opportunities and constraints, thereby permitting the ongoing development, adaptation, and resilience of urban systems. Jacobs argues that cities must be sufficiently heterogeneous and granular for these processes to unfold without hindrance, for which purpose she proposes the four “generators of diversity”. These briefly consist of mixed-uses; short-blocks; buildings of varied age and condition; and a dense concentration of people. At first glance these may appear simplistic, but they actually rest on a deeply reasoned conception of complex systems and urban economic processes. (See my last post.)

Some of the logic underlying the theoretical approach can be gleaned from my previous posts, but at the end of the day it boils down to maximising the combinatorial possibilities for varied economic relationships to form, and providing the necessary flexibility for these relationships to vary and adapt across the space-time fabric of cities. In other words: the greater the diversity of land-uses, the greater the number of route choices, the greater the economic opportunities for fledgling businesses (and no, this is not about heritage buildings), and the greater the number and diversity of people, the more ways neighbourhoods can adapt and co-evolve. Crudely, the more potential and flexibility to connect more things in more ways at varied places and times throughout the day, the better.

The (wicked) problem with applying performative and parametric design at the urban scale is that complex systems don’t exactly yield themselves to prediction or control. In the parlance of complexity science, this means sensitivity to initial conditions, path dependence, phase transitions, multiple equilibria, and all the novel ways for positive and negative emergent feedback processes to occur. (Don’t get me wrong, modelling of complex systems can still be hugely important for understanding them.) A large part of the issue is that the physical structure of cities is just the tip of the iceberg; more important are the relationships between places and spaces rather than the places and spaces themselves. This point is made aptly by Mike Batty in The New Science of Cities: “…cities must now be looked at as constellations of interactions, communications, relations, flows, and networks, rather than as locations … location is, in effect, a synthesis of interactions…” (p.13) The bottom line is that spatial structures and flows of people, information, and resources co-evolve in very complicated ways.

While this complexity leaves us rather flustered when it comes to planning cities, Jacobs’ approach is actually very simple and effective. By arguing for an urban substrate that is sufficiently granular, heterogeneous, and porous, she is effectively saying that we need to provide a requisite level of flexibility for these processes to go about their co-evolutionary development, with all the unpredictability that entails. This is necessarily a bottom-up and epigenetic process, and it permits self-organisation and — this can hardly be stressed enough — the capacity for constant and dynamic change. A large aspect of her argument is really that we shouldn’t get in the way by imposing overly abstract and rigid urban structures that severely hamper these processes from unfolding over time and through space. (A solid overview of these themes can be found in Stephen Marshall’s Cities, Design, and Evolution.)

The computational implementation uses Jacobs’ qualitative discussion of the “generators” as a departure point. Due to Jacobs’ inductive approach, there is no single interpretation for their implementation as computational tools. A degree of subjectivity and interpretation therefore remains, and there is a recognition that the computational expression of the measures will be affected to some extent by the available computational techniques and the nature of the data to which they are applied. That said, the implementation does stay focused on the core mechanisms at the heart of Jacobs’ arguments: chiefly, diversity, and working from the ‘particular’ to the ‘general’ (from the small to the large / from the bottom-up). The measures are respectively implemented using a weighted diversity index; a graph centrality measure based on route complexity; the coefficient of variation of non-domestic property valuations; and a density measure based on the number of addressable locations.
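To make the statistical side of this a little more concrete, here is a minimal Python sketch of the two simpler measures. The function names and the optional weighting are illustrative stand-ins; the dissertation’s actual weighted diversity index is more involved than this.

```python
import numpy as np

def shannon_diversity(counts, weights=None):
    # Illustrative stand-in for the weighted diversity index: Shannon
    # entropy over land-use class counts, with optional per-class weights.
    # (The dissertation's actual weighting scheme is not reproduced here.)
    c = np.asarray(counts, dtype=float)
    if weights is not None:
        c = c * np.asarray(weights, dtype=float)
    p = c[c > 0] / c.sum()
    return -np.sum(p * np.log(p))

def coefficient_of_variation(valuations):
    # Scale-free dispersion of non-domestic property valuations: std / mean.
    v = np.asarray(valuations, dtype=float)
    return v.std() / v.mean()
```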

The data comes from several sources combined into a Neo4j graph database, including the Ordnance Survey’s ITN street network dataset, Ordnance Survey AddressBase land-use classifications data, and the Valuation Office Agency’s non-domestic property valuations. (Crown Copyright, All Rights Reserved.) A large part of the implementation was the development of a localised computational methodology that iterates the measures for each street segment based on local attributes within a range of network path threshold distances (150m–1200m). The computational analysis is done with Python using a variety of packages and modules (Pandas, Scipy, Graph-Tool, Shapely, Fiona, Py2Neo, etc.). I won’t go into detail here because it becomes fairly involved.
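To give a flavour of the localisation logic (and no more than that), here is a simplified sketch. It substitutes networkx for the Graph-Tool / Neo4j pipeline purely for brevity, and the intermediate threshold values are assumed.

```python
import networkx as nx

# Network path distances in metres; intermediate steps are assumed.
THRESHOLDS = [150, 400, 800, 1200]

def localised_measures(G, node, measure_fns):
    # For each threshold, extract the sub-graph reachable from the given
    # street node along distance-weighted paths, then evaluate each measure
    # on that local extract.
    results = {}
    for d in THRESHOLDS:
        sub = nx.ego_graph(G, node, radius=d, distance='length')
        results[d] = {name: fn(sub) for name, fn in measure_fns.items()}
    return results
```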

The methods are applied to a comprehensive list of 535 town and city boundaries for England and Wales, as defined by Arcaute et al. (2013). There is notable variability in the results between cities and a generally insignificant correlation to population size, thus indicating differences within and between the cities based on their local properties. On average, the results for British “New Towns” tend to rate lower across all measures and correlations. The New Towns also tend to have fewer local retail and office establishments, lower local non-domestic valuations, larger non-domestic occupant areas (less granularity), and fewer overall distinct land-use classifications.

The data was also used to explore Jacobs’ assertion that diversity, and the subsequent complexity of inter-relationships, increases for larger cities. By comparing the number of distinct land-use types across towns and cities in England and Wales, it was confirmed that the number of land-use types increases with the size of the city, and that this relationship can be modelled with the species–area laws observed in ecology.
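For reference, the species–area relation takes the form S = cA^z, which becomes a straight line in log-log space. A minimal sketch of the fit, with hypothetical input arrays (one entry per town or city):

```python
import numpy as np

# sizes: town/city size (e.g. population); n_types: distinct land-use types.
# Both arrays are hypothetical placeholders for the actual datasets.
z, log_c = np.polyfit(np.log(sizes), np.log(n_types), 1)
print(f"S ≈ {np.exp(log_c):.2f} * A^{z:.3f}")
```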

I feel it important to note that, in the overall scheme of things, the above is only a tentative and explorative step. There is tremendous complexity here, and much work to be done in teasing apart the relationships between these measures, let alone consolidating them into a combined measure. However, I think that it is safe to say that Jacobs’ emphasis on autocatalytic webs of relationships generated by diversity, and the need to work from the small to the large, offers a potent foundation for further exploration. At the moment, my hope is to focus on a live web-browser version that runs on open data sources.

cities complexity jane jacobs computation diversity

The Valuable Inefficiencies of Cities

The best solutions to problems aren’t always the most obvious. This is why we value creative thinkers and the odd-ball eccentric genius sorts who arrive at fresh new ways of looking at the world. Our cities work the same way. Let me explain.

It is a natural inclination to want to build on what we’ve done before; to extend, to refine, to optimise. However, this is also a form of path dependence and lock-in, and it can end up confining our options and stunting the solution space for difficult challenges. In the short-term, it is often more predictable and efficient to shun experimentation and build directly on what we’ve done before, to tinker with the same-old instead of innovating afresh. But it is precisely in the realm of the unpredictable and the ‘inefficient’ that creativity and accidental discovery occur, and that eclectic old ideas can be combined in novel new ways. What this means is that we need to explore, experiment, and all the while question the status quo if we are to innovate and adapt; from a bigger-picture and longer-term perspective, it is quite necessary to throw things out the window and reinvent the wheel from time to time.

Put differently, longer-term resiliency requires an investment in these shorter-term ‘inefficiencies’. Whether our brains or our cities, this is how complex systems work. Longer-term adaptation and success are subject to how effectively systems can engage in both exploration and consolidation in order to identify novel opportunities and channel them to their advantage. Complex systems are particularly good at this because they utilise distributed parallel exploration of a wide range of potential solutions. The ‘fittest’ of these survive to congeal and increasingly dominate from iteration to iteration, but in many cases they also have a tendency to decay and erode once they are no longer useful, thereby making way for new processes to take root. As a result of this tremendous diversity and malleability, complex systems often excel at developing innovative solutions to unique, changing, and complicated problems. You could argue that messy parallel explorative processes are inefficient, sometimes slow, and often ‘expensive’, but you can’t really argue against the longer-term resiliency exhibited by such systems.

Keeping the above in mind, in a recent post I discussed the emerging dissent against the top-down narrative employed by some proponents of smart cities. Interestingly (and encouragingly) a recurring theme in these critiques is the problematic notion of top-down engineered ‘efficiency’ as some sort of holy grail to be attained by cities, essentially a throwback to the ideologies promoted within modernist architecture and planning. At the time, Jane Jacobs (surprise, surprise) homed in on the topic of efficiency, devoting a chapter to the valuable inefficiencies of cities in her book, The Economy of Cities, but this concept is actually at the root of her entire hypothesis. To dispel any doubt, she adamantly proclaims that she “does not mean that cities are economically valuable in spite of their inefficiency and impracticality but rather because they are inefficient and impractical” (The Economy of Cities, p.228). The point Jacobs is making is that the very mechanisms underlying these perceived ‘inefficiencies’ are actually the driving force behind the development of cities, and the underlying reason for the agglomeration of people in the first place.

Jacobs’ complaint with the mantra of efficiency is nicely summed up by Sandy Ikeda:

…mainstream economics stopped thinking about markets as urban, and replaced it with what Jacobs called the “plantation model,” in which diversity of inputs and outputs and the uncertainties of time were replaced with simple production functions in a world where time doesn’t matter and preferences don’t change. The emphasis switched from diversity and complexity to homogeneity and simplicity, from dynamics to statics, and from creativity to efficiency. Mainstream economics is fixated on this notion of “efficiency” … For Jacobs the successful city is inherently inefficient, and that’s a good thing because a successful city is an incubator of new ideas, where ordinary people, not just “creative types,” can be innovative. Innovation, trial and error, can be messy and inefficient.

Jacobs took issue with conventional views of urban economics and planning because the fixation on “efficiency” meant that diversity was ignored and the webs of relationships that facilitate processes of economic discovery and diffusion were severed. She describes such an approach as ‘preformationist’, meaning that growth is erroneously assumed to be the result of the simple quantitative expansion of pre-existing structures and static relationships. This gives the impression that cities are predictable and that their processes can be optimised, but ends up leading to grotesque abstractions and simplifications instead.

Conversely, the ‘epigenetic’ conception of growth occurs as the result of a cascading “web of interdependent co-developments” (The Nature of Economies, p.19). The agglomeration of people in cities thus facilitates diverse and dynamic assortments of economic and social relationships, providing a framework for the autocatalytic processes of co-evolution to unfold. Fuelled by diversity and competition, ideas and relationships can be intertwined in ever new and unique ways. (This view of development ties nicely to Stuart Kauffman’s concept of ‘supercriticality’. Effectively, diversity combined with a sufficient potential for inter-relationships can fuel an explosion of combinatorial possibilities, which can set off autocatalytic bouts of co-evolution.)

By fostering diversity and maximising opportunities for all manner of economic exchange, cities are thus able to expand their capacity to harness flows of energy (ideas, capital, resources) and, in Jacobs’ view, this is how the actual growth occurs. In an abstract sense, cities are dynamic dissipative structures that constantly adapt in order to seek out, harness, and maximise the use of these flows. In a more intuitive sense, Jacobs likens this to tropical rainforests that have a sufficiently diverse assortment of species to extract maximal benefit from the sun’s energy cascading through intertwining webs of life.

Jacobs argues that cities (and by extension, planning ideologies and economic policies) that sacrifice this economic diversity for efficiency will sooner or later suffer from stagnation (think Detroit). The reason is that they lack the necessary diversity to nurture the ongoing development of industry through processes of discovery, diffusion, and co-evolution, and are therefore unable to generate a supply of new industries to support the future economy. The catch is that a good deal of experimentation and failure — inefficiency — typically occurs for every significantly successful breakthrough. To bring this full-circle, the creative mind works the same way, and in the words of Isaac Asimov, “For every new good idea you have, there are a hundred, ten thousand foolish ones”.

For some time, Jacobs’ ideas on economics went largely ignored. This changed in 1988 when Robert Lucas (who later went on to win a Nobel prize) found inspiration in The Economy of Cities for his work on incorporating the effects of human capital — effectively the diffusion of knowledge and skills — to explain growth in neo-classical economic models. More generally, economists, particularly urban economists, are increasingly cognisant of the importance of diversity in economies, and of Jacobs’ role in formulating such a view. The beneficial effect of knowledge cascades through diverse economies is now known as ‘Jacobs Externalities’ or ‘Jacobs Spillovers’. This concept has significantly influenced Edward Glaeser, who was part of a team that showed that diversity benefits economic growth (though how he can claim Jacobs was against density, and for preserving heritage at the expense of affordability, is beyond me), and Richard Florida, who formulated the 'creative class' spin on the ‘human capital’ concept.

In spite of being written almost fifty years ago, Jacobs’ observations provide a useful lens for critiquing current approaches towards cities, impressing upon us the need not to sacrifice epigenetic processes for the sake of utopian notions of efficiency:

Provided that some groups on earth continue either muddling or revolutionising themselves into periods of economic development, we can be absolutely sure of a few things about future cities. The cities will not be smaller, simpler or more specialised than cities of today. Rather, they will be more intricate, comprehensive, diversified, and larger than today’s, and will have even more complicated jumbles of old and new things than ours do. The bureaucratised, simplified cities, so dear to present-day city planners and urban designers, and familiar also to readers of science fiction and utopian proposals, run counter to the processes of city growth and economic development. Conformity and monotony, even when they are embellished with a froth of novelty, are not attributes of developing and economically vigorous cities. They are attributes of stagnant settlements. To some people, the vision of a future in which life is simpler than it is now, and work has become so routine as to be scarcely noticeable, is an exhilarating vision. To other people, it is depressing. But no matter. The vision is irrelevant for developing influential economies of the future. In highly developed future economies, there will be more kinds of work to do than today, not fewer. And many people in great, growing cities of the future will be engaged in the unroutine business of economic trial and error. (The Economy of Cities, p. 251)

Tying this back to Smart Cities, the explosion of combinatorial possibilities due to developments in ubiquitous computing and networking means bottom-up opportunities for innovation, diversification, and co-development as never before. If we follow Jacobs’ prescription (she was ever the sort to provide concrete opinions) then we need to provide adequate capital to support a high birth-rate of innovative businesses and industries (think of the potential in legions of ‘makers’ armed with Arduinos, Edisons, and 3d printing…but supported with capital) and to actively protect these fledglings from the sometimes anti-competitive interests of incumbents. “One of the most expensive things an economy can buy is economic trial, error and development. What makes the process expensive are the great numbers of enterprises that must find initial capital — which must include those that will not succeed — and the great numbers that must then find relatively large sums of expansion capital as they do begin to succeed.” (Ibid, p.228) Whereas many of these initiatives will dead-end or fail, this remains a necessary part of the collective societal process of creativity and innovation. While expensive, this is money well invested because it reaps the dividends of future development. Furthermore, the sums of money required for incubating innovation are not necessarily any greater than the huge sums of money oft wasted in top-down contrivances that lead to stagnation and lock-in instead. The crux is that citizens must be given the opportunity to identify their own opportunities and develop their own solutions as only they can.

PS - It may come as a surprise that Jacobs’ primary interest was not urbanism but economics, and she felt that this was where she had made her greatest contributions. Indeed, to properly understand Jacobs’ arguments pertaining to urbanism, it is necessary to understand her economic arguments, and that she saw the workings of cities and economies as one and the same. If curious about Jacobs’ views on economics, the following papers are well worth reading:

Ellerman, D. 2005. How Do We Grow?: Jane Jacobs on Diversification and Specialization. Challenge, 48(3), pp. 50-83.

Glaeser, E., Kallal, H., Scheinkman, J., Shleifer, A. 1992. Growth in Cities. Journal of Political Economy, 100(6), pp. 1126-52.

Ikeda, S. & Calvino, I. 2012. Economic Development from a Jacobsian Perspective. The Urban Wisdom of Jane Jacobs, p. 63.

Lucas, R. 1988. On the Mechanics of Economic Development. Journal of Monetary Economics, 22(1), pp. 3-42.

Quigley, J.M. 1998. Urban Diversity and Economic Growth. Journal of Economic Perspectives, 12, pp. 127-138.

smart cities jane jacobs complex systems efficiency

Self-organisation at the ‘edge of chaos’ between slow-fast systems.

Pattern changes in snow accumulation on successive bridge railings.

I took these pictures one miserably cold winter morning in Winnipeg. Note the potency and variety of snow pattern formations at the intersection of slow-fast system dynamics: in this case the wind (fast), snow (medium), and bridge railings (slow). This is really just a way of saying that interesting information exchanges and feedback loops occur at the intersection of these slow-fast thresholds. The crux is that the information exchanges are happening at an emergent scale that transcends the wherewithal of the individual systems. In the case of inert components (as in these pictures) this edge of information exchange is only optimal when the conditions are ideal; however, systems consisting of living agents seem to have some capacity to naturally gravitate towards this “edge of chaos” as part of a co-evolutionary dynamic.

The same dynamics are at work in cities, where multiple interacting systems transfer information back and forth over varying time-scales, leading to epigenetic processes of development and decay: such as the movement of citizens (fast), changes in land-uses (medium), and the spatial distribution of infrastructure (slow). Overly abstract, rigid, or prescriptive forms of urbanism prevent these processes from unfolding and subsequently co-evolving because they hinder these emergent information exchanges. Food for thought, given the ultra-fast dynamic increasingly introduced by technology… and that, to this day, most new urban development continues to lack the necessary heterogeneity and granularity to facilitate these information exchanges, thus compromising the resilience and vibrancy of our cities.

snow wind complexity information

Time for a new smart cities narrative.

It is a good thing that a growing chorus is calling for a more nuanced discussion of the narratives applied to smart cities and the internet of things. See, for example, Bruce Sterling’s “The Epic Struggle of the Internet of Things”, Adam Greenfield’s “Against the Smart City”, Dan Hill’s “On the smart city; Or, a ‘manifesto’ for smart citizens instead”, and Anthony Townsend’s “Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia”.

I should clarify that I’m not anti smart cities or IoT. In fact, I am very much for them, but I do think that we need to be more careful about letting top-down formulations dominate the discussion because they risk giving an overly simplistic view of how cities work. In spite of significant advances in coming to grips with cities and society as self-organising systems of systems, there is still an overwhelming tendency to approach them in a reductionist manner, thus giving the impression that if only we had enough information and control, then we could magically conjure utopian cities out of a hat (or black-box). If left untempered, this narrative risks passing from marketing literature into the public psyche, taking with it the impression that all-powerful algorithms hold the promise of ultimate efficiency and order for cities through the omniscient observation and manipulation of their processes.

The positioning of the narrative is interesting because we observed quite recently that the application of a reductionist lens to cities and city processes often spells trouble. This occurred under the banner of ‘modernism’ and the backlash that became known as postmodernism. On a philosophical, sociological, and cultural level some significant headway was made in elucidating these mindsets. (See, for example, David Harvey’s “The Condition of Postmodernity” and, for a shorter synopsis, David Lyon’s “Postmodernity”.) Many modernist-era architects and planners believed that new-found mastery of cities through all manner of technological prowess allowed the messiness of cities and society to be tamed through the sanitisation of their processes and the replacement of chaos with rational purity. (Perhaps it is no coincidence that Le Corbusier was initially destined for the Swiss watchmaking industry?) Cities, and the buildings in them, were to be seen as machines: efficient, organised, and above all, controllable. This ideology and its inkling of power took architecture and planning schools by storm, and it required the plain-speaking frankness of Christopher Alexander and, particularly, Jane Jacobs, to point out (much to the consternation of architects and planners of the time) that the complexity that modernism sought to remove from cities was actually its energising lifeblood. Cities are dead without it. Fast-forward fifty-odd years and complexity science is telling us the same thing. Cities, and indeed all complex systems, thrive on complexity for the diffusion of energy and information in all kinds of interesting ways.

So why then, given the benefit of hindsight and the robust theoretical framework offered by complexity science, are we heading towards the quandary of reductionism all over again? Perhaps it is just the enduring allure of utopia. Though the answer may also be that whereas some are still quite painfully aware of this recent history, those framing the top-down smart cities narrative are frequently (though not necessarily) less so.

The solution is as before. Cities can’t be tamed, and attempts to sanitise or prescribe their processes will generally come up short. It is not control but its decentralisation that matters; we need not more information but more contextually meaningful information and more open access to it; and it is not technology but its usefulness in the hands of citizens that truly empowers. The subtle but important distinction is that, given sufficient access to information and communication, the self-organisation of smart citizens from the bottom-up is far more potent and resilient than any engineered or contrived top-down formulation, which is inevitably brittle and biased.

Our cities are living organisms that are more conscious and capable than ever before. We need to learn to trust the capacity of citizens to innovate, and we need to give them sufficient access to information, technology, and capital to do so. It is in this realm of the complex and the unpredictable that innovation resides and that our cities truly become smart.

urbanism cities smart cities internet of things complex systems

Using localised network complexity as a graph centrality measure

I’ve been running some experiments with graph centrality indices in an attempt to find an effective method to describe Jane Jacobs’ thinking behind her argument for ‘the need for small blocks’.

Centrality indices are a way of identifying the most important edges (links) or vertices (nodes) in a graph (network), and can tell you things like which people are the most influential in a social network, or which roads are the most likely to experience traffic and liveliness. In the case of street networks, we are ordinarily dealing with ‘planar’ (flat) graphs consisting of intersections and streets. The graph can be represented in its ‘primal’ form, where the intersections are nodes and the streets are links, or, as is the case with Space Syntax, the graph can be represented in its ‘dual’ form, where the streets are represented as nodes. There are also different ways to describe what exactly constitutes a ‘street’, such as traditional implementations of Space Syntax that utilise unobstructed lines of sight, i.e. straightness.

For the sake of intuition and experimentation, I am here using the primal representation, where intersections are nodes connected by distance-weighted street links. Since I am trying to zero in on Jacobs’ discussion and examples, I’ve created a little test scenario which consists of two clusters of Manhattan grids, asymmetrically connected by longer street segments in order to sift out what works better at global or local scales. Some of the block clusters also have additional streets at the mid-points of the blocks in order to identify which centrality indices better align with Jacobs’ discussion about smaller street blocks in the context of her Rockefeller Plaza example.

Some better known examples of centrality indices include ‘closeness’ centrality, which effectively tells you how close a particular node is to all other nodes, and ‘betweenness’ centrality, which tells you how many times a particular node appears on the shortest paths between all other nodes. At first glance, what Jacobs was describing sounds a little bit like betweenness, because she mentions that many different paths need to come together to support pools of economic use.
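For concreteness, here is a toy version of this setup in networkx (my sketch, not the code used for the pictured experiments; the grid dimensions and the 80m block length are arbitrary assumptions):

```python
import networkx as nx

# A toy Manhattan grid: intersections as nodes, distance-weighted streets as links.
G = nx.grid_2d_graph(8, 8)
for u, v in G.edges():
    G[u][v]['length'] = 80.0  # assumed uniform block length in metres

closeness = nx.closeness_centrality(G, distance='length')
betweenness = nx.betweenness_centrality(G, weight='length')
```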

Yet, betweenness doesn’t really tell the whole story. On a global level, betweenness paints an accurate picture of where the most commonly used streets are likely to be, including some isolated and longer streets that are centrally located between all other nodes, but that are in some cases unlikely to be successful with pedestrians.

A different picture emerges with localised betweenness, for which I’ve used a 600m walking radius. An algorithm cycles through all nodes in the graph, figures out which other nodes are within a distance of 600m, and then runs a localised betweenness on the resultant localised graphs.
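In networkx terms (continuing the toy sketch above), the localised variant amounts to scoring each node by its betweenness within its own 600m ego sub-graph:

```python
import networkx as nx

def localised_betweenness(G, radius=600.0):
    # For every node: extract the sub-graph within the walking radius along
    # distance-weighted paths, run betweenness on it, and keep the node's
    # own score within that local context.
    scores = {}
    for node in G.nodes():
        sub = nx.ego_graph(G, node, radius=radius, distance='length')
        scores[node] = nx.betweenness_centrality(sub, weight='length')[node]
    return scores
```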

In the pictured results, the lower cluster resembles something like a ‘closeness’ centrality, whereas the upper cluster shows a similar but slightly higher average score due to the additional origins and destinations. However, the mid-block streets rank quite low because they aren’t technically on the most direct routes.

Looking for a way to better identify these mid-block streets, I then tried the Alpha (meshedness), Beta, and Gamma indices. Among themselves, they yield similar results and roughly describe which parts of the graph are more densely connected than other parts. But, while definitely closer to what I’m trying to measure, they demonstrate some idiosyncrasies, such as more centralised nodes scoring lower than less centralised nodes.
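For reference, these indices have standard formulations for planar graphs, computed here per localised sub-graph:

```python
def alpha_beta_gamma(sub):
    # Kansky-style connectivity indices for a planar (sub-)graph:
    # alpha (meshedness): actual vs. maximum possible independent cycles;
    # beta: edges per node;
    # gamma: actual vs. maximum possible edges.
    v, e = sub.number_of_nodes(), sub.number_of_edges()
    alpha = (e - v + 1) / (2 * v - 5) if v > 2 else 0.0
    beta = e / v
    gamma = e / (3 * (v - 2)) if v > 2 else 0.0
    return alpha, beta, gamma
```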

Returning to Jacobs’ argument: what she was describing can be thought of as a form of network porosity and route-choice complexity. In other words, a street network that allows for multiple interweaving route-choice combinations that can support maximal access options to local streets. This is a natural fit for an information entropy measure, so I experimented with a form of (Shannon’s) information entropy to measure the amount of route-choice information contained in the localised sub-graphs. Without describing the details, the index calculates the information content of the sub-graph’s route choices, assuming all route choices are equally probable.
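Because all route choices are treated as equally probable, the entropy collapses to the logarithm of the number of distinct routes. A simplified sketch (the pairwise formulation and the hop cutoff are my own assumptions, not the index as implemented):

```python
import math
import networkx as nx

def uniform_route_entropy(sub, source, target, cutoff=6):
    # Count the distinct simple paths (route choices) between two nodes;
    # with equal probabilities, Shannon entropy reduces to log2(N).
    n = sum(1 for _ in nx.all_simple_paths(sub, source, target, cutoff=cutoff))
    return math.log2(n) if n else 0.0
```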

The results are starting to get closer to the notions of porosity and complexity, but because this formula is applied to the localised sub-graph in a blanket-like fashion, it doesn’t necessarily encompass some of the routing subtleties of the network emanating outwards from each of the respective nodes. This isn’t immediately obvious in the Manhattan-grid test case, but it is subtly present in more varied graphs. I therefore tried another approach: using a breadth-first outwards search to enumerate the route choices emanating from each node and their consequent probabilities, and then using this information to compute the node’s information entropy index.
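A sketch of this second approach, under two simplifying assumptions of my own: probability splits equally at each junction, and the outwards walk is capped at a fixed depth rather than a metric distance:

```python
import math
from collections import deque

def outward_route_entropy(G, source, max_depth=4):
    # Breadth-first outwards search: at each intersection the remaining
    # probability is split equally among unvisited onward streets. The
    # terminal probabilities of the distinct routes then feed Shannon's
    # formula: H = -sum(p * log2(p)).
    probs = []
    queue = deque([(source, frozenset([source]), 1.0, 0)])
    while queue:
        node, visited, p, depth = queue.popleft()
        choices = [n for n in G.neighbors(node) if n not in visited]
        if depth == max_depth or not choices:
            probs.append(p)  # this route terminates here
            continue
        for n in choices:
            queue.append((n, visited | {n}, p / len(choices), depth + 1))
    return -sum(p * math.log2(p) for p in probs)
```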

Out of these and several other approaches I tried, this one is my favourite because it results in a granular description of localised route complexity, which identifies the Rockefeller Plaza type of example used by Jacobs. More generally, information entropy is used as the basis of several diversity measures, which are used to gauge diversity in systems ranging from economics to ecosystems. It is perhaps fitting that such indices are useful for describing Jacobs’ conception of diversity as a key driver in cities as complex systems.

cities complexity Jane Jacobs centrality graphs

Visualising Disabled Freedom Pass trips on the London Underground

Disabled Freedom Pass Card journeys on the London Underground from Gareth Simons on Vimeo.

This video is a follow-up to my earlier post. It is the completed visualisation of Disabled Freedom Pass trips on the London Underground network. The bright orange lines represent the Disabled Freedom Pass (DFP) trips, whereas the white lines represent all other oyster card users. In the hands-on application version, the user can navigate the scene in realtime with the mouse and arrow keys, and click on any station to see only the trips to and from that particular location.

In a nutshell, the data preparation was done in Python, where individual trips were solved using a shortest path algorithm to identify the likely waypoints for each tube trip. The data, over 700,000 trips’ worth, was then exported as a CSV file which is used in the Unity app to create the scene and animate the trips.

I remain quite impressed with Unity’s speed and flexibility - it actively handles over 3,500 animated objects on the fly. As such, it’s a very potent framework for developing dynamic and interactive visualisations.

Data Prep

The data is based on a 5% sample from November 2009, provided by Transport for London. The data preparation was done in Python and took several inputs that had originally been prepared from the TfL data by fellow group members, Katerina and Stelios:

  • The tube lines consisting of the stations;

  • The station names with the station coordinates;

  • The 700,000+ lines of trip data derived from the original TfL data.

The Python script creates a station index and a station coordinates list, which it then uses to create a weighted adjacency matrix of all stations on the network. The scipy.csgraph package is then used to return a solved shortest path array. Subsequently, the data file is imported and the start and end locations for each trip are resolved against the shortest path results, with the waypoints for each trip written to a new CSV file. A further CSV file containing each station’s name and coordinates is also generated.
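In compressed form, that step looks something like the sketch below (the variable names are hypothetical; scipy’s csgraph does the heavy lifting):

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

# station_links: assumed list of (station_i, station_j, distance) tuples
# derived from the tube-line and station-index inputs described above.
n = len(station_index)
adj = np.zeros((n, n))
for a, b, d in station_links:
    adj[a, b] = adj[b, a] = d  # weighted adjacency matrix; 0 = no direct link

dist, pred = shortest_path(adj, directed=False, return_predecessors=True)

def trip_waypoints(start, end):
    # Walk the predecessor matrix backwards to recover the likely
    # station-by-station waypoints for a trip.
    path = [end]
    while path[-1] != start:
        path.append(pred[start, path[-1]])
    return path[::-1]
```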

Unity Implementation.

The Unity implementation consists of several inter-related components. It starts to become complex, but here goes:

  • The 3d models for the London landmarks were found in the Sketchup 3D warehouse. Their materials were removed and they were imported to Unity in FBX format;

  • The outline for Greater London was prepared as a shapefile by fellow group member, Gianfranco, which he subsequently exported to FBX format via City Engine;

  • An empty game object is assigned with the main “Controller” script that provides a springboard to other scripts and regulates the timescale and object instancing throughout the visualisation. This script allows numerous variables to be set via the inspector panel, including the maximum and minimum time scales, the maximum number of non-disabled trip objects permitted at one time (to allow performance fine-tuning for slower computers), a dynamic time scaling parameter, and the assignment of object prefabs for the default DFP and non-DFP trip instances. Further options include a movie-mode with preset camera paths and a demo of station selections;

  • One of the challenges in the creation of the visualisation was the need to develop a method for handling time scaling dynamically to reduce computational bottlenecks during rush-hours, and also to speed up the visualisation for the hours between midnight and morning to reduce the duration of time with nothing happening in the visualisation. The Controller script is therefore designed to dynamically manage the time scale;

  • The controller script relies on a “FileReader” script to load the CSV files. The stations CSV file is used to instance new station symbols at setup time, each of which, in turn, contains a “LondonTransport” script file, with the purpose of spinning the station symbols. It also sets up behaviour so that when a station is clicked, the station name is instanced (“stationText” script) above the station, and trips only to and from that station are then displayed via the Controller script. The FileReader script also reads the main trip data CSV file, and loads all trips at setup time into a dictionary of trip data objects that include the starting and ending station data, as well as the waypoint path earlier generated by the Python script. The trip data objects are then sorted into a “minute” dictionary that keeps track of which trips are instanced at each point in time. The minute dictionary is in turn used by the Controller script for instancing trip objects.

  • The “Passenger” and “SelectedPassenger” objects and accompanying scripts are responsible for governing the appearance and behaviour of each trip instance. They are kept as simple as possible, since thousands of these scripts can be active at any one point in time, effectively only containing information for setting up the trip interpolation based on Bob Berkebile’s iTween for Unity. iTween is equipped with easing and spline path parameters, thereby simplifying the amount of complexity required for advanced interpolation. The trip instance scripts destroy the object once it arrives at the destination.

  • Other scripts were written for managing the cameras, camera navigation settings, motion paths for the movie mode camera, rotation of the London Eye model, and for setting up the GUI.

Visual and Interaction Design.

It was decided to keep the London context minimal with only selected iconic landmarks included for the purpose of providing orientation, and a day-night lighting cycle to give a sense of time. Disabled Freedom Pass journeys consist of a prefab object with a noticeable bright orange trail and particle emitter, in contrast to other trips, which consist of simple prefab objects with a thin white trail renderer and no unnecessarily complex shaders or shadows due to the large quantities of these objects. The trip objects are randomly spaced across four different heights, giving a more accurate depiction of the busyness of a route, as well as a more three-dimensional representation of the flows.

Interactivity is encouraged through the use of keyboard navigation controls for the cameras, as well as a mouse “look around” functionality, switchable cameras, and the ability to take screenshots.

london london underground visualisation data visualization oyster card disability accessibility Unity3D python

Adventures visualising the visualisation landscape

I’ve spent the last several weeks investigating options for visualising spatial information. Let’s just say it’s been surprisingly frustrating and rewarding at the same time. In a perfect world I’m looking for a visualisation package that is tightly integrated with code, allowing for dynamic user input and on-the-fly visualisation of computational processes as they unfold.

Processing

My journey started with the ol’ classic: Processing. But it’s a bit of a love-hate relationship. Processing is a really good way to learn how to program while churning out surprisingly powerful visualisations, but its messy (Java) syntax and somewhat constrained ecosystem are what originally sent me searching for greener pastures.

Having recently learned Python while working on a GIS assignment (Python + ArcGIS’ arcpy), I found it incredibly clean, well documented, and productive. It is really a fantastic programming language for solving complicated challenges because it lets you focus more on how to solve the challenge rather than how to implement a solution within the constraints typical of a programming language. Presumably for this reason, it is increasingly being adopted by the science community (think numpy, scipy, matplotlib, etc) and IMHO is the one to watch.

Processing.py

I was unable to find anything remotely like a Processing equivalent in Python, but did find Processing.py, which is effectively a Python port of the Processing language. You have access to all of the standard Processing functions and libraries, and you run the code by dragging and dropping your file onto an app that uses Jython to translate the Python into Java. But while this is a great solution for basic sketches and experimentation, the workflow tends to start feeling constrained for anything more complex.

Blender

Looking for something more directly coupled to Python, the next port of call was Blender. I’m still not sure what to make of it. On the plus side, it offers a tight synthesis between Python and visualisation. But the API is quite difficult to understand and the GUI is hugely unintuitive and complicated. And even after playing around with it for a while, the right mouse-click for selecting objects remains the most incredibly unintuitive bit of interface design I’ve ever encountered. I did manage to use Python to import a CSV file with more than 700,000 trip events and to convert a portion of these into animated objects, but the effort-to-reward ratio sent me packing. Whereas Blender leaves me somewhat dazed, I suspect that if one has several months to really learn and tinker with it, it could become a very potent tool that could conceivably be fully customised.
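For what it’s worth, the basic pattern was roughly as follows (a minimal sketch; the CSV layout is hypothetical, and bpy call signatures vary between Blender versions):

```python
import csv
import bpy

# Each (hypothetical) row: start_x, start_y, start_frame, end_x, end_y, end_frame
with open('/path/to/trips.csv') as f:
    for row in csv.reader(f):
        sx, sy, sf, ex, ey, ef = map(float, row)
        # note: 'radius' was named 'size' in older Blender releases
        bpy.ops.mesh.primitive_uv_sphere_add(radius=0.05, location=(sx, sy, 0.0))
        obj = bpy.context.active_object
        obj.keyframe_insert(data_path="location", frame=int(sf))
        obj.location = (ex, ey, 0.0)
        obj.keyframe_insert(data_path="location", frame=int(ef))
```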

Rhino 3d

Next, I went knocking on Rhino’s door. What drew me to Rhino is that it’s a well-respected package in the architecture community, and is often combined with Grasshopper to allow the creation of generative design workflows. Furthermore, it has an extensive API which now interfaces with Python. I was again able to create a Python script to import a CSV file with data, and to animate some of these objects in space, but I did encounter some issues. Firstly, I discovered that some Python modules didn’t work in Rhino. The reason is that it actually uses “IronPython”, which is integrated with Microsoft’s .NET…and although most Python modules have been incorporated, some, such as the CSV reader module, are conspicuously absent. The other issue I ran into was that trying to move lots of objects in real-time was slow, presumably because Rhino is designed for modelling rather than real-time animation and visualisation. Presumably, if one were to learn the ins and outs of RhinoCommon, it would be possible to find ways to speed it up, but for now, it was time to move on to the next port of call.

Unity 3d

I’m not sure why I put Unity 3d off to last. I think I was wary of becoming fully invested in a proprietary eco-system, perhaps a sort of double-edged sword. Unity offers excellent real-time performance with user-interaction at its core. But whereas I had ideally been looking for something where I could write code independent of an application, and then just use the application as a sort of visualisation window, with Unity your code must be designed and structured from the ground-up as an integral part of the application. Unity offers three languages: C#, Javascript (not true Javascript, hence often called UnityScript), and Boo, which is touted as being similar to Python. Since I like Python, I started off with Boo. But as per its namesake…I was quickly scared off, because although its syntax is Python-like, it is otherwise a completely different language. (And very un-pythonic in that its documentation is sparse and confusing.) Although I have used Javascript fairly extensively in the past, I decided to go with C#, and I’m glad I did.

I was initially rather dreading the thought of learning to use C# but, in retrospect, this fear was unfounded. For those new to programming, C# is distinct from C as well as C++. It is understandable why these languages are all called “C something”, but for newcomers this gives too much of an impression that they are similar. They are actually quite distinct. A brief (and by no means accurate) history is that C came along first. It is a low-level and potent language. Then along came C++ (C “incremented”), which added things like object-oriented programming (i.e. classes). Although it effectively grew out of C, it is at the same time completely different due to the revised patterns introduced by object-oriented thinking. C#, the most recent addition, was created by Microsoft for its .NET framework. In spite of the “C” in the name, its syntax is very similar to Java… There is yet another widely known C language: “Objective-C”, which is used for Mac and iOS development. (And again, quite different.)

For anyone who has messed about with Processing (which is built on top of Java), the good news is that if you know rudimentary Java it is easy to master the C# syntax and logic. But whereas Processing does some of the work for you behind the scenes, in C# it is up to you to explicitly state whether variables are public or private, and it will also push you towards a stricter object-oriented programming approach.

What floored me about Unity 3d is that not only was it able to import over 700,000 trip events relatively quickly, but for the most part it was also capable of animating these in realtime as the trips unfold across time. In this case, the animated instance of a trip object is created when a trip starts, and is destroyed when the trip ends, so there can be multiple thousands of trips occurring at any point in time. (The trips are of tube journeys across the span of a week in London, and only when visualising the week-day rush hours did it start choking my system - and this because it only used one of the eight available processing cores.) Because Unity is a game-engine, it uses some clever behind-the-scenes technology to make sure that the playback is as smooth as possible. For this reason, it distinguishes between actual time and frame rates, and it will display animated objects in such a way that their movement remains smooth, even though the frame rate could be lagging or varying greatly. But the clincher is that Unity also allows the creation of a fully customised GUI, direct user interaction, and it ports to all different operating systems and mobile devices.

In summary:

  • If you want to do quick and (relatively) simple visualisation sketches, use Processing. (Or Processing.py.)

  • If you want to use pure python to create animations, use Blender.

  • If you want to do interactive form generation, use Rhino.

  • But if you want to blow your socks off with large amounts of data, smooth and real-time playback, and easy user interaction, then use Unity 3d.

programming C Python Boo Visualisation Unity3d

Christopher Alexander on design

Somewhat surprisingly, a lot of architects don’t know who Christopher Alexander is. Those who do are frequently uneasy with his message. Yet, Alexander is arguably one of the more influential architects to have emerged in the space of the last fifty-odd years…with a lasting impact on planners, urbanists, and, interestingly, computer programmers.

Perhaps the reason for this tension is that Alexander is quite upfront (as he demonstrates in this debate with Peter Eisenman) about his disdain for the pretentious fanfare that often enshrouds architecture schools. What he has zeroed in on is that this approach to architecture is frequently driven by highly abstract gobbledygook, to the point that buildings often alienate people and generate contextually inappropriate design.

Yet, it is important to note that whereas Alexander is against the architecture establishment, he is not necessarily anti-architecture. His thesis is essentially that we can’t use reductionist approaches to resolve complex design problems. And in this sense, he, along with Jane Jacobs, heralded the emergence of a complex systems view of architecture and urbanism.

As he originally argued in his book Notes on the Synthesis of Form (an adaptation of his PhD thesis), design requirements are all too often so varied and complex that we cannot fully define or even comprehend what they are. He therefore presents the case that modern-day architects are at a disadvantage compared to traditional cultures, where design was the result of incremental and iterative feedback processes that embedded successful design solutions in the form of time-tested traditions. He paints a picture of modern-day architects as somewhat befuddled by the immensity and complexity of design challenges. And hapless: frequently resorting to creative redefinition of the design problem as a means of skirting the real issues, or shoehorning the design context into half-baked reductionist constructs.

He proceeds to present a methodology for defining design problems in a ‘decomposable’ manner that makes design complexity more manageable. But while I find his analysis quite interesting, I also think that it was inevitable that his solutions would find more acceptance with computer programmers and planners than architects. I suspect the reason is that programmers and planners arguably deal with objectives that can be more clearly decomposed than many architecture problems, whereas architecture deals with a grey area between quantitative and qualitative considerations that, by their nature, are often too diffuse to be clearly decomposable, and where meaning is often most profound when the issues at hand are most subtle, interwoven, and complex.

And this is where some of Alexander’s supporters tend to throw the baby out with the bathwater. (One of whom, whose name I won’t mention here, referred to contemporary architecture as a “substitute religion of inhuman geometry”.) It becomes an issue when people automatically assume that contemporary architects and architecture are evil unless a historic style is summoned. This is probably driven by a general lack of awareness of the complexities that architects deal with, as well as the nature of the design process, contemporary materials, and contemporary construction techniques. And it is therefore important that we don’t discredit the value and difficulty of what contemporary practising architects do, nor the immense accomplishment of a design task well resolved.

While I think that Christopher Alexander’s thesis has to be applied judiciously at the scale of individual buildings, the core of his message holds true: incremental, bottom-up, and distributed problem-solving approaches are profoundly powerful and resilient. His approach becomes especially valid when doing participatory, community-focused work, and could ironically prove powerful in parametric design for complex design tasks. However, as encapsulated in his article A City Is Not A Tree, it is at the urban scale where his ideas become truly profound.

PS - If you are interested in Christopher Alexander’s technique for “decomposing” complex design challenges, then I recommend also taking a look at Mike Batty’s latest work The New Science of Cities, in which several chapters are devoted to exploring and expanding Alexander’s ideas as applied to planning and committee decision making processes. (Including the use of “Markovian Design Machines”.)

christopher alexander architecture design urbanism

Measuring Urban Land-Use Diversity

I’ve been exploring ways to map urban land-use diversity, which is a rather tough nut to crack. The problem is that it requires a reliable source of high quality data that accurately classifies each building occupancy as a type of land-use. After some digging around, I learned that the Ordnance Survey’s OS MasterMap Address Layer 2 contains the land-use classification for each address. While no data source is perfect, this is as close as it is likely to get.

The land-use classifications are based on the National Land Use Database, which provides not only the major land-use categories - e.g. “residential”, “retail” - but also some important sub-categories, like “Restaurants and Cafes” and, let’s not forget, “Public houses and bars”.

The interesting thing about land-use diversity is that it’s not necessarily something you can “see” when you’re looking at a city’s visible morphology. In other words, if looking at a map of a street, you don’t ordinarily know whether it’s perhaps a row of houses, or potentially a more diverse mix that maybe includes a bakery, corner store, daycare, and the like. This is important information to have, because by increasing the land-use mix you can end up with a dramatically different street dynamic due to significantly more pedestrian trips.

So I’ve created a Python script tool for ArcGIS that can take detailed building type classifications and calculate a spatial diversity index for each building location. It does this by searching for different land-use types within a user-selected distance from each building (based on network path distance) and will then calculate a mixed-use index score based on the Gini-Simpson diversity index. It will automatically detect the number of land-use “type” categories, which offers some flexibility and specificity on how to analyse the presence and degree of mixed-uses.

Based on my own observations, the tool is accurate in its representation of ‘happening’ and diverse mixed-use areas. But what I’ve found interesting is that surprisingly small search distances seem to work best. This is because they retain the granularity of the results. For example, a 50m search distance yields a more interesting and useful map than a 250m search distance.

The tool does not distinguish between diversity of mixed-uses in larger and smaller buildings, which seems counterintuitive when we think of mixed-use diversity in the fine-grained sense described by Jane Jacobs. I’ve therefore experimented with weighting the results with the inverse of the building’s footprint area, which yields results that focus in on fine-grained mixed-use hubs.
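In sketch form, the index at the heart of the tool is straightforward, and the inverse-footprint weighting just swaps out the unit weights (a simplified stand-in for the actual ArcGIS tool):

```python
from collections import Counter

def gini_simpson(landuse_codes, weights=None):
    # Gini-Simpson diversity: 1 - sum(p_i^2), where p_i is the (optionally
    # weighted) proportion of nearby addresses in land-use class i.
    weights = weights or [1.0] * len(landuse_codes)
    counts = Counter()
    for code, w in zip(landuse_codes, weights):
        counts[code] += w
    total = sum(counts.values())
    return 1.0 - sum((c / total) ** 2 for c in counts.values())

# Inverse-footprint weighting to emphasise fine-grained mixed-use hubs:
# gini_simpson(codes, weights=[1.0 / area for area in footprint_areas])
```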

I’ve also taken a look at whether the spatial diversity index results can be used as a weighted input parameter into network analysis tools, such as the Urban Network Analysis toolkit, which calculates centrality based on one of five centrality methods. The benefit of this approach is that it combines urban network analysis with a functional mixed-use analysis to reveal those little gems of urban diversity.

Now, about those “Public houses and bars” - I think the next step should be to compare urban diversity based on daytime and nighttime scenarios. This will, of course, require some ground-truthing…

urbanism cities diversity gis map

SkyCycle - reincarnation of failed ideas or future-thinking solution?

Yet another ambitious proposal from Norman Foster has hit the press. This time it’s an elevated cycling super-highway that connects the far corners of London to create what Exterior Architecture, the original proponents of the idea, have termed a cycling utopia. The proposed network is truly comprehensive and ambitious, with the configuration and access points optimised by network analysis courtesy of Space Syntax.

As a long-time commuting cyclist and advocate for cycling, I’ve followed the reaction with some interest, which has predominantly been positive. There appears, however, to be an increasingly concerned contingent comparing it to historically failed projects like London’s attempt at the Pedway. These concerns are driven by hard-learned lessons from the past, where freeways ripped cities apart, elevated pedestrian walkways separated people from the street-level diversity that fuels them, and mobility networks have often been engineered to bypass and exclude the less fortunate.

So, is the SkyCycle doomed before take-off? The reality is that this is one of those projects where success or failure will boil down to implementation. And given the team and the enthusiasm behind the proposal, I think it fair to give it a chance. Here are my reasons for saying this:

  • As the subway is for pedestrians, the SkyCycle would be for cyclists. Unlike many forms of elevated transportation infrastructure created in the past, the proposed network does not compromise existing street-level connectivity. In fact, in some ways it could enhance connectivity and create potential hubs of activity at the network access points. It is important to remember that bicycles can actually travel quite fast, and so, for longer distances, bicycles function less like pedestrians and more like vehicles. The strong-point for bikes is that it is so easy to transition between these two states.

  • Long-time cyclists generally don’t have issues cycling in traffic. But the reality is that for every cyclist out on the roads, there are many more that would cycle if they had a safe network that is A) connected, and B) segregated from motorised vehicles. In many cases, the safety benefits of segregated cycle ways are perceived, not actual, but it still matters tremendously to the cycling experience and its adoption by the greater public. Short of preventing all vehicular access to London (which I would happily support), there are realistically few other options for achieving the same sense of connectivity and perceived safety without creating dedicated routes above the hubbub of motorised vehicles.

  • As much as I enjoy cycling at street level, it can be a hugely frustrating experience on longer trips along congested routes for a few simple reasons: vehicle fumes; getting stuck behind buses and taxis; and having to stop for a traffic light (for what seems like) every few feet…which has a nasty habit of happening each time you’ve just gotten going again. For everyone short of bike couriers and lycra-clad alpha males, the SkyCycle would make cycling hands-down the fastest and most convenient way to get around London.

  • Now for the downside: the proposed concept is hugely ambitious, and for the same reasons could prove very difficult and costly to implement in reality. The engineering logistics are complex and, while certainly not impossible, will demand substantial design effort and expense to execute elegantly. The SkyCycle also presents other logistical challenges, such as the distance required for access-ramp gradients, access for ongoing maintenance, and safety on the SkyCycle network. These aren’t small issues, but I’m inclined to think the right people can solve them, given the necessary political and public support.

skycycle cycling infrastructure cities

Robert Moses - getting his fair share of posthumous attention

Robert Moses with Battery Bridge model - Wikimedia Commons.

Robert Moses continues to get his fair share of posthumous attention, and his role in NYC’s history is steadily being cemented, though not along the lines he would have liked. Untapped Cities recently ran an interesting blog post on 5 Things in NYC We Can Blame on Robert Moses, commemorating his birthday by revisiting some of his controversial legacy. Thankfully, not all of his projects came to fruition, notably the Lower Manhattan and Mid-Manhattan expressways.

What is interesting about Robert Moses is that he was never elected to public office. Yet, somehow, he was effectively the most powerful man in New York, a story told in the Pulitzer-winning book The Power Broker. He developed a peculiar form of institution called the “authority”, designed to exert great power with little accountability. This was exemplified by the Triborough Bridge Authority:

"Language in its Authority’s bond contracts and multi-year Commissioner appointments made it largely impervious to pressure from mayors and governors. While New York City and New York State were perpetually strapped for money, the bridge’s toll revenues amounted to tens of millions of dollars a year. The Authority was thus able to raise hundreds of millions of dollars by selling bonds, making it the only one in New York capable of funding large public construction projects. Toll revenues rose quickly as traffic on the bridges exceeded all projections. Rather than pay off the bonds Moses sought other toll projects to build, a cycle that would feed on itself." (Wikipedia entry on Robert Moses).

All of this meant that Robert was essentially free to do as he pleased - in some ways he saw New York as his sandbox - to the extent that it took the President of the United States to stop his insistence on the proposed Brooklyn Battery Bridge.

Robert was known for being particularly decisive and stubborn, and dreamt up grand planning schemes that often came at the expense of individual property owners and the poor. He did so even when slight changes might have spared the aggravation. He met his match, however, in Jane Jacobs, a story told in "Wrestling with Moses: How Jane Jacobs Took on New York’s Master Builder and Transformed the American City". Ironically, stoking the ire of Jane Jacobs could be seen as one of his greatest - though perhaps most unintended - legacies.

jane jacobs robert moses new york city planning cities

IBM 5 in 5 - smart city induced utopia?

Apparently, we are rapidly approaching the dawn of a technologically induced utopia - a promised land of sorts - the (not so) new claim being that all of our problems will soon be a thing of the past…because overcrowded buses and late pizza will be resolved by the smartening-up of cities.

According to IBM’s 5 in 5 predictions:

"…cities can be hard unforgiving places to live…cities are tough, because they require us to live on their terms, but in five years the tables will turn. With cities adapting to our terms, with cloud-based social feedback, crowdsourcing, and predictive analytics, we’ll shape our cities to our evolving wants and needs, comings and goings, and late-night pizza hankerings. By engaging citizens, city leaders will be able to respond directly to our needs, and dynamically allocate resources…and pizza."

No wonder there is a growing chorus of concerned critics objecting to the marketing language used by some proponents of “smart cities”: they sense that corporate interests and government departments may well try to leverage the new technologies from the top down, instead of the bottom-up approach preferred by an increasingly empowered citizenry.

There is a bit of truth mixed in with the hype. It is true that bottom-up crowdsourced information feedback allows the city to self-organise - to dynamically adapt to both new and old opportunities and challenges - and to develop a sort of self-regulating city ‘consciousness’. But a more nuanced view is necessary when it comes to forecasting the end of all evils through an implied top-down mastery of all things complex - a somewhat simplistic notion of government officials sitting behind giant screens in new control centres.

A more rounded perspective can be found in new books by Anthony Townsend and Mike Batty, with a solid review (of both books) available from the New Scientist.

cities technology modernism utopia predictions smart cities IBM5in5

Steve Rayner on Path Dependence in Cities

An interesting presentation by Steve Rayner in which he discusses the significance of Path Dependence and “lock-in”.

Path dependence explains how the set of decisions one faces for any given circumstance is limited by the decisions one has made in the past, even though past circumstances may no longer be relevant. (Wikipedia)

Steve explains that our cities are significantly shaped by past innovations and decisions…such as the location of streets, the invention of the car, and technologies like electric light, flushing toilets, and elevators.

Lock-in through path dependence can end up causing cities and processes to work in ways that are no longer efficient or sensible. Some kind of mechanism is necessary to allow for flexibility or a radical break in order to escape from the status quo. This is largely what Steve’s Flexible City website is about.
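
Lock-in is surprisingly easy to reproduce in a toy model. The Pólya-urn sketch below (my own illustration, not from Steve’s presentation) shows how early, essentially random choices get reinforced until one option dominates, even when neither is intrinsically better:

```python
import random

def polya_urn(steps=10_000, seed=0):
    """Each draw adds another ball of the colour drawn, so whichever option
    gains an early random lead becomes ever more likely to be chosen again."""
    rng = random.Random(seed)
    a, b = 1, 1  # two equally good competing options (layouts, standards...)
    for _ in range(steps):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)

# Different runs lock in at wildly different shares - history, not merit:
print([round(polya_urn(seed=s), 2) for s in range(5)])
```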

A particularly amusing example in Steve’s presentation is that the size of the space shuttle’s rocket boosters was ultimately determined by the width of a horse’s ass…see the video for details.

cities technology invention evolution flexible city path dependency

Crowded Cambridge train departing from King’s Cross

A Cambridge-bound train at King’s Cross, delayed due to high winds and a tree downed on the track north of Cambridge.

The Cambridge - London commuter trains are busy at the best of times, and things quickly become interesting when a train runs late. Add some gales and a tree downed on the track north of Cambridge, and you have the makings of some interesting crowd dynamics. Having observed the madness once before, I thought I’d skip this train and grab the next one instead…which, as these things go, was only a short while later and was significantly less full.

Notice how the crowd liquefies as soon as the train’s arrival platform is announced, and how slickly the crowds auto-align to the train doors.

train crowd cambridge dynamics kings cross