Using localised network complexity as a graph centrality measure

I’ve been running some experiments with graph centrality indices in an attempt to find an effective method for describing Jane Jacobs’ thinking behind her argument for ‘the need for small blocks’.

Centrality indices are a way of identifying the most important edges (links) or vertices (nodes) in a graph (network), and can tell you things like which people are the most influential in a social network, or which roads are most likely to experience traffic and liveliness. In the case of street networks, we are ordinarily dealing with ‘planar’ (flat) graphs consisting of intersections and streets. The graph can be represented in its ‘primal’ form, where the intersections are nodes and the streets are links, or, as is the case with Space Syntax, in its ‘dual’ form, where the streets are represented as nodes. There are also different ways to describe what exactly constitutes a ‘street’: traditional implementations of Space Syntax, for example, define streets by unobstructed lines of sight, i.e. straightness.

For the sake of intuition and experimentation, I am here using the primal representation, where intersections are nodes connected by distance-weighted street links. Since I am trying to zero in on Jacobs’ discussion and examples, I’ve created a little test scenario consisting of two clusters of Manhattan grids, connected by two longer streets, in order to sift out what works better at global versus local scales. Some of the block clusters also have additional streets at the mid-points of the blocks, in order to identify which centrality indices better align with Jacobs’ discussion of smaller street blocks in the context of her Rockefeller Plaza example.

Some better-known examples of centrality indices include ‘closeness’ centrality, which effectively tells you how close a particular node is to all other nodes, and ‘betweenness’ centrality, which tells you how many times a particular node appears on the shortest paths between all other node pairs. At first glance, what Jacobs was describing sounds a little like betweenness, because she mentions that many different paths need to come together to support pools of economic use.
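
For the curious, both indices are essentially one-liners in a graph library. I won’t go into which implementation I used, so here is just a minimal networkx sketch on a distance-weighted primal graph (the toy edges are purely illustrative):

    import networkx as nx

    # primal street graph: intersections as nodes, distance-weighted streets as links
    G = nx.Graph()
    G.add_edge('a', 'b', weight=120.0)  # a 120 m street segment
    G.add_edge('b', 'c', weight=80.0)
    G.add_edge('a', 'c', weight=150.0)

    # how close each intersection is to all others (shorter distances = higher score)
    closeness = nx.closeness_centrality(G, distance='weight')

    # how often each intersection sits on the shortest paths between other pairs
    betweenness = nx.betweenness_centrality(G, weight='weight')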

Yet, betweenness doesn’t really tell the whole story. On a global level, betweenness paints an accurate picture of where the most commonly used streets are likely to be, but this includes some isolated, longer streets that are centrally located between all other nodes, yet in some cases unlikely to be successful with pedestrians.

A different picture emerges with localised betweenness, for which I’ve used a 600m (roughly 3/8 mile) walking radius. An algorithm cycles through all nodes in the graph, figures out which other nodes are within a distance of 600m, and then runs a localised betweenness on the resultant localised sub-graphs.
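
A minimal sketch of this procedure, again assuming networkx (ego_graph does the heavy lifting of extracting each 600m sub-graph):

    import networkx as nx

    def localised_betweenness(G, radius=600):
        # for each node, extract the sub-graph reachable within `radius` metres,
        # run betweenness on it, and keep the node's own score
        scores = {}
        for n in G.nodes:
            sub = nx.ego_graph(G, n, radius=radius, distance='weight')
            bc = nx.betweenness_centrality(sub, weight='weight')
            scores[n] = bc[n]
        return scores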

In the picture above, the right-hand cluster’s results resemble something like a ‘closeness’ centrality, whereas the left-hand cluster shows a similar but slightly higher average score due to the additional origins and destinations. However, the mid-block streets rank quite low because they aren’t technically on the most direct routes.

Looking for a way to better identify these mid-block streets, I then tried the Alpha (Meshedness), Beta, and Gamma indices. Among themselves, they yield similar results and roughly describe which parts of the graph are more densely connected than others. But, while definitely closer to what I’m trying to measure, they demonstrate some idiosyncrasies, such as more centralised nodes scoring lower than less centralised nodes.
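
For reference, these three indices are simple ratios of the edge count e to the vertex count v. The textbook forms for a connected planar graph are below; here they would presumably be computed per localised sub-graph:

    def alpha_beta_gamma(v, e):
        # connectivity indices for a connected planar graph with v nodes and e edges
        # (v must be at least 3 for gamma and alpha to be defined)
        beta = e / v                        # average edges per node
        gamma = e / (3 * (v - 2))           # edges as a share of the planar maximum
        alpha = (e - v + 1) / (2 * v - 5)   # cycles as a share of the planar maximum
        return alpha, beta, gamma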

Returning to Jacobs’ argument: what she was describing can be thought of as a form of network porosity and route-choice complexity. In other words, a street network that allows for multiple interweaving route-choice combinations that can support maximal access to local streets. This is a natural fit for an information entropy measure, so I experimented with a form of (Shannon’s) information entropy to measure the amount of route-choice information contained in the localised sub-graphs. Without going into the details, the index calculates an information entropy based on the number of route choices, assuming that all route choices are equally probable.
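
I’ll spare you the full formulation, but the equal-probability case collapses to a particularly simple form: with k equally probable route choices, H = -Σ p·log₂(p) with p = 1/k reduces to log₂(k).

    import math

    def equal_choice_entropy(k):
        # Shannon entropy over k equally probable route choices:
        # H = -sum(p * log2(p)) with p = 1/k, which reduces to log2(k)
        return math.log2(k) if k > 0 else 0.0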

The results are starting to get closer to the notions of porosity and complexity, but since this formula is applied to the localised sub-graph in a blanket-like fashion, it doesn’t necessarily capture some of the routing subtleties of the network emanating outwards from each of the respective nodes. This isn’t immediately obvious in the Manhattan-grid test case that I’ve been using, but is subtly present in more varied graphs. I therefore tried another approach: calculating the outwards route-choice probabilities (using a breadth-first outwards search), and then using the resulting route choices and their consequent probabilities to compute the node’s information entropy index.
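
A rough sketch of the idea (the details, such as how backtracking and revisits are handled, are my own assumptions rather than the exact implementation): walk outwards breadth-first, split the probability mass equally among the onward links at every junction, and take the entropy over the mass landing on each route at the point where it terminates.

    import math
    from collections import deque

    def outward_entropy(G, source, radius=600):
        # breadth-first outwards walk from `source`: at each junction the incoming
        # probability is split equally among the onward links still within `radius`;
        # entropy is taken over the probability mass of each terminated route
        probs = []
        queue = deque([(source, None, 1.0, 0.0)])  # node, previous, probability, distance
        while queue:
            node, prev, p, dist = queue.popleft()
            onward = [(nbr, data['weight']) for nbr, data in G[node].items()
                      if nbr != prev and dist + data['weight'] <= radius]
            if not onward:
                probs.append(p)  # this route terminates within the radius
                continue
            for nbr, w in onward:
                queue.append((nbr, node, p / len(onward), dist + w))
        return -sum(q * math.log2(q) for q in probs if q > 0)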

Out of these and several other approaches I tried, this one is my favourite, because it results in a granular description of localised route complexity that identifies the Rockefeller Plaza type of example used by Jacobs. More generally, information entropy is the basis of several diversity measures used to gauge diversity in systems ranging from economics to ecosystems. It is perhaps fitting that such indices are useful for describing Jacobs’ conception of diversity as a key driver of cities as complex systems.

Visualising Disabled Freedom Pass trips on the London Underground

Disabled Freedom Pass Card journeys on the London Underground from Gareth Simons on Vimeo.

This video is a follow-up to my earlier post. It is the completed visualisation of Disabled Freedom Pass trips on the London Underground network. The bright orange lines represent the Disabled Freedom Pass (DFP) trips, whereas the white lines represent all other Oyster card users. In the hands-on application version, the user can navigate the scene in realtime with the mouse and arrow keys, and click on any station to see only the trips to and from that particular location.

In a nutshell, the data preparation was done in Python, where individual trips were solved with a shortest-path algorithm to identify the likely waypoints for each tube trip. The data, over 700,000 trips’ worth, was then exported as a CSV file, which is used in the Unity app to create the scene and animate the trips.

I remain quite impressed with Unity’s speed and flexibility - it actively handles over 3,500 animated objects on-the-fly. As such, it’s a very potent framework for developing dynamic and interactive visualisations.

Data Prep

The data is based on a 5% sample from November 2009, provided by Transport for London (TFL). The data preparation was done in Python and took several inputs that had originally been prepared from the TFL data by fellow group members, Katerina and Stelios:

  • The tube lines consisting of the stations;

  • The station names with the station coordinates;

  • The 700,000+ lines of trip data derived from the original TFL data.

The Python script creates a station index and a station coordinates list, which it then uses to create a weighted adjacency matrix of all stations on the network. The scipy.csgraph package is then used to return a solved shortest-path array. Subsequently, the data file is imported, the start and end locations for each trip are resolved against the shortest-path results, and the waypoints for each trip are written to a new CSV file. A further CSV file containing each station’s name and coordinates is also generated.
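
For a sense of what that step looks like, here is a hedged sketch (names like `adjacency` are illustrative; it stands in for the weighted station-to-station matrix built above):

    from scipy.sparse.csgraph import shortest_path

    # solve all station-to-station shortest paths in one pass; `predecessors`
    # lets us reconstruct the waypoints of any individual trip afterwards
    dist_matrix, predecessors = shortest_path(adjacency, directed=False,
                                              return_predecessors=True)

    def waypoints(start, end):
        # walk the predecessor matrix backwards from `end` to `start`
        # (assumes the two stations are connected)
        path = [end]
        while path[-1] != start:
            path.append(predecessors[start, path[-1]])
        return list(reversed(path))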

Unity Implementation

The Unity implementation consists of several inter-related components. It starts to become complex, but here goes:

  • The 3d models for the London landmarks were found in the SketchUp 3D Warehouse. Their materials were removed and they were imported into Unity in FBX format;

  • The outline for Greater London was prepared as a shapefile by fellow group member, Gianfranco, which he subsequently exported to FBX format via CityEngine;

  • An empty game object is assigned the main “Controller” script, which provides a springboard to other scripts and regulates the timescale and object instancing throughout the visualisation. This script allows numerous variables to be set via the inspector panel, including the maximum and minimum time scales, the maximum number of non-DFP trip objects permitted at one time (to allow performance fine-tuning for slower computers), a dynamic time scaling parameter, and the assignment of object prefabs for the default DFP and non-DFP trip instances. Further options include a movie-mode with preset camera paths and a demo of station selections;

  • One of the challenges in creating the visualisation was the need to handle time scaling dynamically: to reduce computational bottlenecks during rush hours, and to speed up the visualisation between midnight and morning, when nothing much is happening. The Controller script is therefore designed to dynamically manage the time scale;

  • The Controller script relies on a “FileReader” script to load the CSV files. The stations CSV file is used to instance new station symbols at setup time, each of which, in turn, contains a “LondonTransport” script for spinning the station symbols. It also sets up behaviour so that when a station is clicked, the station name is instanced (“stationText” script) above the station, and only trips to and from that station are then displayed via the Controller script. The FileReader script also reads the main trip data CSV file, and loads all trips at setup time into a dictionary of trip data objects that include the starting and ending station data, as well as the waypoint path generated earlier by the Python script. The trip data objects are then sorted into a “minute” dictionary that keeps track of which trips are instanced at each point in time (see the sketch after this list). The minute dictionary is in turn used by the Controller script for instancing trip objects.

  • The “Passenger” and “SelectedPassenger” objects and accompanying scripts are responsible for governing the appearance and behaviour of each trip instance. They are kept as simple as possible, since thousands of these scripts can be active at any one point in time, effectively containing only the information for setting up the trip interpolation based on Bob Berkebile’s iTween for Unity. iTween comes equipped with easing and spline path parameters, reducing the complexity required for advanced interpolation. The trip instance scripts destroy the object once it arrives at its destination.

  • Other scripts were written for managing the cameras, camera navigation settings, motion paths for the movie mode camera, rotation of the London Eye model, and for setting up the GUI.
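
As an aside, the “minute” dictionary mentioned above is simple bucketing: index the trips by departure minute so that, at each simulated minute, the Controller only has to look up the trips starting right then. The real implementation is C# inside the FileReader script; this Python-flavoured sketch (with hypothetical field names) just shows the shape of the idea:

    from collections import defaultdict

    # index trips by their departure minute; at playback time, the controller
    # instances exactly the bucket for the current simulated minute
    minute_index = defaultdict(list)
    for trip in trips:  # hypothetical trip objects parsed from the waypoints CSV
        minute_index[trip.start_minute].append(trip)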

Visual and Interaction Design

It was decided to keep the London context minimal, with only selected iconic landmarks included for the purpose of providing orientation, and a day-night lighting cycle to give a sense of time. Disabled Freedom Pass journeys consist of a prefab object with a noticeable bright orange trail and particle emitter, in contrast to other trips, which consist of simple prefab objects with a thin white trail renderer and no unnecessarily complex shaders or shadows, given the large quantities of these objects. The trip objects are randomly spaced across four different heights, giving a more accurate depiction of the busyness of a route, as well as a more three-dimensional representation of the flows.

Interactivity is encouraged through the use of keyboard navigation controls for the cameras, as well as a mouse “look around” functionality, switchable cameras, and the ability to take screenshots.

Adventures visualising the visualisation landscape

I’ve spent the last several weeks investigating options for visualising spatial information. Let’s just say it’s been surprisingly frustrating and rewarding at the same time. In a perfect world, I’m looking for a visualisation package that is tightly integrated with code, allowing for dynamic user input and on-the-fly visualisation of computational processes as they unfold.

Processing

My journey started with the ol’ classic: Processing. But it’s a bit of a love-hate relationship. Processing is a really good way to learn how to program while churning out surprisingly powerful visualisations, but its messy (Java) syntax and somewhat constrained ecosystem are what originally sent me searching for greener pastures.

Having recently learned Python while working on a GIS assignment (Python + ArcGIS’ arcpy), I found it incredibly clean, well documented, and productive. It is really a fantastic programming language for solving complicated challenges, because it lets you focus more on how to solve the challenge rather than on how to implement a solution within the constraints typical of a programming language. Presumably for this reason, it is increasingly being adopted by the science community (think numpy, scipy, matplotlib, etc.) and IMHO is the one to watch.

Processing.py

I was unable to find anything remotely like a Processing equivalent in Python, but did find Processing.py, which is effectively a Python port of the Processing language. You have access to all of the standard Processing functions and libraries, and you run the code by dragging and dropping your file onto an app that uses Jython to translate the Python into Java. But while this is a great solution for basic sketches and experimentation, the workflow starts to feel constrained for anything more complex.

Blender

Looking for something more directly coupled to Python, the next port of call was Blender. I’m still not sure what to make of it. On the plus side, it offers a tight synthesis between Python and visualisation. But the API is quite difficult to understand, and the GUI is hugely unintuitive and complicated. Even after playing around with it for a while, the right mouse-click for selecting objects remains the most incredibly unintuitive bit of interface design I’ve ever encountered. I did manage to use Python to import a CSV file with more than 700,000 trip events and to convert a portion of these into animated objects, but the effort-to-reward ratio sent me packing. Whereas Blender leaves me somewhat dazed, I suspect that if one has several months to really learn and tinker with it, it could become a very potent tool that could conceivably be fully customised.

Rhino 3d

Next, I went knocking on Rhino’s door. What drew me to Rhino is that it’s a well-respected package in the architecture community, often combined with Grasshopper to allow the creation of generative design workflows. Furthermore, it has an extensive API which now interfaces with Python. I was again able to create a Python script to import a CSV file with data and to animate some of these objects in space, but I did encounter some issues. Firstly, I discovered that some Python modules didn’t work in Rhino. The reason is that it actually uses “IronPython”, which is integrated with Microsoft’s .NET…and although most Python modules have been incorporated, some, such as the CSV reader module, are conspicuously absent. The other issue I ran into was that moving lots of objects in real-time was slow, presumably because Rhino is designed for modelling rather than real-time animation and visualisation. Presumably, if one were to learn the ins and outs of RhinoCommon, it would be possible to find ways to speed it up, but for now, it was time to move on to the next port of call.
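
(For anyone hitting the same wall: for simple, comma-only files, a naive fallback sidesteps the csv module entirely; quoted fields with embedded commas would need real parsing.)

    # naive CSV fallback for environments without the csv module (e.g. IronPython);
    # assumes plain comma-separated values with no quoted fields
    rows = []
    with open('trips.csv') as f:
        for line in f:
            rows.append(line.rstrip('\r\n').split(','))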

Unity 3d

I’m not sure why I put Unity 3d off until last. I think I was wary of becoming fully invested in a proprietary ecosystem, perhaps a sort of double-edged sword. Unity offers excellent real-time performance with user interaction at its core. But whereas I had ideally been looking for something where I could write code independently of an application, and then just use the application as a sort of visualisation window, with Unity your code must be designed and structured from the ground up as an integral part of the application. Unity offers three languages: C#, JavaScript (not true JavaScript, hence often called UnityScript), and Boo, which is touted as being similar to Python. Since I like Python, I started off with Boo. But as per its namesake…I was quickly scared off, because although its syntax is Python-like, it is otherwise a completely different language. (And very un-pythonic in that its documentation is sparse and confusing.) Although I have used JavaScript fairly extensively in the past, I decided to go with C#, and I’m glad I did.

I was initially rather dreading the thought of learning C# but, in retrospect, this fear was unfounded. For those new to programming: C# is distinct from C as well as C++. Because these languages are all called “C something”, newcomers can get the impression that they are all quite similar; they are actually quite distinct. A brief (and by no means rigorous) history is that C came along first: a low-level and potent language. Then along came C++ (C “incremented”), which added things like object-oriented programming (i.e. classes). Although it effectively grew out of C, it is at the same time completely different due to the revised patterns introduced by object-oriented thinking. C#, the most recent addition, was created by Microsoft for its .NET framework. In spite of the “C” in the name, its syntax is actually very similar to Java… There is yet another widely known C language: “Objective-C”, which is used for Mac and iOS development. (And, again, quite different.)

For anyone who has messed about with Processing (which is built on top of Java), the good news is that if you know rudimentary Java it is easy to pick up the C# syntax and logic. But whereas Processing does some of the work for you behind the scenes, in C# it is up to you to explicitly state whether variables are public or private, and it will also push you towards a stricter object-oriented programming approach.

What floored me about Unity 3d is that not only was it able to import over 700,000 trip events relatively quickly, but for the most part it was also capable of animating these in realtime as the trips unfold across time. In this case, the animated instance of a trip object is created when a trip starts and destroyed when the trip ends, so there can be multiple thousands of trips occurring at any point in time. (The trips are tube journeys across the span of a week in London, and only when visualising the weekday rush hours did it start choking my system - and this was because it only used one of the eight available processing cores.) Because Unity is a game engine, it uses some clever behind-the-scenes technology to make sure that playback is as smooth as possible. For this reason, it distinguishes between actual time and frame rates, and it will display animated objects in such a way that their movement remains smooth, even when the frame rate lags or varies greatly. But the clincher is that Unity also allows the creation of a fully customised GUI, direct user interaction, and it ports to all the different operating systems and mobile devices.

In summary:

  • If you want to do quick and (relatively) simple visualisation sketches, use Processing. (Or Processing.py.)

  • If you want to use pure python to create animations, use Blender.

  • If you want to do interactive form generation, use Rhino.

  • But if you want to blow your socks off with large amounts of data, smooth and real-time playback, and easy user interaction, then use Unity 3d.

Christopher Alexander on design

A lot of architects don’t know who Christopher Alexander is. And those who do are frequently uneasy with his message. Yet Alexander is arguably one of the more influential architects to have emerged in the last fifty-odd years…with a lasting impact on planners, urbanists, and, interestingly, computer programmers.

The reason for this tension is that Alexander is quite upfront (as he demonstrates in this debate with Peter Eisenman) about his disdain for the pretentious fanfare that often enshrouds architecture schools. What he has zeroed in on is that this approach to architecture is frequently driven by highly abstract gobbledygook, to the point that buildings often alienate people and generate contextually inappropriate design.

Yet, it is important to note that whereas Alexander is against the architecture establishment, he is not necessarily anti-architecture. His thesis is essentially that we can’t use reductionist approaches to resolve complex design problems. And in this sense he, along with Jane Jacobs, heralded the emergence of a complex-systems view of architecture and urbanism.

As he originally argued in his book Notes on the Synthesis of Form (an adaptation of his PhD thesis), all too often design requirements are so varied and complex that we cannot fully define or even comprehend what they are. He therefore presents the case that modern-day architects are at a disadvantage compared to traditional cultures, where design was the result of incremental and iterative feedback processes that embedded successful design solutions in the form of time-tested traditions. He paints a picture of modern-day architects as somewhat befuddled by the immensity and complexity of design challenges, and hapless: frequently resorting to creative redefinition of the design problem as a means of skirting the real issues, or shoehorning the design context into half-baked reductionist constructs.

He proceeds to present a methodology for defining design problems in a ‘decomposable’ manner that makes design complexity more manageable. But while I find his analysis quite interesting, I also think it was inevitable that his solutions would find more acceptance with computer programmers and planners than with architects. I suspect the reason is that programmers and planners arguably deal with objectives that can be more clearly decomposed than many architecture problems, whereas architecture deals with a grey area between quantitative and qualitative considerations that, by their nature, are often too diffuse to be clearly decomposable, and where meaning is often most profound when the issues at hand are most subtle, interwoven, and complex.

And this is where some of Alexander’s supporters tend to throw the baby out with the bathwater. (One of whom, whose name I won’t mention here, referred to contemporary architecture as a “substitute religion of inhuman geometry”.) It becomes an issue when people automatically assume that contemporary architects and architecture are evil unless a historic style is summoned. This is probably driven by a general lack of awareness of the complexities that architects deal with, as well as the nature of the design process, contemporary materials, and contemporary construction techniques. And it is therefore important that we don’t discredit the value and difficulty of what contemporary practising architects do, nor the immense accomplishment of a design task well resolved.

While I think that Christopher Alexander’s thesis has to be applied judiciously at the scale of individual buildings, the core of his message holds true: incremental, bottom-up, and distributed problem-solving approaches are profoundly powerful and resilient. His approach becomes especially valid when doing participatory, community-focused work, and could ironically prove powerful in parametric design for complex design tasks. However, as encapsulated in his article A City Is Not A Tree, it is at the urban scale where his ideas become truly profound.

PS - If you are interested in Christopher Alexander’s technique for “decomposing” complex design challenges, then I recommend also taking a look at Mike Batty’s latest work A New Science of Cities, in which several chapters are devoted to exploring and expanding Alexander’s ideas as applied to planning and committee decision making processes. (Including the use of “Markovian Design Machines”.)

Measuring Urban Land-Use Diversity

I’ve been exploring ways to map urban land-use diversity, which is a rather tough nut to crack. The problem is that it requires a reliable source of high-quality data that accurately classifies each building occupancy as a type of land-use. After some digging around, I learned that the Ordnance Survey’s OS MasterMap Address Layer 2 contains the land-use classification for each address. While no data source is perfect, this is as close as one is likely to get.

The land-use classifications are based on the National Land Use Database, which provides not only the major land-use categories - e.g. “residential”, “retail” - but also some important sub-categories, like “Restaurants and Cafes” and, let’s not forget, “Public houses and bars”.

The interesting thing about land-use diversity is that it’s not necessarily something you can “see” when looking at a city’s visible morphology. In other words, looking at a map of a street, you don’t ordinarily know whether it’s a row of houses or a more diverse mix that perhaps includes a bakery, corner store, daycare, and the like. This is important information to have, because by increasing the land-use mix you can end up with a dramatically different street dynamic due to significantly more pedestrian trips.

So I’ve created a Python script tool for ArcGIS that takes detailed building-type classifications and calculates a spatial diversity index for each building location. It does this by searching for different land-use types within a user-selected distance from each building (based on network path distance), and then calculates a mixed-use index score based on the Gini-Simpson diversity index. It automatically detects the number of land-use “type” categories, which offers some flexibility and specificity in how to analyse the presence and degree of mixed uses.
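
The index itself is straightforward: 1 minus the sum of squared land-use shares, i.e. the probability that two addresses drawn at random within the search distance have different uses. A minimal sketch (the categories below are illustrative):

    from collections import Counter

    def gini_simpson(landuse_types):
        # Gini-Simpson diversity: 1 - sum(p_i^2), where p_i is the share of
        # land-use class i among the addresses within the search distance
        counts = Counter(landuse_types)
        total = sum(counts.values())
        return 1.0 - sum((c / total) ** 2 for c in counts.values())

    gini_simpson(['residential'] * 8)                           # 0.0: no mix at all
    gini_simpson(['residential', 'retail', 'cafe', 'pub'] * 2)  # 0.75: even four-way mix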

Based on my own observations, the tool is accurate in its representation of ‘happening’ and diverse mixed-use areas. But what I’ve found interesting is that surprisingly small search distances seem to work best. This is because they retain the granularity of the results. For example, a 50m search distance yields a more interesting and useful map than a 250m search distance.

The tool does not distinguish between diversity of mixed uses in larger and smaller buildings, which seems counterintuitive when we think of mixed-use diversity in the fine-grained sense described by Jane Jacobs. I’ve therefore experimented with weighting the results with the inverse of the building’s footprint area, which yields results that focus in on fine-grained mixed-use hubs.
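
As a sketch of that weighting (this is the simplest-possible version, not a standard formula):

    def weighted_diversity(score, footprint_area):
        # inverse-area weighting: smaller buildings count for more, pulling the
        # results towards fine-grained mixed-use hubs
        return score / footprint_area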

I’ve also taken a look at whether the spatial diversity index results can be used as a weighted input parameter into network analysis tools, such as the Urban Network Analysis toolkit, which calculates centrality based on one of five centrality methods. The benefit of this approach is that it combines urban network analysis with a functional mixed-use analysis to reveal those little gems of urban diversity.

Now, about those “Public houses and bars” - I think the next step should be to compare urban diversity based on daytime and nighttime scenarios. This will, of course, require some ground-truthing…

SkyCycle - reincarnation of failed ideas or future-thinking solution?

Yet another ambitious proposal from Norman Foster has hit the press. This time it’s an elevated cycling super-highway connecting the far corners of London to create what has been termed by Exterior Architecture, the original proponents of the idea, a cycling utopia. The proposed network is truly comprehensive and ambitious, with the configuration and access points optimised by network analysis, courtesy of Space Syntax.

As a long-time commuting cyclist and advocate for cycling, I’ve followed the reaction, which has predominantly been very positive, with some interest. There appears, however, to be an increasingly concerned contingent comparing it to historically failed projects like London’s attempt at the Pedway. These concerns are driven by hard-learned lessons from the past, where freeways ripped cities apart, elevated pedestrian walkways separated people from the street-level diversity that fuels them, and mobility networks were often engineered to bypass and exclude the less fortunate.

So, is the SkyCycle doomed before take-off? The reality is that this is one of those projects where success or failure will boil down to implementation. And given the team and the enthusiasm behind the proposal, I think it fair to give it a chance. Here are my reasons for saying this:

  • Unlike various forms of elevated transportation infrastructure created in the past, the proposed network does not compromise existing street-level connectivity. In fact, it enhances connectivity and creates potential hubs of activity at the network access points, similar to the vibrancy around transit stations. For the same general reasons, the network does not come at the expense of normal street-level cycle routes and access, but serves to enhance them.

  • Long-time cyclists (including yours truly) generally don’t have issues cycling in traffic. But the reality is that for every cyclist out on the roads, there are many more who would cycle if they had a safe network that is A) connected, and B) segregated from motorised vehicles. In many cases, the safety benefits of segregated cycle ways are perceived rather than actual, but perception still matters tremendously to the cycling experience and its adoption by the greater public. Short of preventing all vehicular access to central London (which I would support, though the quantity of buses and taxis would still be significant), there is no way to get the same sense of connectivity and perceived safety without an elevated cycle network.

  • The SkyCycle would be extremely convenient, and the view marvellous. As much as I enjoy cycling at street level, it can be a hugely frustrating experience for longer-distance trips for a few simple reasons: vehicle fumes; getting stuck behind buses and taxis; and having to stop for a traffic light (for what seems like) every few feet…and this has a nasty habit of happening each time you’ve just gotten going again. It is important to remember that bicycles can actually travel quite fast, so, for longer distances, bicycles function less like pedestrians and more like vehicles. The strong point of bikes is that it’s so easy to transition between these two states. Unless you’re a bike courier or a lycra-clad alpha male, the SkyCycle would make cycling the hands-down fastest and most convenient way to get around London.

  • Now for the downside: the proposed concept is hugely ambitious and, for the same reasons, could be very difficult and costly to implement. The engineering logistics are complex and, while certainly not impossible, will be hard to execute elegantly without substantial design effort and cost. The SkyCycle also presents other logistical challenges, like the distances required for the access-ramp gradients, access for ongoing maintenance, and law enforcement on the SkyCycle network. These aren’t small issues, but I’m inclined to think that they can be solved by the right people, given political and public support.

Gargoyles atop Gonville & Caius College in Cambridge

f9, 1/1600, -2ev, 200mm, © Gareth Simons

Cambridge has its fair share of gargoyles, and the specimens perched atop Gonville and Caius College are amongst my favourites.

Robert Moses - getting his fair share of posthumous attention

Robert Moses with Battery Bridge model - Wikimedia Commons.

Robert Moses continues to get his fair share of posthumous attention, and his role in NYC’s history continues to be cemented, though not along the lines that he would have liked. Untapped Cities recently ran an interesting blog post on 5 Things in NYC We Can Blame on Robert Moses, commemorating his birthday by revisiting some of his controversial legacy. Thankfully, not all of his projects came to fruition, notably the Lower Manhattan and Mid Manhattan expressways.

What is interesting about Robert Moses is that he was never elected to public office. Yet, somehow, he was effectively the most powerful man in New York, a story told in the Pulitzer Prize-winning book, The Power Broker. He developed a peculiar form of institution called “authorities”, which were designed to exert great power with little accountability. This was exemplified by the Triborough Bridge Authority:

"Language in its Authority’s bond contracts and multi-year Commissioner appointments made it largely impervious to pressure from mayors and governors. While New York City and New York State were perpetually strapped for money, the bridge’s toll revenues amounted to tens of millions of dollars a year. The Authority was thus able to raise hundreds of millions of dollars by selling bonds, making it the only one in New York capable of funding large public construction projects. Toll revenues rose quickly as traffic on the bridges exceeded all projections. Rather than pay off the bonds Moses sought other toll projects to build, a cycle that would feed on itself." (Wikipedia entry on Robert Moses).

All of this meant that Robert was essentially free to do as he pleased - and in some ways he saw New York as his sandbox - to the extent that it took the President of the United States to stop Robert’s insistence on a proposed Brooklyn Battery Bridge.

Robert was known for being particularly decisive and stubborn, and he dreamt up grand planning schemes that often came at the expense of individual property owners and the poor. He did so even when slight changes might have spared the aggravation. He met his match, however, in Jane Jacobs, a story told in "Wrestling with Moses: How Jane Jacobs Took On New York’s Master Builder and Transformed the American City". Ironically, stoking the ire of Jane Jacobs could be seen as one of his greatest, though perhaps most unintended, legacies.

IBM 5 in 5 - smart city induced utopia?

Apparently, we are rapidly approaching the dawn of a technologically induced utopia - a promised land of sorts - a (not so) new claim that all of our problems are rapidly becoming a thing of the past…because overcrowded buses and late pizza will be resolved by the smartening-up of cities.

According to IBM’s 5 in 5 predictions:

"…cities can be hard unforgiving places to live…cities are tough, because they require us to live on their terms, but in five years the tables will turn. With cities adapting to our terms, with cloud-based social feedback, crowdsourcing, and predictive analytics, we’ll shape our cities to our evolving wants and needs, comings and goings, and late-night pizza hankerings. By engaging citizens, city leaders will be able to respond directly to our needs, and dynamically allocate resources…and pizza."

No wonder there is an increasingly concerned chorus of critics objecting to the notion of “smart cities”: they sense that corporate interests and government departments will try to leverage the new technologies from the top down, instead of through the bottom-up approach preferred by an increasingly empowered citizenry.

There is a bit of truth mixed in with the hype. It is true that bottom-up crowdsourced information feedback allows the city to self-organise - to dynamically adapt to both new and old opportunities and challenges - and to develop a sort of self-regulating city ‘consciousness’. But a more nuanced view is necessary when it comes to forecasting the end of all evils due to the implied top-down mastery of all things complex…and this due to the somewhat simplistic notion of government officials sitting behind giant screens in new control centres.

A more rounded perspective can be found in new books by Anthony Townsend and Mike Batty, with a solid review (of both books) available from the New Scientist.

Steve Rayner on Path Dependence in Cities

An interesting presentation by Steve Rayner in which he discusses the significance of Path Dependence and “lock-in”.

Path dependence explains how the set of decisions one faces for any given circumstance is limited by the decisions one has made in the past, even though past circumstances may no longer be relevant. (Wikipedia)

Steve explains that our cities are significantly impacted by past innovations and decisions…such as the location of streets, the invention of the car, and technologies like electric light, flushing toilets, and elevators.

Lock-in through path-dependence can end up causing cities and processes to work in ways that are no longer efficient or sensible. Some kind of mechanism is necessary to allow for flexibility or a radical break in order to escape from the status quo. This is largely what Steve’s Flexible City website is about.

A particularly amusing example in Steve’s presentation is that the size of the space shuttle’s rocket thrusters was determined by the width of a horse’s ass…see the video for details.
