I initially called this talk I gave for the Willem de Kooning Academy’s CrossLab night ‘New Design for a New Aesthetic’, but I reconsidered that title. Not because of the person who took semantic issue with the idea of a ‘new aesthetic’; I couldn’t care less about that. The idea that there can be a new design that addresses the issues within the New Aesthetic is simply too ambitious. We cannot possibly succeed, which is why I’m calling the discipline we’re engaged in ‘designing in the face of defeat’ (I blogged about this before), and it is what we will be doing for the foreseeable future.
I pre-rolled a screencapture of Aaron Straup Cope’s Wanderdrone to add some ominous foreboding to the mix of design/advertising enthusiasm permeating the room. CrossLab billed the night as being about Dynamic Design, which I didn’t really get, but I reprised an old talk about algorithmic design, now heavily updated to incorporate current thinking about algorithms, the New Aesthetic and object-oriented ontology.
Our offices in Berlin-Kreuzberg
Since we at Monster Swell do a lot of work with maps, we are affected by the fact that mapping is being turned on its head. And it’s not because there aren’t enough interesting maps; there are more now than ever before. Just to show a couple:
Timemaps, a map of the Netherlands distorted by the amount of travel time required during various times of day:
But right now there is a projection inversion going on: much of the time we no longer project the real world onto flat surfaces and call those maps, but instead overlay the maps themselves back onto reality. And what we call maps no longer needs to bear any relation to physical reality; we can map anything onto anything using any (non-)geometric form we choose.
This is mainly a consequence of us putting the internet into maps. But if you think about it, the internet is not the only place where we put maps; by now we put the internet into pretty much everything.
So maps are creeping back into the real world, and we get odd clashes when we try to overlay a map back onto the territory or when we try to perfectly capture a capricious world, as you can see in these Google Maps and Street View examples: 1, 2, 3, 4. I don’t know how long they will remain online over at the New Aesthetic Tumblr, since James Bridle has now closed it down.
We got QR codes to enable the machine-readable world. They hardly have any real-world use (just go over to the WTF QR Codes Tumblr), but they function as cultural icons, precursors of a strange and inscrutable future.
And even more interestingly they are being used for instance by the Chinese to calibrate spy satellites. So these are maps on the earth that are being used to create better maps of the earth.
The New Aesthetic is when this kind of projection inversion happens more widely, not just in the realm of maps, but in all of the places in the world that the internet touches. By now that is nearly everything. The examples that were being collected over at the New Aesthetic Tumblr showed how the arts were picking up on this trend.
All of these things have been created by algorithms, which are not as mysterious as many people make them out to be. Algorithms are how computers work and, increasingly, how the world works. They codify behaviour, and to quote Robert Fabricant, for designers ‘behaviour is our medium’. Being a designer should entail more than a passing knowledge of and proficiency with algorithms. We are moving into a world where creative work is becoming procedural. The most important media are prescriptive, setting rules for the world, more than they are descriptive, depicting the world.
The real problem with algorithms is that they often involve us but are completely alien to us (in the Bogostian sense). They are operationally closed. Operational closure means that things may work in ways that are not at all obvious to us, neither at first glance nor after we poke into them, because any sense we make of them is either partial or does not translate into our frame of reference. Algorithms take inputs and produce outputs, but the way they operate on these has nothing to do with how we as humans think about the world (‘think’ is not even the right word, but we can only relate to them from our human cognition). The machines see us, but they do not ‘see’ us in any way we would recognize as seeing, and we have no idea what it is that they see.
The ability to see faces in things is a basic aspect of our visual pattern recognition. When we teach that same skill to computers, we get unexpected consequences. It is the same with the flash crash on the stock market that happened in the blink of an eye without anybody really knowing what caused it. The rationales of the algorithms are opaque to us and their emergent behaviour is unpredictable.
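To make the machine-vision point a bit more concrete, here is a minimal sketch (not anything from the talk) of how such ‘seeing’ typically works, using OpenCV’s stock Haar cascade face detector. The image path and tuning values are made up; loosening the minNeighbors parameter is exactly what makes the detector start finding faces in rocks, clouds and power sockets:

```python
# A minimal sketch of machine pareidolia using OpenCV's bundled Haar cascade.
# The image path is hypothetical; the detector has no notion of a face, only of
# gradient patterns that statistically co-occur with faces in its training data.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("not_a_face.jpg")            # e.g. a rock formation or a plug socket
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# With strict settings the detector stays quiet; relax minNeighbors and it
# starts to "see" faces in places where we would too, and in places we would not.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=2)

for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("what_the_machine_saw.jpg", image)
```

What it detects under those relaxed settings is neither a face nor an error; it is simply what the statistics of its training data make of the pixels, which is the alienness described above.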
As Kevin Slavin mentioned in an interview: the more autonomous the algorithms are and the more effects they have on our daily lives, the more we may be accommodating them without realizing it.
There was this story recently that scientists have created a robot fish that is so good at mimicking the behaviour of regular fish that it can become their leader. This is what worries me. Who says we are not all following robot fish most of the time?
That, I think, is the biggest challenge for designers right now: to create systems that harness the open and generative power of the internet while remaining human and aligned with human interests. One way would be to make the internals of algorithms transparent so that people can enter into an informed relationship with them.
Unfortunately there are no magic bullets for this, whatever your local design visionary has been telling you. There never have been. Everything is made up of withdrawn objects that are mediated towards one another with unexpected consequences. To quote Graham Harman in Prince of Networks:
“the engineer must negotiate with the mountain at every stage of the project, testing to see where the rock resists and where it yields, and is quite often surprised by the behaviour of the rock.”
There are no ideas that will solve all problems, there are no products that will do everything. There is only the work through which we may gain more understanding and make better things. So with that, I hope we all can do good work.
Residents of the EU can request from Twitter all of the data it has stored about them, in accordance with European data protection laws (just follow the steps). Some Twitter users have requested their data and filled in the necessary paperwork, and after a while they received all of their records, including a file with all of their tweets in it.
I had seen Martin Weber’s post about this before, but when I saw Anne Helmond post about her experience as well, I was prompted to carry out an idea I’d had for a while: to import an entire Twitter archive into Thinkup to complement the partial archive it contains of my longtime Twitter use (since September 2006).
I use Thinkup enthusiastically myself, to supplement existing archival, statistics and API functionality around the web and, more importantly, to have it under my own control. These services serve as my social memory and it is nice to have a copy of them that can’t disappear because of some M&A mishap. It has proven useful more than once to be able to search through either all of my tweets or all of my @replies. But as noted, Thinkup can only go back 3200 tweets from when you first install it because of Twitter API limits. For people like me (35k tweets) or Anne (50k tweets), that’s just not enough.
I installed a new Thinkup on a test domain and asked for (sample) files from Anne and Martin and went at it. Command-line being the easiest, I took the upgrade.php script, ripped out most of its innards and spent an afternoon scouring the Thinkup source code to see how it does a Twitter crawl itself and mirrored the functionality. PHP is not my language of choice (by a long shot), but I have dabbled in it occasionally and with a bit of a refresher it is pretty easy to get going.
I finally managed to insert everything into the right table using the Thinkup DAO but it still wasn’t showing anything. Gina Trapani —Thinkup’s creator— told me which tables I had to supplement for the website to show something and after that it worked! A fully searchable archive of all your tweets in Thinkup.
The code is a gist on Github right now and not usable (!) without programming knowledge. It is hackish and needs to be cleaned up, but it works (it should scan available instances and only import tweets if they match an instance in your install, among many other things). Ideally this would eventually become a plugin for Thinkup, but that is still a bit off.
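For those curious about the shape of the hack without reading the gist: the real code is PHP and goes through Thinkup’s own DAO, but the rough idea can be sketched in a few lines. Everything below is illustrative only; the export file format, the table layout and the column names are assumptions, not Thinkup’s actual schema:

```python
# Rough Python sketch of the import idea; the real hack is PHP using Thinkup's DAO.
# Assumptions (all hypothetical): the export file holds one JSON object per tweet
# per line, and the target table/columns only loosely resemble Thinkup's posts table.
import json
import sqlite3  # stand-in for Thinkup's MySQL database

conn = sqlite3.connect("thinkup_sketch.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS posts (
        post_id TEXT PRIMARY KEY,   -- the tweet's id
        author_username TEXT,
        post_text TEXT,
        pub_date TEXT,
        network TEXT
    )
""")

with open("tweets_export.txt", encoding="utf-8") as f:  # the file Twitter sent
    for line in f:
        line = line.strip()
        if not line:
            continue
        tweet = json.loads(line)
        conn.execute(
            "INSERT OR IGNORE INTO posts VALUES (?, ?, ?, ?, ?)",
            (
                str(tweet["id"]),
                tweet.get("user", {}).get("screen_name", ""),
                tweet["text"],
                tweet["created_at"],
                "twitter",
            ),
        )

conn.commit()
print("imported", conn.execute("SELECT COUNT(*) FROM posts").fetchone()[0], "tweets")
```

The fiddly part in practice was not this loop but filling the auxiliary tables Thinkup needs before the web interface shows anything, which is what Gina pointed me to.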
What’s the point of all this? There are a couple:
First it shows that data protection laws such as the ones we have in Europe do have an effect (see also for instance: Europe v. Facebook). Even on the internet laws have teeth and practical applications. Data protection laws can be useful if they are drafted on general principles and applied judiciously.
But the result you get, a massive text file in your inbox, is not the most usable way to explore half a decade’s worth of social media history. That’s where Thinkup comes in. Its brilliant functionality serves as a way to make this data live again and magnifies for each person the effect of their data request.
Secondly, for any active user of Thinkup, supplementing their archive with a full history is a definitive WANT feature. Twitter has been very lax in providing access to more than the last 3200 tweets. If a lot of users used this analog API to demand their tweets, Twitter might be forced to create a general solution sooner.
Lastly, Thinkup has applied for funds with the Knight Foundation to turn itself into a federated social network piggy-backed on top of the existing ones. Thinkup would draw in all of the data that is already out there into its private store and then build functionality on top of that (sort of an inverse Privatesquare). Having access to all of your data would be a first step for any plan that involves data ownership and federation.
I presented this hack yesterday at the Berlin Hack and Tell. Your ideas and comments and help are very welcome.
The agenda is filling up again just before the summer break. Alper will speak at:
May 24th – Technical review of city dashboard concepts at HvA
A brief bit of teaching with design and technical critique of city visualization dashboards developed by students.
May 25th – Apps for Amsterdam Awards Night
Judging and attending the awards for the Amsterdam open data application contest.
May 27th – What Design Can Do
Presenting an engaged data-centric approach for designers’ benefit (blurb).
Here are the slides for a talk I gave at /dev/haag last Friday, ambitiously titled “Fixing Reality with Data Visualization”, which was well received. I promised to write it up here, so here it is.
Starting off with some introductions: we are Monster Swell, and this equation is the central challenge of our practice.
To start with the title inspiration for this talk. I recently finished this book by Jane McGonigal.
“Reality is Broken” by Jane McGonigal recently came out, and while its title isn’t really true, it is quite opportune. Reality isn’t broken, but there is, as always, a lot that can be improved. Slapping a gamification label on that is a false exit because it implies that such improvement can be done easily by the magic of games.
The core idea of the book is that:
1. Reality can be fixed by game mechanics (voluntary participation, epic stories, social collaboration, fitting rewards), and
2. Reality should be fixed by game mechanics.
Both of these points, the possibility and the desirability of such a fix, are the subject of fierce debate within game design circles and without.
We are now seeing a superficial trend of gamification, badge-ification and pointification, where everybody is rushing to add as many ‘game-like’ features as possible to their application or concept in order to look tuned into the fun paradigm.
Fortunately this does not work. Checking in for points and badges is fun at first, but is hardly a sustainable engagement vector. Foursquare mostly did a bait and switch with their game until they got enough critical mass to be useful along other vectors.
Things that are difficult remain difficult even if they are gamified. ‘An obstacle remains an obstacle even with a cherry on top.’
Ian Bogost terms this exploitationware. Our own discussions concluded that if you are not the one playing, you are being played.
In our practice we look for deeper ways to engage people and affect them. There are hardly any one-to-one mappings to be found and the effects that are most worthwhile are the higher order ones. As Kars Alfrink says:
“We don’t tell them to coordinate, we create a situation within which the way to win is to coordinate.”
Corollary: A game about violence does not immediately make people violent.
But another way of looking at a game of go might render it as a map. The metaphor of men and liberties and territory to occupy already points towards that comparison.
Looking at it in another way it could also be a Cartesian grid with binary data values plotted onto it. A data visualization of a phenomenon we don’t know (yet).
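A trivial sketch shows how thin that line is: hand a plotting library a (made-up) board position as a matrix of values and it renders as readily as any other dataset:

```python
# A go position treated as plain data: a 9x9 matrix of empty/black/white values
# rendered with a generic plotting routine. The position itself is invented.
import numpy as np
import matplotlib.pyplot as plt

EMPTY, BLACK, WHITE = 0, 1, 2
board = np.zeros((9, 9), dtype=int)
board[2, 2] = board[3, 4] = board[6, 6] = BLACK
board[2, 6] = board[4, 3] = board[6, 2] = WHITE

# imshow neither knows nor cares that this is a game; it is just values on a grid.
plt.imshow(board, cmap="viridis", interpolation="nearest")
plt.title("A board position, or a dataset?")
plt.colorbar(ticks=[EMPTY, BLACK, WHITE])
plt.savefig("goban_as_data.png")
```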
Coming back to the map parallel, this picture of center pivot irrigation systems (by NASA) in Garden City, Kansas looks awfully similar to the goban and this is just an aerial photograph with some processing applied to it.
So to come to this point:
‘Any sufficiently abstract game is indistinguishable from a data visualization.’
The difference is just that a game is a visualization of a game model and its rules. The whole point of playing a game is learning those rules, and fully uncovering the model of the game is in essence ‘breaking’ it. After that point it usually ceases to be fun.
And its complementary point:
‘Any sufficiently interactive data visualization is indistinguishable from a game.’
And indeed the best ones are highly interactive and offer various controls, abstraction levels and displays of data deep enough to engage users/players for a long time. It is also the reason that in our practice we don’t occupy ourselves much with visualizations in print media.
To continue the point about games: many games are either quite concrete or very abstract simulations. This is most obvious with sim games such as Sim City pictured below.
Simulations are subjective projections of reality, both because of the choices the designer of the simulator has embedded in the projection and because of the interpretation of the player: how their ingrained notions allow them to read the simulation.
Ian Bogost (picture) in his book Unit Operations coins a state of being called ‘Simulation Fever’.
Bogost says that all games in some way are simulations, and that any simulation is subjective. The response people have to this subjectivity is one of either resignation (uncritically subjecting oneself to the rules of the simulation, taking it at face value) or of denial (rejecting simulations wholesale since their subjectivity makes them useless). Taken together, Bogost calls these reactions simulation fever. A discomfort created by the friction between our idea of how reality functions and how it is presented by a game system. The way to shake this fever, says Bogost, is to work through it, that is to say, to play in a critical way and to become aware of what it includes and excludes.
I think we could use the correspondence between games and visualizations to coin a corresponding term called Visualization Fever.
Those are my most important points, that good and interesting games and good and interesting data visualizations share many of the same characteristics. We can use data and its correspondence with reality (or lack thereof) to create a similar fever.
(This graphic is somewhat rudimentary but it was made within Keynote in five minutes and I hope it gets the point across.)
The visualization process shares a lot of similarities with the open data process that we are involved in. It is a perpetual conversation, and the visual part is only one place where it can be improved. Data collection, discussion of results and errors, sharing of data and the resulting products, controllability of the outputs, being able to remix and reuse them, and feeding this process back into the world of atoms are all areas that need active participation.
There is nothing easy about this. It is a ton of hard work and long tedious conversations. Fortunately most of it is worth it.
Some examples of visualization fever in action.
Verbeter de Buurt is the Dutch version of See Click Fix and it works admirably. It creates a subjective map of an area with the issues that a group of people have signalled in their neighbourhood. Nothing is really said about who these people are or whether these issues are indeed the most pressing ones (we all know the annoying neighbour who complains about dog poo to whoever will listen). By making issues visible, this map imposes its view of the city onto the councils and exerts change.
Planning systems at an urban scale is a very difficult process. These planning stages are being opened up to the general public through consultation and other means, but it remains to be seen if and how citizens can comprehend the complex issues that underlie city planning.
One step to help both experts and laypeople come to grips with the city they inhabit is to create macroscopes: views that show an entire system, at full scale and with everything in it, in such a way that we can make (some) sense of it. These Flowprints by Anil Bawa-Cavia are a great example of doing this for public transportation.
And done right, these visualizations can reveal the systems of the world, or in this case the flow of trains in the Netherlands. Everybody knows how crowded Dutch rail is and which trains go where along which routes, but actually seeing it happen in front of your eyes in a real-time visualization gives you an insight into and a tangible grip on the system that you did not have before.
So what do we fix?
We use visualizations and their compressed interactive views to expose system design choices and errors. They can also be used to give depth to a specific point, something journalists are increasingly finding necessary. People consuming data-heavy news want to be able to poke at that data themselves.
A lot of visualizations I have seen thus far serve little more than to reinforce pre-existing judgements, almost as if the person creating the visualization set out to build that which they wanted to see. Visualizations will need to be better, more flexible and draw upon more data if we want to break out of these troughs of shallow insight.
The brief, as also stated by the nice people at Bloom, is that having a visualization serve solely as visual output is too limited a use of the interactions created. You should be able to use the same interactions in the visualization to influence the underlying model, either directly or indirectly. That is to say, the model and the representation should influence each other in both directions.
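A toy sketch of what that bidirectional influence means in code (the names and structure are entirely illustrative, not Bloom’s or anyone else’s actual API): the same object that renders the model also writes back into it, so interacting with the picture is interacting with the data:

```python
# Illustrative sketch of a bidirectionally bound model and visualization.
# Nothing here is a real library API; it only shows the shape of the idea.
class Model:
    def __init__(self, values):
        self.values = values
        self.listeners = []

    def update(self, index, value):
        self.values[index] = value
        for listener in self.listeners:
            listener(self)          # model changes push back out to every view


class BarVisualization:
    def __init__(self, model):
        self.model = model
        model.listeners.append(lambda m: self.render())

    def render(self):
        for i, v in enumerate(self.model.values):
            print(f"{i:2d} | {'#' * v}")

    def drag_bar(self, index, new_value):
        # The "interaction" is not just a visual effect: it writes into the model,
        # which in turn re-renders this (and any other) view.
        self.model.update(index, new_value)


model = Model([3, 5, 2])
viz = BarVisualization(model)
viz.render()
viz.drag_bar(1, 8)   # dragging a bar changes the underlying data, not just the picture
```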
Planetary, the latest app by Bloom, is a great example of that. It shows you a beautifully crafted astromusical view, but it also allows you to play your music library from within that very same visualization.
We need to bring visualization and deep data literacy to the web and infuse every relevant site and system (that is to say, all of them) with it. Many people asking for data visualization think it is some magical fairy dust that will make a site awesome by its very touch. This is of course not true.
Data and interactive visuals can generate value and insight for any site that employs them properly.
In his presentation Data Visualization for Web Designers, Tom Carden remarks that web developers already know how to do all this. These are exactly the tools we have been employing over the last years to create interactive experiences (and we plan to use them more and more).
Internet Explorer is still the crippled old man of the web, but given understanding clients (and users) and some compatibility layers, you may be able to get away with using a lot of this stuff as long as the result is awesome enough.
The other trend is the idea that bridges need to be built between web people and GIS people, preferably by creating GIS-like experiences using the affordances the web provides. It is a trend we had been thinking about and which Mike Migurski neatly summarized (blog) at a #NoGIS meetup.
GIS people have tremendous tools and knowledge, but they are not accustomed to working in a very web way: quick, usable, beautiful. Web people can build nice sites pretty quickly, but they tend to fall flat when they need to work with geographical tools more complex than the Google Maps API.
If we can combine these two powers, the gains will be immense.
We can create subjective views to exert power upon reality and try to fix things for the better. The subjectivity is not a problem, as often the values embedded in the views are the very point. Subjectivity creates debate and debate moves things forward.
The tools we have to create these views are getting ever more powerful, but there is also a lot of work to be done.
As a wise man said: “The best way to complain is to make things.” (picture)
For the past weeks Alper has been giving lectures at the Willem de Kooning Academy on the subject of data visualization. The students will be busy creating their projects in the coming weeks and we eagerly anticipate their results.
We will be represented at the Cognitive Cities conference in Berlin this weekend to talk about city data visualization. And next week we’ll be at the Infographics conference, trying to talk some sense into those who think print is the be-all and end-all of data.
Next Wednesday Alper will be presenting on Foursquare in the Netherlands, its past, its present and its future, at Social Media Club 030 #8.
Alper will also join a Hack de Overheid team of experts to provide technical support at the Conference for Investigative Journalism in Ghent. We will aid journalists with their data issues and questions and we will also develop an application during the conference based on the demand we see.
Update: the slides of the presentation at SMC have been posted to their Slideshare.