Fictions Matter Too: A Vision for an Imaginaries Lab in Design

“If men define situations as real, they are real in their consequences”
William Thomas and Dorothy Swaine Thomas, 1928 — later named the ‘Thomas Theorem’

Billboard in Bloomfield, Pittsburgh, PA, 2017

The events of the last couple of years, from Brexit to Trump, have been a vivid demonstration for our time of the power of the imaginary to affect human affairs. Not for the first time, of course — but amplified in an unprecedented way by algorithms, bots, targeting, and strategic use of personal data via social media — huge decisions are being influenced by imagined versions of what ‘reality’ is.

We cannot avoid trying to work out how to make sense of terms such as alternative facts, fake news, and post-truth as being part of everyday discourse, and incorporating them and their effects into our own models of how the world works. As Maciej Ceglowski says, people “will happily construct alternative realities for themselves, and adjust them as necessary to fit the changing facts,” and this is greatly aided by the technological infrastructures being employed by those who want to control public opinion. The powerful are, as always, those who can create the simplest, easiest to spread, most superficially persuasive images, myths, conceptions, metaphors, frames, cause-and-effect pairings, and indeed stories, in the public mind. We shouldn’t be surprised: it’s not like it hasn’t happened before, in other eras, using different means, and we all know the outcomes of that. Fictions are political, and they matter.

Shared fictions as central to society

If I were better informed by sociological theory, I could make more insightful points here about Arjun Appadurai’s consideration of “the imagination as a social practice… a form of negotiation between sites of agency (‘individuals’) and globally determined fields of possibility”, or about the concept of imaginaries in a sociotechnical sense — the specific concept developed by Sheila Jasanoff, Sang-Hyun Kim, and others around the ways in which certain dominant ‘shared’ visions of societal futures, centred on particular types of (technological) progress, have effects on what happens in the present — “representations of how the world works — as well as how it should work”. It’s arguable that our shared (or not) visions of what climate change, artificial intelligence, immigration, identity, law, ‘sovereignty’, or even countries themselves actually are matter for understanding our current situation and trajectory, and also that, historically, these visions have played potentially vital roles in how human civilisations and societies developed. Yuval Noah Harari suggests that “Any large-scale human cooperation — whether a modern state, a medieval church, an ancient city or an archaic tribe — is rooted in common myths that exist only in people’s collective imagination”, and that this is partly due to the emergence of the ability to describe the imaginary in language, to “transmit information about things that do not exist at all… entities that [people] have never seen, touched or smelled.”

“We risk being the first people in history to have been able to make their illusions so vivid, so persuasive, so ‘realistic’ that they can live in them.”
Daniel J. Boorstin, The Image: A Guide to Pseudo-Events in America, 1962.

Design and imaginaries

The idea of design (and art more broadly) as a different form of language which can also describe the fictional or imaginary, making it real enough to be addressable, to be considered and critiqued and reflected on, is interesting. Design has the power to make visible and tangible imagined ‘better’ (or worse) situations, to design artefacts as ‘tokens of better ages’, to apply ideas of utopia as a method, and to inspire and open up vistas — if not always actual maps — towards different futures, through speculation and design fiction. What do designers do, if not, in some sense, give us experiential pockets of imaginaries — both our own, reflected back at us, and visions of different futures, fictional at present? I find Clive Dilnot’s notion of design simultaneously stating “This!” and asking “This?” a clear way of thinking about this, because the ‘This?’ implicitly allows for speculation which is critical, which we may interpret as warnings, or at least as provocations to think further about what the consequences of the proposition in question might be. By making our own imaginaries (more) visible, and doing the same for others’, whether new or old, design can be a translator between minds and ideas and the world. This is where I see that design essentially makes fictions matter (dual meaning intended).

“Dreams are true while they last, and do we not live in dreams?”
Tennyson, The Higher Pantheism, 1867

There can be a self-fulfilling nature to imaginaries, as the Thomas Theorem implies. If we believe something to be real, act as if it is real, and build institutions and infrastructures around that ‘reality’, the effect may be the same as if it had been real in the first place. Fictions become fact. For example, Stephen Metcalf discusses the self-fulfilling quality of imagining society as a market: “The more closely the world can be made to resemble an ideal market governed only by perfect competition, the more law-like and “scientific” human behaviour, in the aggregate, becomes.” In a design context, the idea of a kind of circular causality, in which designers’ imaginaries (models, or even stereotypes, we might say) of people’s lives end up being designed into systems which then effectively make those imaginaries real, is not uncommon (I looked briefly at this kind of effect in this piece for the recent Science Gallery Dublin staging of Design and Violence). There’s something here close to Anne-Marie Willis’s idea of ontological designing, or various formulations of the “We shape our X, and then our X shapes us” idea by Churchill, McLuhan, Bill Mitchell, and others — we shape our imaginaries, and then, by acting on them, designing systems around them as if they were real, they come to shape our actions.

Understanding understanding

In design, human-computer interaction, and human factors research, both academic and applied, we often investigate the mental models people have, or appear to have, when they are using a piece of technology, or a system. We try to find out how they think something works, or how they expect it to work, from driverless cars to government, to heating systems, to website structure, and, learning from those insights, try to (re)design those systems, or at least interfaces to those systems. The redesigns either try to match better how people think something works, or — more rarely but more interestingly — change those models.

“When we don’t know how a thing works, we make it up”.
“We can only trust something if we think we know how it works”.
Louise Downe (now Head of Design for the UK Government), Chicken Shops, Platforms and Chaos, 2013.

Most of the research I’ve done over the last ten years, which started in questions of how people’s behaviour is influenced by the design of the products, services, and environments they use, has moved towards something much more around using design methods to understand people’s situations, the social and environmental contexts in which people live and make decisions, how they are thinking about what they’re doing and the world more widely, and what agency they have to change things. Understanding understanding (or at least trying to) — investigating how people imagine and make sense of the world — seems as though it ought to be central to any form of design research which claims to be human-centred, and the generative, or future-facing complement is enabling people to have new understandings, new imaginaries. If you’ve followed any of my more recent work, it’s been a kind of patchy way of gradually — driven by the opportunities afforded by different funded projects and teaching needs — addressing some of these questions of current and new imaginaries, from investigating mental imagery and new kinds of display for energy, to forms of design fiction as a way of enabling students to explore consequences and ambiguity, re-imagine what interactions with AI could be, and materialise invisible phenomena.

“The future is not empty. The future is loaded with fantasies, aspirations and fears, with persuasive visions of the future that shape our cultural imaginaries.”
Ramia Mazé, ‘Forms and Politics of Design Futures‘, 2014

What the Imaginaries Lab aims to do

Part of my reason for joining Carnegie Mellon a year ago was the opportunity to build a research (and teaching) platform which explores exactly these kinds of ideas in a more structured way, through a design lens. The Imaginaries Lab is small, and so far internally funded at Carnegie Mellon, but since the start of 2017, a team of graduate research assistants and I have been looking at people’s imaginaries of local government in Pittsburgh (and their agency in relation to it), ways of externalising mental imagery through landscape metaphors, and approaches to new kinds of qualitative interface. We had a ‘soft launch’ in May, during Carnegie Mellon’s Design Week, and in the coming year will be expanding and continuing these projects and developing new collaborations and directions. One of these already announced is Electric Acoustic, a situated energy sonification installation funded by the Carnegie Mellon College of Fine Arts, but there are also some other interesting ideas in the pipeline.

So, what’s the vision for the Lab? I see us concentrating on two big (linked) challenges: New ways to understand, and New ways to live. In both cases, we’ll be creating tools to support people’s imagining, both what they already imagine (which is still important), but also helping people imagine in new ways. What starts as fiction can become real, explorable, experiential. We will be creating new fictions, but also creating tools to help people understand and deconstruct the fictions that are already having an effect on them. The Lab’s work cannot help but be political: questions of understanding and futures are inextricable from questions of worldview, belief in how the world is and how it should be.

New ways to understand encompasses ideas such as creating new metaphors (to use Mary Catherine Bateson’s term), new kinds of interface, new ways of explaining and visualising systems and the relationships between ideas, and using design methods to help people have agency to use these new ways of understanding. This builds on projects such as Powerchord, Drawing Energy, Qualitative Interfaces, Mental Landscapes, Materialising the Invisible, and aspects of Civic Visions, taking some of these ideas in new directions and finishing or consolidating some of the work we have already done. One particular domain that seems especially worth exploring from a design point of view is imaginaries around artificial intelligence and automation — to offer some ethical perspectives that could help designers working in the field, but also to “develop alternative narratives to technological futures” in Dunne & Raby’s words. More widely, new ways to understand could have a substantially activist stance, helping counter the intentional fictions of the post-truth world and giving people agency to challenge and change things, in their communities and beyond.

New ways to live is more explicitly about linking imaginaries to everyday life (and indeed changes in practices and behaviours) through prototyping new ways of living — and helping people imagine new ways of living, both at a household and societal level (thus linking more explicitly to the ‘sociotechnical imaginaries’ notion in sociology as discussed earlier). What is it like to live in a different way, with different premises to your everyday routines? How can design fictions that you can actually use (or live ‘in’), together with new tools for understanding the world, affect what you do? This builds on the work I did around living labs and design for behaviour change, intersecting with some of the ideas in Carnegie Mellon’s transition design research area, and learning from the experiential futures work of futurists such as my new Carnegie Mellon colleague Stuart Candy. ‘New ways to live’ is going to involve some bigger kinds of projects, with more ambitious goals.

As a Lab, we will grow slowly — I don’t want to be spending the entirety of my time looking for funding for the next project — but one of the things that excites me about doing this is that it is, in itself, an exploration of the power of imaginaries. Putting the lab’s name on the office door and in my email signature, and treating it as a real thing within the university and externally, has made it a real thing, in a way which was refreshingly simple. It’s not now a fiction, but once upon a time, it was — as with every other design project and every other human endeavour. We can bring different worlds into being.

Imaginaries Lab, Carnegie Mellon School of Design

Above, right: The Imaginaries Lab team, May 2017. Left to right: Silvia Mata-Marin, Dan Lockton, Delanie Ricketts, Nehal Vora, Theora Kvitka, Ashlesha Dhotey

Parts of this article are based on talks I have given this year at Cornell University (the Hillier Lecture) and at the Universidad del Desarrollo in Santiago.

I’d like to thank Delanie Ricketts, Theora Kvitka, and Nehal Vora for their work with the Lab on its first few projects and wish them the best of luck in their new careers, thank Sarah Foley for her summer research work on service fictions, welcome back Ashlesha Dhotey and Silvia Mata-Marin, and also welcome our new research assistants joining this fall, Devika Singh, Matt Prindible, and Shengzhi Wu. Thanks too to Sebastian Deterding for putting me on to the Thomas Theorem, which expresses succinctly something that otherwise would have led to a rambling explanation on my part, and to Cameron Tonkinwise and Peter Scupelli for encouraging me to put the name on the door.

Thinking About Things That Think About How We Think

Cross-posted from the Environments Studio IV blog, Carnegie Mellon School of Design

We often hear the phrase ‘intelligent environments’ used to describe spaces in which technology is embedded, in the form of sensors, displays, and computational ability. This might be related to the Internet of Things, conversational interfaces, or emerging forms of artificial intelligence.

But what does ‘intelligence’ mean? There is a long history of attempts to create artificial intelligence — and even to define what it might mean — but the definitions have evolved over the decades in parallel with different models of human intelligence. What was once a goal to produce ‘another human mind’ has perhaps evolved into trying to produce algorithms that claim to ‘know’ enough about how we think to be able to make decisions about us, and our lives. What we have now in ‘intelligent’ or ‘smart’ products and environments is one particular view of intelligence, but there are others, and from a design perspective, designing our interactions with those ‘intelligences’ as they evolve is likely to be a significant part of environments design in the years ahead. Is there an opportunity for designers to explore different kinds of interactions, different theories of mind, or to envisage new forms of intelligence in environments, beyond the dominant current narrative?

Building on the first two projects’ treatment of how humans use environments, and how invisible phenomena can be materialized, for this project the brief was to create an environment in which visitors can experience different forms of ‘intelligence’, through interacting with them (or otherwise experiencing them). The project was not so much about the technical challenges of creating AI, but about the design challenges of enabling people to interact with these systems in everyday contexts. So, quick prototyping and simulation methods such as bodystorming and Wizard of Oz techniques were entirely appropriate—the aim was to provide visitors to the end-of-semester exhibition (May 4th, 2017) with an experience which would make them think, and provoke them to consider and question the role of design in working with ‘intelligence’.

More details, including background reading, in the syllabus.

We considered different forms of behaviour, conversation, and ways of thinking that we might consider ‘intelligent’ in everyday life, from being knowledgeable, to being able to learn, to solving problems, to knowing when not to appear knowledgeable, or when not to try to solve problems. If one is thinking about how others are thinking, when is the most intelligent thing to do actually to do nothing? Much of what we considered intelligent in others seemed to be something around adaptability to situations, and perhaps even adaptability of one’s theory of mind, rather than behaving in a fixed way. We looked at Howard Gardner’s multiple intelligences, with interpersonal, or social, intelligence seeming especially interesting from a design and technological point of view — more of a challenge to abstract into a set of rules than simply demonstrating knowledge, a condition where the feedback necessary for learning may not itself be clear or immediate, and where the ability to adjust the model assumed of how other people think is pretty important. How could a user give social feedback to a machine? Should users have to do this at all?

Each of the three resulting projects considers a different aspect of ‘intelligence’ from the perspective of people’s everyday interaction with technologies in the emotionally- and socially-charged context of planning a party or social gathering, and some of the issues that go with it.

Gilly Johnson and Jasper Tom’s SAM is an “intelligent friend to guide you through social situations”, planning social gatherings through analysing interaction on social networks, but also with Amazon Echo-like ordering ability. It’s eager to learn—perhaps too eager.




Ji Tae Kim and Ty Van de Zande’s Dear Me, / Miyorr takes the idea that sometimes intelligence can come from not saying anything — from listening, and enabling someone else to speak and articulate their thoughts, decisions, worries, and ideas (there are parallels with the idea of rubber-duck debugging, but also ELIZA). In this case, the system is a kind of magic mirror that listens, extracts key phrases or emphasised or repeated ideas, and (in conjunction with what else it knows about the user) composes a “letter to oneself” which is physically printed and mailed to the user. Ty and Ji Tae also created a proof-of-principle demo of annotated speech-detection that could be used by the mirror.



Chris Perry’s Dialectic is an exploration of the potential of discourse as part of decision-making: rather than a single Amazon Echo or Google Home-type device making pronouncements or displaying its ‘intelligence’, what value could come from actual discussion between devices with different perspectives, agendas, or points of view? What happens if the human is in the loop too, providing input and helping direct the conversation? If we were making real-world decisions, we would often seek alternative points of view—why would we not want that from AI?

Chris’s process, as outlined in the demo, aims partly to mirror the internal dialogue that a person might have. Pre-recorded segments of speech from two devices (portrayed by paper models) are selected from (‘backstage’) by Chris, in response to (and in dialogue with) the user’s input. There are parallels with “devices talking to each other” demos, but most of all, the project reminds me of a particular Statler and Waldorf dialogue. In the demo, the devices are perhaps not seeking to “establish the truth through reasoned arguments” but rather to help someone order pizza for a party.


Exploring Qualitative Displays and Interfaces

Windsock on Burgh Island, Devon

by Dan Lockton, Delanie Ricketts, Shruti Aditya Chowdhury (Imaginaries Lab, Carnegie Mellon School of Design) and Chang Hee Lee (Royal College of Art)

Much of how we construct meaning in the real world is qualitative rather than quantitative. We think and act in response to, and in dialogue with, qualities of phenomena, and relationships between them. Yet, quantification has become a default mode for information display, and for interfaces supporting decision-making and behaviour change.

There are more opportunities within design and human-computer interaction for qualitative displays and interfaces, both for information presentation and as an aid to help people explore their own thinking and relationships with ideas. Here we attempt one dimension of a tentative classification to support projects exploring opportunities for qualitative displays within design.

This blog post is a slightly edited version of a late-breaking work submission presented at CHI’17, May 6–11, 2017, Denver, CO, USA, and published in the CHI Extended Abstracts at http://dx.doi.org/10.1145/3027063.3053165

Download this article as a PDF.

Water trapped in a train carriage door is a form of qualitative display of the train’s acceleration, deceleration, and inertia.

Introduction

Outside of the digital, we largely live and think and act and feel in response to, and in dialogue with, the perceived qualities of people, things and phenomena, and the relationships between them, rather than their number.

Much of our experience of—and meaning-making in—the real world is qualitative rather than quantitative. How friendly was she? How tired do I feel right now? Who’s the tallest in the group? How windy is it out there? Which route shall we take to work? How was your meal? Which apple looks tastier? Which piece of music best suits the mood? Do I need to use the bathroom? We deal readily with quantities of concrete things—two coffees, half a biscuit, three children—but only rarely with quantities of abstract concepts: 0.5 loves, or 6.8 sadnesses.

And yet, quantification has become the default mode of interaction with technology, of display of information, and of interfaces which aim to support decision-making and behaviour change in everyday life [27]. We need not elaborate here the phenomena of the quantified self [36, 42] and personal informatics more widely [24, 12], except to note the prevalence of numerical approaches (Figure 1) and the relative unusualness of non-numerical, pattern-based forms (Figure 2).

Figure 1: A typical form of quantitative interface: a Fitbit’s display of number of steps taken.
 

Figure 2: The Emulsion activity tracker, by Norwegian design studio Skrekkøgle, contains two immiscible liquids. Movement splits the colored liquid into smaller drops, making patterns.
 

But what might we be missing through this focus on quantification? It seems as though there might be opportunities for human-computer interaction (HCI) to explore forms of qualitative display and interface, as an approach to information presentation and interaction, as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency with systems.

In this article, we discuss qualitative displays and interfaces, and attempt one dimension of a tentative classification supporting design projects exploring this space.

Leaves as a qualitative interface for the wind

What could qualitative displays and interfaces be?

Here we define a qualitative display as being a way in which information is presented primarily through representing qualities of phenomena; a qualitative interface enables people to interact with a system through responding to or creating these qualities. ‘Displays’ are not necessarily solely visual—obvious to say, perhaps, but not always made explicit.

Before exploring some examples, we will look at some theoretical issues. The terms ‘qualitative interface’ or ‘qualitative display’ are not commonly used outside of some introductory human factors textbooks, but forms of interface along these lines are found in lots of projects at CHI, TEI, DIS, Ubicomp (all academic human-computer interaction conferences) and other venues, without authors explicitly drawing our attention to the concept—it is perhaps just too obvious and too broad to merit specific comment in HCI and interaction design research. But, assuming the idea does have value, what are some characteristics?

A human face is a qualitative interface, perhaps the earliest we encounter [e.g. 40], along with the voice. We learn to read and interpret emotions in others’ expressions, to recognize commonalities and differences across people, to make inferences about internal and external factors affecting the person, and to monitor the effects we or others are having on that person. We understand that the face and voice, and our ability to read them, are abstractions, interpretations, not perfect knowledge, but a model which enables us to make decisions in conjunction with our reading of our own emotions.

In a sense, the whole world, as we perceive it, is a very complex qualitative interface. The most accurate model of a phenomenon is the phenomenon itself, but it is only useful to us to the extent that we can understand what we are observing, detect the patterns we need to, and recognize that we are constructing the ‘reality’ we perceive. We are always creating a model [14] and that model is necessarily not reality itself; all displays of information are representations of a simplified model of phenomena in the world. Levels of indexicality [32], drawing on Charles Peirce’s semiotics, are relevant here, addressing the “causal distance” between the phenomenon and how it is displayed.

One advantage of interfaces seeking to provide a qualitative display is that they have the potential to enable the preservation of at least some of the complexity of real phenomena—representing complexity without attenuating variety [2]—even if we do not pay attention to it until we actually need to, in much the same way as certain phenomena in the real world become salient only when we need to deal with them. Looking out of the window or opening the door to see and feel and hear what the weather is like outside presents us with complex phenomena, but we are able to interpret what actions we need to take, in a more experientially salient way than looking at some numbers on a weather app.

Figure 4: It’s easy to imagine the feel of the wind on ourselves when we watch this scarf tied around a lamp post flapping in the breeze. Figure 5: A windsock gives us more sense of the wind’s qualities than a numerical display.
 

The feel of the wind on our skin, or watching the wind affect the environment, gives us a better sense of whether we need a scarf or coat than knowing the quantitative value of the wind speed and direction (Figures 3, 4 and 5). We can see, hear and feel not just wind speed and direction, but other qualities of it—is it continuous? in short gusts? damp, dry?
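As a rough illustration of this contrast (a hypothetical sketch of ours, not from the original paper; the thresholds and wording are invented, loosely inspired by the Beaufort scale), the same wind reading can be rendered as a bare number or as an experiential description:

```python
# A single wind-speed reading presented two ways: quantitatively,
# as a number with units, and qualitatively, as a description of
# what the wind is like and what it might mean for the observer.

def quantitative_display(speed_mph: float) -> str:
    """The weather-app style presentation: a precise but abstract number."""
    return f"{speed_mph:.1f} mph"

def qualitative_display(speed_mph: float) -> str:
    """An experiential presentation. Thresholds are illustrative only,
    loosely based on the Beaufort scale."""
    if speed_mph < 1:
        return "calm: smoke rises vertically"
    elif speed_mph < 8:
        return "a light breeze: leaves rustle"
    elif speed_mph < 25:
        return "windy: you may want a coat"
    else:
        return "a gale: whole branches in motion"

print(quantitative_display(12.0))
print(qualitative_display(12.0))
```

The qualitative version discards precision but gains actionability: the reader does not need to remember what 12 mph feels like.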

Qualitative displays could enable us to learn to recognize patterns in the world (and in data sets), and the characteristics of state changes, similarly to benefits identified in sonification research [35]. We should consider that ‘qualitative’ does not simply imply the absence of numbers. The examples we use in this paper might involve elements that could easily be quantified (rain drops, ink in a pen) but are given meaning through their display in a way that emphasises a quality or characteristic of the phenomenon. We recognise that this is potentially an ambiguous area, and are open to evolving the concept.

A possible spectrum of one dimension of qualitative displays: directness of connection

Here’s a tentative spectrum of one dimension of qualitative displays, relating phenomena to the display in terms of how directly they are connected.

(Levels 0–1 involve direct use of a real-world phenomenon in the display; from about Level 2 up to Level 5, they involve increasing degrees of translation or transduction of the phenomena. This parallels ideas in indexical visualisation [32] and embedded data representation [41] in terms of ‘situatedness’ or causal distance to phenomena.)

  • Level 0: The phenomenon itself ‘creates’ the display directly
  • Level 1: The display is an ‘accidental’ side-effect of the phenomenon
  • Level 2: The side-effect is ‘incorporated’ into a display that gives it meaning
  • Level 3: The display is a designed side-effect of the phenomenon
  • Level 4: Some minor processing of the phenomenon creates the display
  • Level 5: Major processing of the phenomenon creates the display
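One hypothetical way to make the spectrum concrete for classification exercises (for instance, tagging candidate examples in a studio project) is as a small data structure. This Python sketch is ours; the level names are invented for illustration, with the examples drawn from the figures below:

```python
from enum import IntEnum

class Directness(IntEnum):
    """One dimension of qualitative displays: how directly the
    phenomenon is connected to the display (Levels 0-5)."""
    PHENOMENON_CREATES_DISPLAY = 0  # raindrops on a translucent umbrella
    ACCIDENTAL_SIDE_EFFECT = 1      # footprints in the snow
    SIDE_EFFECT_GIVEN_MEANING = 2   # 'Clean Me' written in dust
    DESIGNED_SIDE_EFFECT = 3        # IceAlert reflectors
    MINOR_PROCESSING = 4            # Live Wire (Dangling String)
    MAJOR_PROCESSING = 5            # Powerchord sonification

def involves_transduction(level: Directness) -> bool:
    """Levels 0-1 use the phenomenon directly; Levels 2-5 involve
    increasing degrees of translation or transduction."""
    return level >= Directness.SIDE_EFFECT_GIVEN_MEANING

# Example classifications (interpretations, as the boundaries
# depend on the observer; see discussion below the figures)
examples = {
    "translucent umbrella": Directness.PHENOMENON_CREATES_DISPLAY,
    "worn patch on a map": Directness.ACCIDENTAL_SIDE_EFFECT,
    "storm glass": Directness.SIDE_EFFECT_GIVEN_MEANING,
    "Powerchord": Directness.MAJOR_PROCESSING,
}
```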

Figure 6: Some examples of displays from Levels 0, 1 and 2.
Level 0: The pattern of raindrops hitting a translucent umbrella—frequency, coverage, and sound—directly creates a ‘rain display’ for the user, providing insight into the current state and enabling decisions about whether the umbrella is still needed. City lights create a display showing the shape of the city’s districts and an indicator of population density. Water trapped in a train carriage window moves as the train ac-/de-celerates, creating a dynamic display of the train’s motion. A transparent pen is a physical progress bar for the amount of ink remaining—it could be quantified, but it is perhaps the quality of being not-yet-run-out which matters to the user.
Level 1: A worn patch on a map accidentally provides a display of ‘you are here’. Use marks [5] from previous users demonstrate how to use a swipe-card for entry to a building. A spoon worn through decades of use is an accidental display of the way in which it has been used [31]. Footprints in the snow ‘accidentally’ provide a display of previous walkers’ paths.
Level 2: A ‘This Color For Best Taste’ label gives ‘meaning’ to the colour of a mango’s skin for the consumer (Photo used with permission of Reddit user /u/cwm2355). Writing ‘Clean Me’ or other messages in dust on a car gives meaning to the dusty property. Admiral Robert FitzRoy’s Storm Glass, as used on the voyage of the Beagle (1831–6), incorporates crystals whose changing appearance was believed to enable weather forecasting (Photo: ReneBNRW, Wikimedia Commons, public domain dedication). George Merryweather’s Tempest Prognosticator (1851) [30] incorporates “a jury of philosophical councillors”, 12 leeches whose movement on detecting an approaching storm causes a bell to ring (Photo: Badobadop, Wikimedia Commons, CC-BY-SA).
Figure 7: Some examples of displays from Levels 3, 4 and 5.
Level 3: IceAlert is designed so that freezing temperatures cause the blue reflectors to rotate to become visible. A ‘participatory bar chart’ by Dan Lockton, along the lines of [22, 33, 16], is designed so that ‘voting’ increases the visible height of the bar, though the votes are not numbered. A non-numerical weighing scale by Chang Hee Lee is designed so that liquid trapped under glass changes shape. A toilet stall door lock is designed so that the display rotates from ‘Vacant’ to ‘Engaged’—the position of the lock itself gives us a display of actionable information.
Level 4: Chronocyclegraphs (1917) by Frank and Lillian Gilbreth, tracing manual workers’ movements [10] (Photo from [15], Archive.org, out of copyright). Live Wire (Dangling String) by Natalie Jeremijenko (1995) [39] moved a wire in proportion to local network traffic. Melbourne Mussel Choir, also by Natalie Jeremijenko, with Carbon Arts [6], uses mussels with Hall effect sensors to translate the opening and closing of their shells into music. Availabot (2006), by Schulze & Webb, later BERG [3], is a USB puppet which “stands to attention when your chat buddy comes online”.
Level 5: Powerchord by Dan Lockton [29] provides real-time sonification of electricity use, translating it into birdsong or other ambient sound. Immaterials: Ghost in the Field by Timo Arnall [1] visualizes “the three-dimensional physical space in which an RFID tag and a reader can interact with each other”. Ritual Machine 2 by the Family Rituals 2.0 project [23] uses patterns on a flip-dot display to visualize the countdown to a shared event for two people. Tempescope by Ken Kawamoto [21] visualizes weather conditions elsewhere in the world by re-creating them in a tabletop display (Photo from the Tempescope Press Kit).
 

The boundaries between levels here depend on observers’ interpretations of what is signified (whether an effect is accidental or deliberate is a common question in design (teleonomy [25])). Nevertheless, this spectrum permits a classification of some examples and is being applied by the authors in undergraduate design studio projects. We note the absence of screen-based examples: this is not deliberate, and we welcome suggestions of relevant examples. There are many intersecting research areas we aim to explore; within current HCI research, the most relevant are data physicalisation, embedded data representation, tangible interaction, sonification, and glanceable displays.

The work of Yvonne Jansen, Pierre Dragicevic and others [20] in data physicalisation, including compilation of examples, and embedded data representation [41], provides us with many instances of qualitative display, mostly at what we are calling Levels 2–5; likewise, the development of ubiquitous computing, tangible interaction and tangible user interfaces [39, 18, 17], and Hiroshi Ishii and colleagues’ subsequent vision of radical atoms [19], offer a huge set of projects, many of which provide qualitative interfaces for data or system interaction (usually at Levels 4–5).

Sonification [35] and glanceable displays [e.g. 9, 34] also offer diverse sets of examples, often using non-numerical representation, again largely at Levels 4–5. As noted earlier, qualitative does not just mean non-quantitative, and the boundaries may be blurred: if a sonification directly maps numerical values to tones, is it much different from an unlabelled line chart? Or are sparklines [37], for example, a way of turning quantitative data into a form of qualitative presentation?
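The comparison with an unlabelled line chart can be made concrete: a basic parameter-mapping sonification is often just a linear rescaling of readings onto a frequency range, structurally the same operation a chart performs onto vertical position. A minimal sketch, assuming a simple linear mapping (the function name and pitch range are illustrative, not taken from any cited system):

```python
# A minimal sketch of parameter-mapping sonification: each numerical
# reading is linearly rescaled onto a pitch range, just as a line chart
# rescales readings onto pixel heights.

def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
    """Linearly map numerical readings onto a frequency range (Hz)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid division by zero for constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

readings = [0.2, 0.5, 0.9, 0.4]
pitches = map_to_pitch(readings)  # lowest reading -> 220 Hz, highest -> 880 Hz
```

Whether listeners experience the resulting tones as any more ‘qualitative’ than the equivalent unlabelled chart is exactly the open question raised above.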

Even with a quantitative display, how a person interprets it may have a qualitative dimension: Figure 8 shows an electricity monitor used by a study participant [28] who had accidentally set it to display kg CO2/day equivalent; this “meant nothing” to her, but she interpreted the display such that “>1” meant “expensive”. ‘Annotations’ of values, as users construct their own meaning [11], may fit here; the aim, however, must be to avoid the reductiveness of a limited, fixed set of ‘qualitative’ labels [13].
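The participant’s reading of “>1” as “expensive” is, in effect, a self-constructed threshold collapsing a numeric scale into a qualitative one. A hypothetical sketch of that kind of user-defined annotation (the threshold and labels are illustrative, echoing her own rule rather than any actual monitor feature), which also shows how a fixed label set can produce exactly the reductiveness warned against:

```python
# Hypothetical sketch: a user-defined threshold turns a quantitative
# reading (here, a monitor's kg CO2/day figure) into a qualitative one.
# Threshold and labels are illustrative, based on the participant's own
# rule that ">1" meant "expensive".

def qualitative_reading(value, threshold=1.0, labels=("fine", "expensive")):
    """Collapse a numeric reading into one of two user-chosen labels."""
    return labels[1] if value > threshold else labels[0]

label = qualitative_reading(1.4)  # readings above the threshold read as "expensive"
```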

Figure 8: A quantitative electricity display that was used ‘qualitatively’ by a householder (see text). Figure 9: An example of MONIAC, the Phillips Machine, at the Reserve Bank of New Zealand (Photo by Kaihsu Tai, Wikimedia Commons, public domain dedication).
 

Analogy and metaphor are important here, and the almost-forgotten field of analogue computing offers an intriguing perspective. By “build[ing] models that created a mapping between two physical phenomena” [7], some analogue computers effectively operated as ‘direct’ displays of an analogue of the ‘original’ phenomenon: a kind of meta-Level 2 qualitative display. The 1949 Phillips Machine [4] (Figure 9), for example, performed operations on flows of coloured water to model the economy of a country, enabling an interactive visualization of a system in operation as it operates (there are parallels with Bret Victor’s and Nicky Case’s work on explorable explanations [38, 8], and with the development of visual programming languages).
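The ‘mapping between two physical phenomena’ can be sketched computationally. Below is a toy stock-and-flow simulation in the spirit of (not reproducing) the Phillips Machine: a tank’s water level stands in for a stock of money, and the visible level is itself the display. All parameter names and values are illustrative assumptions, not taken from the actual machine:

```python
# Toy stock-and-flow analogue in the spirit of the Phillips Machine:
# a tank's water level is simultaneously the model state and the display.
# All parameters are illustrative, not drawn from the real machine.

def simulate(steps, inflow=5.0, drain_rate=0.1, level=20.0):
    """Each step, a fixed inflow enters and a fraction of the stock
    drains out; the returned levels are what a viewer would watch."""
    levels = []
    for _ in range(steps):
        level = level + inflow - drain_rate * level
        levels.append(level)
    return levels

history = simulate(100)
# The level settles where inflow balances drainage (inflow / drain_rate),
# visible to the observer as the water finding its resting height.
```

The point of the analogy is that nothing here needs to be read off as a number: the system’s behaviour is legible directly in the rising and settling of the level, as with the coloured water in the Phillips Machine.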

Other pertinent areas of research and inspiration are synaesthesia and mental imagery: sensory overlaps, fusions and mappings offer a fertile field for exploring qualitative displays of phenomena.

Conclusion: What use is all of this?

We’re interested in using qualitative displays and interfaces to support decision-making, behaviour change and new practices through enabling new forms of understanding: as an aid to help people explore their own and each other’s thinking, and specifically to help people understand their relationships and agency within the systems around them [26]. Projects using qualitative displays are unlikely to be simple de-quantified ‘conversions’ of existing numerical displays; instead, the aim will be to use the approach to represent and translate phenomena appropriately, in ways which enable users to construct meaning, afford new ways of understanding, enable nuance, and avoid reductiveness.

The spectrum of ‘directness’ introduced here provides a possible starting point for this work, giving a framework for analysing examples and suggesting ways of handling phenomena to be displayed; it is currently being used by the authors to brief an undergraduate design studio project on materialising environmental phenomena to reveal hidden relationships. We welcome the opportunity to learn from others who have thought about these kinds of ideas, to inform our future explorations of this area.

Acknowledgements

Thanks to Dr Delfina Fantini van Ditmar, Dr Laura Ferrarello, Flora Bowden, Gyorgyi Galik, Stacie Rohrbach, Ross Atkin, Shruti Grover, Veronica Ranner and Dixon Lo for discussions in which some of these ideas were formulated and explored, and to the CHI reviewers. Unless otherwise noted, photos are by the authors.

References

1. Timo Arnall. 2014. Exploring ‘immaterials’: Mediating design’s invisible materials. International Journal of Design 8, 2: 101—117. http://www.ijdesign.org/ojs/index.php/IJDesign/article/view/1408

2. W. Ross Ashby. 1956. An Introduction to Cybernetics. Chapman & Hall, London.

3. BERG. 2008. Availabot. Retrieved Jan 10, 2017 from http://berglondon.com/projects/availabot/

4. Chris Bissell. 2007. The Moniac: A Hydromechanical Analog Computer of the 1950s. IEEE Control Systems Magazine 27, 1: 59–64. https://dx.doi.org/10.1109/MCS.2007.284511

5. Brian Burns. 2007. From Newness to Useness and Back Again: A review of the role of the user in sustainable product maintenance. Retrieved June 1, 2009 from http://extra.shu.ac.uk/productlife/Maintaining%20Products%20presentations/Brian%20Burns.pdf

6. Carbon Arts. 2013. Melbourne Mussel Choir. Retrieved Jan 10, 2017 from http://www.carbonarts.org/projects/melbourne-mussel-choir/

7. Charles Care. 2006–7. A Chronology of Analogue Computing. The Rutherford Journal 2. Retrieved Jan 10, 2017 from http://www.rutherfordjournal.org/article020106.html

8. Nicky Case. 2014. Explorable Explanations. Blog post (Sept 8, 2014). Retrieved Jan 10, 2017 from http://blog.ncase.me/explorable-explanations/

9. Sunny Consolvo, Predrag Klasnja, David W. McDonald, Daniel Avrahami, Jon Froehlich, Louis LeGrand, Ryan Libby, Keith Mosher, and James A. Landay. 2008. Flowers or a Robot Army? Encouraging Awareness & Activity with Personal, Mobile Displays. In Proceedings of 10th International Conference on Ubiquitous Computing (UbiComp’08): 54—63. https://doi.org/10.1145/1409635.1409644

10. Régine Debatty. 2012. The Chronocyclegraph. Blog post, We Make Money Not Art (May 6, 2012). Retrieved Jan 10, 2017 from http://we-make-money-not-art.com/the_chronocyclegraph/

11. Paul Dourish. 2004. What we talk about when we talk about context. Personal and Ubiquitous Computing 8, 1: 19–30. http://dx.doi.org/10.1007/s00779-003-0253-8

12. Chris Elsden, David Kirk, Mark Selby, and Chris Speed. 2015. Beyond Personal Informatics: Designing for Experiences with Data. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 2341—2344. https://dx.doi.org/10.1145/2702613.2702632

13. Delfina Fantini van Ditmar and Dan Lockton. 2016. Taking the Code for a Walk. Interactions 23, 1: 68—71. https://dx.doi.org/10.1145/2855958

14. Heinz von Foerster. 1973. On constructing a reality. In F.E. Preiser (Ed.). Environmental Design Research Vol. 2. Dowden, Hutchinson & Ross, Stroudberg: 35—46. Reprinted in Heinz von Foerster. 2003. Understanding Understanding—Essays on Cybernetics and Cognition. Springer-Verlag, New York: 211—228. https://dx.doi.org/10.1007/0-387-21722-3_8

15. Frank Gilbreth and Lillian Gilbreth. 1917. Applied Motion Study: a collection of papers on the efficient method to industrial preparedness. Sturgis & Walton, New York. Retrieved Jan 10, 2017 from https://archive.org/details/appliedmotionstu00gilbrich

16. Hans Haacke. 2009. Lessons Learned. Tate Papers 12. Retrieved Jan 10, 2017 from http://www.tate.org.uk/download/file/fid/7265

17. Eva Hornecker and Jacob Buur. 2006. Getting a grip on tangible interaction: a framework on physical space and social interaction. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’06): 437—446. https://dx.doi.org/10.1145/1124772.1124838

18. Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’97): 234—241. https://dx.doi.org/10.1145/258549.258715

19. Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. Interactions 19, 1: 38—51. https://dx.doi.org/10.1145/2065327.2065337

20. Yvonne Jansen, Pierre Dragicevic, Petra Isenberg, Jason Alexander, Abhijit Karnik, Johan Kildal, Sriram Subramanian, and Kasper Hornbæk. 2015. Opportunities and Challenges for Data Physicalization. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’15): 3227—3236. https://dx.doi.org/10.1145/2702123.2702180

21. Ken Kawamoto. 2012. Prototyping “Tempescope”, an ambient weather display. Blog post (Nov 15, 2012). Retrieved Jan 10, 2017 from http://kawalabo.blogspot.jp/2012/11/prototyping-tempescope-ambient-weather.html

22. Lucy Kimbell. 2011. Physical Bar Charts. Retrieved Jan 10, 2017 from http://www.lucykimbell.com/LucyKimbell/PhysicalBarCharts.html

23. David Kirk, David Chatting, Paulina Yurman, and Jo-Anne Bichard. 2016. Ritual Machines I & II: Making Technology at Home. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’16): 2474—2486. http://dx.doi.org/10.1145/2858036.2858424

24. Ian Li, Anind Dey, and Jodi Forlizzi. 2010. A stage-based model of personal informatics systems. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’10): 557—566. https://dx.doi.org/10.1145/1753326.1753409

25. Dan Lockton. 2012. POSIWID and Determinism in Design for Behaviour Change. Social Science Research Network. http://dx.doi.org/10.2139/ssrn.2033231

26. Dan Lockton. 2016. Designing Agency in the City. In Lacey Pipkin (Ed.), The Pursuit of Legible Policy: Agency and Participation in the Complex Systems of the Contemporary Megalopolis. Buró-Buró, Mexico City: 53–61. http://legiblepolicy.info/book/Legible-Policies_BB.pdf

27. Dan Lockton, David Harrison, and Neville Stanton. 2010. The Design with Intent Method: A design tool for influencing user behaviour. Applied Ergonomics 41, 3: 382–392. http://dx.doi.org/10.1016/j.apergo.2009.09.001

28. Dan Lockton, Flora Bowden, Catherine Greene, Clare Brass, and Rama Gheerawo. 2013. People and energy: A design-led approach to understanding everyday energy use behaviour. In Proceedings of EPIC 2013: Ethnographic Praxis in Industry Conference: 348–362. https://dx.doi.org/10.1111/j.1559-8918.2013.00029.x

29. Dan Lockton, Flora Bowden, Clare Brass, and Rama Gheerawo. 2014. Powerchord: Towards ambient appliance-level electricity use feedback through real-time sonification. In Proceedings of UCAmI 2014: 8th International Conference on Ubiquitous Computing & Ambient Intelligence: 48—51. https://dx.doi.org/10.1007/978-3-319-13102-3_10

30. George Merryweather. 1851. An essay explanatory of the Tempest Prognosticator in the building of the Great Exhibition for the Works of Industry of All Nations. John Churchill, London. Retrieved Jan 10, 2017 from https://archive.org/details/b2804163x

31. Bruno Munari. 1971. Design as Art (trans. Patrick Creagh). Pelican Books, London.

32. Dietmar Offenhuber and Orkan Telhan. 2015. Indexical Visualization—the Data-Less Information Display. In Ulrik Ekman, Jay David Bolter, Lily Diaz, Morten Søndergaard, and Maria Engberg (eds.). Ubiquitous Computing, Complexity and Culture: 288—303. Routledge, New York.

33. Jennifer Payne, Jason Johnson, and Tony Tang. 2015. Exploring Physical Visualization. In Jason Alexander, Yvonne Jansen, Kasper Hornbæk, Johan Kildal and Abhijit Karnik (Eds.), Exploring the Challenges of Making Data Physical, Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15). http://architectures.danlockton.co.uk/wp-content/2015-chi2015workshop-physvis.pdf

34. Tim Regan, David Sweeney, John Helmes, Vasillis Vlachokyriakos, Siân Lindley, and Alex Taylor. 2015. Designing Engaging Data in Communities. In Proceedings of the SIGCHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’15): 271–274. http://dx.doi.org/10.1145/2702613.2725432

35. Stefania Serafin, Karmen Franinovic, Thomas Hermann, Guillaume Lemaitre, Michal Rinott, and Davide Rocchesso. 2011. Sonic Interaction Design. In Thomas Hermann, Andy Hunt, and John Neuhoff (Eds.), The Sonification Handbook. Logos, Berlin: 87–110. http://sonification.de/handbook/index.php/chapters/chapter5/

36. Melanie Swan. 2013. The quantified self: fundamental disruption in big data science and biological discovery. Big Data 1, 2: 85—99. https://dx.doi.org/10.1089/big.2012.0002

37. Edward Tufte. 2001. The Visual Display of Quantitative Information (2nd ed.). Graphics Press, Cheshire, CT.

38. Bret Victor. 2011. Explorable Explanations. March 10, 2011. Retrieved Jan 10, 2017 from http://worrydream.com/ExplorableExplanations

39. Mark Weiser and John Seely Brown. 1995. Designing Calm Technology. Dec 21, 1995. Retrieved Jan 10, 2017 from http://www.ubiq.com/weiser/calmtech/calmtech.htm

40. Sherri C. Widen. 2013. Children’s Interpretation of Facial Expressions: The Long Path from Valence-Based to Specific Discrete Categories. Emotion Review 5, 1: 72–77. https://dx.doi.org/10.1177/1754073912451492

41. Wesley Willett, Yvonne Jansen, and Pierre Dragicevic. 2017. Embedded Data Representations. IEEE Transactions on Visualization and Computer Graphics 23, 1: 461—470. https://dx.doi.org/10.1109/TVCG.2016.2598608

42. Gary Wolf. 2010. The quantified self. Video (June 2010). Retrieved Jan 10, 2017, from https://www.ted.com/talks/gary_wolf_the_quantified_self

Design Students Explore Landscape Metaphors for Project Modeling

Delanie Ricketts and Dan Lockton

This article originally appeared on the Carnegie Mellon School of Design website

We often use landscapes as metaphors in everyday speech, particularly to talk about complex systems: understanding a complex information system as an “information landscape”, for example, helps convey the idea that such a system, like a landscape, is vast and encompasses many interacting variables. However, while landscape metaphors are common in speech (terms like “stakeholder landscape”, “lie of the land”, “ocean of possibilities”, “food desert”, even the word “field”), they have been used more rarely in visual applications.

On March 30th, 45 juniors from Carnegie Mellon University School of Design’s “Persuasion” class, taught by Michael Arnold Mages, Dan Lockton, and Stephen Neely, took part in a workshop exploring practically how physical and visual landscape metaphors could help elicit new insights about complex experiences: in this case, modeling and reflecting on group design projects. Facilitated by MA Design student and Research Assistant Delanie Ricketts and Assistant Professor Dan Lockton, as part of the School of Design’s new Imaginaries Lab, the workshop involved students collaboratively creating ‘landscape’ models representing projects they had worked on, using simple paper cut-outs of features such as hills, trees, weather, and people. Each group used the elements in different ways to represent different aspects of their projects, creating ‘timeline’ landscapes in both two- and three-dimensional formats.

Some projects started with rocky beginnings, represented by different cones or hills, in order to show how difficult that part of the project was. Other projects started with trees, rivers, and stars, representing periods of calm ideation, research, or general feelings of optimism. When projects encountered new difficulties later on, many groups represented these periods with lightning, rain, hills, and cones. Several groups used (and came up with names for) metaphors within the general landscape metaphor to represent specific parts of their project experiences, such as a “plateau of exhaustion” before the project came to an end.

Delanie’s previous prototypes of the landscape metaphor visuals, developed as part of her research assistantship project, focused on how they could facilitate individual reflection on one’s own career path. While people found the metaphor and elements a useful and creative reflection tool, several said it was difficult to show how their perspective changed over time within a two-dimensional format. In this second iteration of the elements, we aimed to provide greater variation as well as to enable three-dimensional expression. We also wanted to explore how the metaphor could be used to think through a different topic (project planning and reflection rather than careers), and in a group rather than an individual context.

Students’ responses to this second iteration of the landscape elements, applied to group projects rather than individual career paths, suggested they found the process fun and creative, though also abstract. Many participants commented that the tool helped them understand their project and their teammates’ perspectives better, especially in terms of stress, productivity, and overall emotional satisfaction at different points throughout a project’s lifetime. The format is more useful for surfacing (and reconciling) overarching understandings than for probing deeper insights about the specifics of complex experiences; but, in triggering discussion, it has value in enabling members of a team to understand and interrogate each other’s perspectives and mental models of a situation (echoing ideas from organizational systems thinking experts such as Peter Senge).

We aim to develop the landscapes kit further, through iterations with application in individual reflection, project planning, and research settings.

Many thanks to Chris Stygar, Josiah Stadelmeier, and the whole School of Design 3D Lab for their help in developing the materials for the project, the Design graduate students and juniors for taking part in the different stages of the project, and Manya Krishnaswamy for helping facilitate. Thanks to Joe Lyons for putting the article on the School website.

Mental Landscapes

Environments Studio: Materializing the Invisible

Timelapse of studio, by Jasper Tom

In Materializing the Invisible, we considered invisible and intangible phenomena—the systems, constructs, relationships, infrastructures, backends and other entities, physical and conceptual, which comprise or influence much of our experience of, and interaction with, environments both physical and digital. ‘The invisible’ here is potentially everything from how the building’s heating system works, to the algorithms behind targeted ads, to who’s friends with whom, to where corruption is occurring in government, to where your IoT fridge sends the data it collects, to people’s mental imagery of time, to the electricity use of devices, to networks of cameras and sensors, to how political decisions are made. It also potentially includes things that happen at scales or in dimensions we can’t directly comprehend, from planetary processes such as climate, to the interaction of electromagnetic fields, to the microscopic. And things that happen, that enable day-to-day functioning of our lives, but we don’t know much about. Where does our food come from? Where does our waste water go? What route did that package take to get to us?

The process of revealing the invisible can improve understanding, help people explore their own thinking and relationships with these complex concepts, highlight problems, power structures and inequalities, reveal hidden truths, connect people better to the world around them, and enable people to act. It is not necessarily about visualizing the invisible—it can be about making it audible, tangible, smellable, or otherwise experienceable: we explored techniques from fields including data visualization, sonification, data physicalization, ubiquitous computing, tangible interaction, analog computing, qualitative displays, and the study of synaesthesia to create ways to materialize these invisible phenomena.

More details, including background reading, in the syllabus.

As a starting exercise we examined some ‘invisible’ and unknown things within the building itself (Margaret Morrison Carnegie Hall), noting questions and ideas with Post-It notes in situ. These ranged from questions about who has access to certain rooms or controls, to what some of the controls are in the first place. There were also traces of action and use—patterns which might be invisible in the sense of not being paid attention to, but nevertheless present in the use of the building.

The class project was to choose a phenomenon which is ‘invisible’ within a physical, digital or hybrid environment, find a way of getting access to it, and design and build / make / create a way of materializing the phenomenon, making it accessible to people more widely. As a group we brainstormed different phenomena which might be investigable, and possible forms of representation.

Ji Tae Kim’s project Whitespace looked at the invisible aspects of communication in text messaging, following on from his previous project Fear of Missing Out. Whitespace explores ways to materialize and express “rich contextual and verbal cues” through “an intuitive extension to instant messaging”. Working prototypes used copper tracks, Bare Conductive ink and Touch Board, and Arduino.

Jasper Tom and Chris Perry’s project Kairos examined “an invisible phenomenon ingrained in everyday life”: the passage of time in a space, specifically around working at a desk. The project began with the question “Where did the time go?” and the idea of desk legacy (the patterns of use left by a previous user of a desk in a shared workspace), informed by analysis of timelapse video of the studio. Together with inspirations such as Daniel Rozin’s Wooden Mirror, MIT Tangible Media Group projects such as Daniel Leithinger’s work, and Tempur-Pedic foam, these led to a desk surface which could ‘play back’ the patterns of how it had been used, via an interface using wooden blocks. A working prototype of part of the surface used Arduino and servo motors to demonstrate the effect.

One interesting aspect discussed during Jasper and Chris’s presentation was how while evidence of physical work is often obvious in space, such as a painter’s palette, the evidence of digital work is often invisible—a slightly worn keyboard, perhaps, but little else.

Gilly Johnson and Ty Van de Zande worked together to explore aspects of human movement (dance and exercise), and the related issues of hydration and focus. Focus + Movement proposed a color-changing bodysuit which could work as part of a system with a water bottle, both to make the invisible patterns visible and to enable reflection. Gilly and Ty captured dancers’ movement using a Kinect connected to Max/MSP, and then simulated the bodysuit via After Effects.

Environments Studio: Design, Behavior and Social Interaction

Studying Pittsburgh's Greyhound Bus Station: Jasper Tom
Jasper Tom investigated patterns of people’s behavior in Pittsburgh’s Greyhound Bus Station
 

In this short introductory unit, we looked at ways in which the design of environments, and features within them, affects people’s behavior and interaction with each other. Design influences what people do, but often the ‘links’ are invisible or only apparent by their effects. Or, we notice them in passing, but do not take time to reflect on them or draw parallels across situations.

Studying the fear of missing out with messaging: Ji Tae Kim
Ji Tae (Joseph) Kim examined how the design of messaging and social media leads to ‘fear of missing out’ through unplugging himself for a week
 

For designers pioneering new approaches to creating environments for human experience, cultivating a kind of ‘hypersensitivity’ to noticing, and learning from, the ways in which design and behavior interact can be part of developing the attention to detail which will serve you well professionally. Details of the unit are in the syllabus.

Studying a pedestrian crossing: Chris Perry
Chris Perry observed the different ways in which people use a pedestrian crossing at the entrance to CMU, and how the design affects those actions
 

We started with quick observation exercises aimed at developing (or refreshing) a capacity for noticing, for paying attention to the ways in which people and environments affect each other. We looked around campus for instances of points of confusion, unintended uses, constraints, and disobedience in physical environment settings, and discussed how these effects manifest in different ways—what could we find? (Photos here by Chris Perry, Gilly Johnson, Jasper Tom, Ty Van de Zande and Dan Lockton.)

We examined ideas around how environments influence people, and are in turn influenced, both physically and digitally, from thigmotaxis to stigmergy, shearing layers and pace layers, fundamental attribution error and design for behavior change. We also thought about the practice of observation, noticing and deconstruction of people’s actions in different ways, and in different levels of detail. The project brief was around designing a way to do research in this field—designing a ‘probe’ rather than a solution to a problem:

  • Choose a situation where ‘design’ seems to be affecting people’s behavior in an environment (physical or digital)
  • Find a way of studying what’s going on—what patterns exist? In what different ways are people’s behavior affected?
  • Visualize (or otherwise communicate) what you find
  • (optional: suggest ways things could be different, if you feel they need to be)
  • Keep a blog of your process (photos, sketches, notes)

Here are the projects:

Comparing a coffee shop and a tea shop: Gilly Johnson


Gilly Johnson compared structural and systemic aspects of the atmosphere and experience in Coffee Tree Roasters in Shadyside, and Dobra Tea in Squirrel Hill, including the layout and spatial division, and emerging themes such as service and trust: full details of the project.


Fear of Missing Out: Ji Tae Kim



Ji Tae Kim examined how the design of messaging and social media leads to ‘fear of missing out’ through unplugging himself for a week: full details of the project.


Greyhound Station: Jasper Tom


Jasper Tom investigated how the design of Pittsburgh’s Greyhound Bus Station influences patterns of people’s behavior: full details of the project.


Managing information across environments: Ty Van de Zande


Ty Van de Zande looked at how people manage information such as to-do lists across physical and digital environments, and developed a framework for investigating this in a structured way: more details of the project.


How to Cross the Road: Chris Perry

Chris Perry observed the different ways in which people use a pedestrian crossing at Morewood Avenue and Forbes Avenue, at the entrance to CMU, and how the design affects those actions: more details of the project.