Pixel system overview.

Years

2013-2015

Thesis Advisors

Jon Froehlich, Allison Druin, Hasan Elahi

Tags

physical computing, gestural computing, multimodal computing, live programming, tangible interface

Pixel

Pixel is a proof-of-concept physical computing platform made to explore new workflows for creating interactive objects and systems that use electronic components to sense and control aspects of the physical world. The platform consists of one or more cube-shaped modules, called Pixels, that together form a tangible user interface (TUI), to which electronic sensors and actuators can be connected with "snap connectors" on each Pixel, plus a graphical programming environment (GPE) for mobile touchscreen devices (see figure below). The workflows supported by Pixel were tuned to draw on intuitive knowledge derived from everyday experience in the physical world rather than technical knowledge accessible only to some people.

Figure 1: Conceptual drawing of Pixel, consisting of one or more Pixels (left) and, optionally, a mobile device to program Pixel behavior with the graphical programming environment.

Generally, Pixel was designed to be modular so it can be easily moved within and distributed throughout environments, combined with materials, and embedded in objects. Its modular design also enables electronic sensor and actuator components to be quickly connected to and disconnected from individual modules (i.e., Pixels). This was a deliberate decision weighed against alternative approaches, such as embedding sensors and actuators directly into the modules. To retain Pixel’s general applicability, I avoided embedding special-purpose components and component-specific support at the circuit or code level.

Systems made with Pixel can interact with the physical environment through sensors—devices for detecting physical phenomena—and actuators—devices for altering the physical environment. Pixel provides three interfaces for defining relationships between sensors and actuators.

First, users can perform spatial gestures by holding one or two Pixels and moving them in certain patterns. The four gestures that each module can recognize are rest, swing, shake, and tap to another module (Figure 2).

Figure 2: The set of gestures that are recognized by Pixel.
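To suggest how these four gestures might be told apart in firmware, the sketch below classifies a window of accelerometer magnitudes. The thresholds, window size, and decision rules are assumptions for illustration, not Pixel’s actual recognizer.

    #include <cstdio>

    // Hypothetical gesture classifier: examines a window of accelerometer
    // magnitudes (in g) and picks one of Pixel's four gestures. All
    // thresholds are illustrative guesses, not Pixel's actual values.
    enum Gesture { REST, SWING, SHAKE, TAP };

    Gesture classify(const float mag[], int n) {
      float peak = 0;
      int lively = 0;  // samples noticeably above 1 g (gravity alone)
      for (int i = 0; i < n; i++) {
        if (mag[i] > peak) peak = mag[i];
        if (mag[i] > 1.4f) lively++;
      }
      if (peak < 1.1f) return REST;                // barely more than gravity
      if (peak > 3.0f && lively <= 2) return TAP;  // one sharp spike: contact
      if (lively > n / 3) return SHAKE;            // sustained oscillation
      return SWING;                                // a single broad arc
    }

    int main() {
      float still[4] = {1.0f, 1.0f, 1.0f, 1.0f};
      printf("%d\n", classify(still, 4));  // prints 0 (REST)
    }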

Spatial gestures are used solely for defining relationships between Pixels’ connected sensors and actuators. Notably, Pixel does not require an external tool to perform gestures. This turned out to be a key difference between Pixel and related gestural control schemes proposed in the literature, which commonly require an external device to initiate programming. Put simply, Pixel embeds gestural control throughout its modular interface, whereas comparable tangible programming systems at the time tended to isolate control in a single component of the tangible interface. Such TUIs were usually designed so users could carry the control element with them, yet still required users to physically visit each programmable element of the system with it.

Pixel’s second interface, the snap connection interface, was inspired by the MaKey MaKey. The MaKey MaKey, designed for creating custom controllers for computer software, allows electronic components to be easily connected with alligator clips and emphasizes the use of everyday construction materials (e.g., cardboard, tape, glue). Pixel’s snap interface offers comparable functionality but extends it to support controlling actuators, too. Each Pixel has two snap connection ports, supporting one sensor and one actuator. Components are connected to ports using removable "snaps," to which they can be attached with alligator clips (Figure 3).

Figure 3: Drawing of Pixel’s "snap connection interface." The component on the Pixel is the "port" to which the removable "snap" couples. The port’s two gray circles represent magnets that hold the snap onto the Pixel. The snap has magnets on the opposite side of that shown. The two circular copper features of the snap are designed specifically for connection with alligator clips.

To assist users with connecting components, magnets are embedded into snaps and ports. The magnets are oriented so connectors can only be snapped onto Pixels one way.

Pixel was one of the first modular control interfaces for programming distributed systems that deeply integrated with a graphical programming environment for small mobile touchscreen devices like smartphones and tablets. GPEs pose a basic design challenge for distributed systems with distributed control: they can draw a user’s attention to a central element of the system and distract from physical engagement with other elements of the distributed interface. Ultimately, I included the GPE because it is better suited to expressing programming intentions that have no gesture which intuitively corresponds to the intent and is simple to perform with one or two Pixels. In other words, the GPE provides functions for customizing the application-specific behavior of individual Pixels and their connected sensors and actuators. To preserve the freedom afforded by the modular design, I designed the GPE for small, portable, pocketable touchscreen devices, controlled with touch gestures that "directly" manipulate graphical representations of a Pixel’s actions.

To show how one can use Pixel to make everyday systems, three example scenarios are given in the following section.

Case Study: Comparison to Arduino Uno

The three scenarios below show how to build simple systems with Pixel.

Example 1: Making a Light Switch

This scenario shows how to build a simple light switch with both Arduino and Pixel. It is based on the common Blink example that serves as an introduction to Arduino; in contrast to Blink, it incorporates a physical switch to control a light rather than automatically toggling the light after a delay.
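For reference, a minimal Arduino sketch for this behavior might look like the following; the pin assignments and pull-up wiring are illustrative choices, not taken from the original materials.

    // Minimal Arduino sketch for the light switch (illustrative pin choices).
    const int SWITCH_PIN = 2;   // switch wired between pin 2 and ground
    const int LED_PIN = 13;     // on-board LED on the Uno

    void setup() {
      pinMode(SWITCH_PIN, INPUT_PULLUP);  // closed switch reads LOW
      pinMode(LED_PIN, OUTPUT);
    }

    void loop() {
      // Mirror the switch onto the LED: closed circuit -> light on.
      digitalWrite(LED_PIN, digitalRead(SWITCH_PIN) == LOW ? HIGH : LOW);
    }

Unlike the Pixel version below, this sketch must be written and uploaded from a computer before the circuit behaves as a switch.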

The materials required to build a light switch with Pixel are shown in Figure 8. No computer is needed to program the switch circuit.

Figure 8: Drawing of the materials needed to make the light switch.

As with Arduino, the electronic components must be assembled in a pattern that produces the expected behavior. A component is connected to a Pixel by clipping it to a snap connector with two alligator clips, then snapping the connector onto an input or output port. You connect the switch and LED in this way, as shown below.

Figure 9: Drawing of connecting the input switch (top) and output LED (bottom) for the light switch. The complete system is shown in Figure 10.

By default, each Pixel functions as a switch. That is, when its input port is active—when a connected circuit is closed—its output port actively powers the connected component. As a result, the light switch can be created entirely through direct physical action: as soon as the switch and LED are snapped onto a Pixel, the light switch is ready to use.

Figure 10: Drawing of the complete light switch.

In this case, no programming was required to make the light switch. There is no need to program the I/O relationship because each Pixel has only one input and one output, so their default relationship is set automatically. The "switch" behavior was chosen as the default because it provides immediate utility and can control a wide variety of electronic components. Example 2 shows the use of this primitive "switch" behavior (analogous to a control structure) in making a remote light switch.
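Conceptually, the mirroring logic that the Arduino user writes and uploads in the sketch above comes preinstalled on every Pixel. A sketch of that default, in the same Arduino-style C++, is below; the pin names are assumptions, and this is not Pixel’s actual firmware.

    // Hypothetical sketch of a Pixel's default "switch" behavior: the output
    // port simply mirrors the input port on every pass through the loop.
    const int INPUT_PORT = 2;    // assumed pin behind the input snap port
    const int OUTPUT_PORT = 9;   // assumed pin behind the output snap port

    void setup() {
      pinMode(INPUT_PORT, INPUT_PULLUP);
      pinMode(OUTPUT_PORT, OUTPUT);
    }

    void loop() {
      bool active = (digitalRead(INPUT_PORT) == LOW);  // circuit closed?
      digitalWrite(OUTPUT_PORT, active ? HIGH : LOW);  // power the actuator
    }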

Example 2: Making a Remote Light Switch

This scenario extends the previous one so the light can be turned on and off remotely.

Adapting the light switch made with Pixel into a remote switch requires only one additional Pixel.

Figure 14: Drawing of additional Pixel needed for remote switch.

The adaptation can be done in three steps using only physical actions (Figure 15). First, you swing the additional Pixel in a downward motion (top left). Next, you tap the Pixel just swung to the other Pixel in the light switch circuit (top right). Finally, you unsnap the switch from the light switch Pixel and snap it onto the Pixel you swung (bottom).

Figure 15: Drawing of the sequence of actions to adapt the light switch to be a remote light switch.

The complete remote switch is shown in the figure below.

Figure 16: Drawing of the complete remote light switch.

The process of adapting the light switch into a remote switch with Pixel is relatively simple, involving little beyond moving the switch to a second module. The additional steps are minimal—swinging a module and tapping it to another—and the actions bear an intuitive relationship to their effects: swinging a module engages it, indicating that it is the subject of attention, and tapping it to another module indicates that the two are being considered in relation to each other.
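Under the hood, the swing-and-tap sequence presumably records which module should respond to which. The sketch below imagines that binding as a message-passing rule; the module IDs, the radio stub, and the message format are all assumptions for illustration.

    #include <cstdint>
    #include <cstdio>

    // Hypothetical binding created by swing-and-tap: the swung module (which
    // carries the switch) is the stimulus; the tapped module is the response.
    struct Binding {
      uint8_t stimulusId;
      uint8_t responseId;
    };

    Binding binding = {1, 2};  // recorded when the tap gesture is detected

    // On a response module: a remote "input changed" event from its bound
    // stimulus module is treated exactly like a local input-port change.
    void onRadioMessage(uint8_t fromId, bool inputActive) {
      if (fromId == binding.stimulusId) {
        printf("output port %s\n", inputActive ? "on" : "off");
      }
    }

    // Loopback stub standing in for the stimulus module's radio broadcast.
    void radioBroadcast(uint8_t fromId, bool inputActive) {
      onRadioMessage(fromId, inputActive);
    }

    int main() {
      radioBroadcast(1, true);   // switch closed on the remote module
      radioBroadcast(1, false);  // switch opened
    }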

Example 3: Making a "Scarecrow Tree"

This scenario is based on a suggestion from a participant in a Pixel evaluation (discussed in Chapter 5). It was chosen to illustrate a realistic everyday situation in which an information system could be usefully and uniquely applied. The participant characterized the problem as follows.

"Here’s a problem I have. I have a cherry tree. Just when the cherries get real ripe, the birds come and eat them. Now, if I can make some sounds or some kind of flashes or something—a scarecrow, right?—then the birds will not come. Now even a scarecrow that you have, it has to have moving parts on it, or, they say, you can buy a plastic owl and put it somewhere, but if the plastic owl is not moving at all then it won’t work. The birds will learn and it’s useless. So, if one had these kinds of things, and one of them has a motion detector, gets a motion from the birds around or something, then it can signal the other ones which would be on several branches in the tree. Something like that."

This scenario is illustrated in Figure 17.

Figure 17: Drawing of the "scarecrow tree" problem as described by a participant in an evaluation of Pixel.

Below, potential solutions are illustrated for Arduino and Pixel, extending the remote light switches presented in the previous scenarios. Note that while participant P1 envisioned this scenario for Pixel, the Arduino solution is included for consistency with the previous examples.

Making the scarecrow tree with Pixel can be done with five Pixels, one snap connector, an input switch, two alligator clips, and a mobile phone with Pixel’s graphical programming environment (Figure 23).

Figure 23: The materials used to make the scarecrow tree with Pixel.

One Pixel must be chosen to function as the "remote" that causes light and sound to be generated by the other four Pixels. Because each Pixel contains an LED and a speaker, separate light and sound actuators are unnecessary. An input switch will be connected to the remote module; stimulating it will cause the other Pixels to flash their lights on and off and play a sequence of high-pitched tones.

To start making the system, you swing one of the Pixels; this one will function as the remote "Switch" Pixel. Swinging the module engages it, directing it to become available for further gestural interaction.

Figure 24: The swing gesture.

Recall that gesturing with Pixel defines stimulus-response interactions between Pixels. Next, you tap the "Switch" Pixel to one of the other Pixels, represented as "Output A" in the figure below.

Figure 25: The tap gesture between two Pixels.

This defines a stimulus-response relationship between the "Switch" and "Output A" Pixels, in which stimulating the "Switch" activates "Output A," which responds by emitting flashes of light and a succession of tones. To make the "Switch" activate all of the other Pixels (not just "Output A"), repeat the swing-and-tap gesture sequence, swinging the "Switch" as before but tapping it to "Output B," "Output C," and "Output D" in turn, as shown below.

Figure 26: From top to bottom, this drawing shows the gestures needed to define the relationships between the "Switch" and "Output" Pixels.
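Repeating the swing-and-tap sequence plausibly accumulates a one-to-many version of the binding sketched earlier. The sketch below imagines that store on the "Switch" Pixel; the data layout and module IDs are illustrative assumptions.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical store on the "Switch" Pixel: each tap to another module
    // appends that module's ID to the set it activates.
    std::vector<uint8_t> boundOutputs;

    int main() {
      boundOutputs = {2, 3, 4, 5};  // Outputs A through D, one per tap

      // When the switch input closes, every bound output is activated.
      for (uint8_t id : boundOutputs) {
        printf("activate module %d\n", (int)id);
      }
    }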

At this point, stimulating the "Switch" activates all of the "Output" Pixels synchronously. To make the "Switch" easy to stimulate, snap a toggle switch onto it (with a snap connector), in the same way as in the prior scenarios. You do this as shown below.

Figure 27: Drawing of connecting the input switch to the "Switch" Pixel.

The above sequence of swing-and-tap gestures, together with connecting the toggle switch to the "Switch" Pixel, concludes the gestural interactions needed to make the scarecrow tree. However, because the Pixels are still performing their default behavior, stimulating the "Switch" activates the "Output" Pixels’ output ports, to which no components are connected. The "Outputs" should instead display a sequence of flashing light and play a series of tones with their embedded LEDs and speakers. This can be configured in the graphical programming environment.

Figure 28: Drawing of the default state of the graphical programming environment.

When you open the GPE, it appears as shown in Figure 28. It shows a representation of the Pixel’s behavior as a sequence of actions, represented as circles ordered clockwise around a circular "loop." A Pixel performs its action sequence repeatedly. Actions may be conditional, as represented by the loop segment before an action; in Figure 28, the dotted segment represents the "activation" condition, satisfied when the Pixel is activated remotely.
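The loop-of-actions model can be pictured as a list of actions, each guarded by the condition on the segment before it. The sketch below is an assumed data layout, not Pixel’s implementation.

    #include <cstdio>
    #include <functional>
    #include <vector>

    bool remotelyActivated = false;  // set when a bound "Switch" Pixel fires

    // One circle on the GPE's loop: a named action guarded by the condition
    // on the loop segment preceding it (empty condition = unconditional).
    struct Action {
      const char* name;
      std::function<bool()> condition;
      std::function<void()> perform;
    };

    int main() {
      std::vector<Action> loop = {
        {"light", [] { return remotelyActivated; }, [] { puts("flash LED"); }},
        {"sound", [] { return remotelyActivated; }, [] { puts("play tones"); }},
      };

      remotelyActivated = true;  // simulate activation by the "Switch"
      for (const Action& a : loop) {  // one pass; a Pixel repeats forever
        if (!a.condition || a.condition()) a.perform();
      }
    }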

To make the "Output" Pixels produce light and sound to scare birds away from the tree, the corresponding actions must be added to their loops, and those actions must be set to fire only when the "Switch" activates the corresponding "Output." This can be done through a series of single-finger touch gestures in the graphical environment.

Figure 29: Drawing of the graphical state of the GPE after adding the light behavior, setting its activation condition, adding the sound behavior, and setting its activation condition.

To start, you remove the existing action by touching and holding your finger on it, dragging it away from the loop, and then lifting your finger. Only actions on the loop are performed; actions off the loop can be added back to a loop or left off to fade away (removing them from memory).

Now you create the new action sequence on each Pixel. First, you touch and hold anywhere off the loop (for about one second) until the available actions are presented, then lift your finger. Second, you touch and drag the "light" action onto the loop (shown on the left). By default, actions are unconditional, so the Pixel will emit white light, the default color. Third, to make the light action conditional, you tap (touch and immediately lift) the loop segment before the action; this changes the condition that must be met to perform the action (shown second from left). Finally, you repeat these three steps to add the "sound" action to the loop (the result is shown third from the left).

These steps must be repeated on the remaining three "Outputs." Swiping left or right across the GPE switches between the loops of different Pixels. To complete the scarecrow tree, you swipe left three times, repeating the steps above each time. Once complete, the Pixels can be installed in the cherry tree. The complete scarecrow tree is depicted in Figure 30.

Figure 30: Drawing of the complete scarecrow tree as built with Pixel.

Publications

Gubbels, Michael. "Pixel: A Tool for Creative Design with Physical Materials and Computation." (2015).

Gubbels, Michael, and Jon E. Froehlich. "Physically Computing Physical Computing: Creative Tools for Building with Physical Materials and Computation." (2014).