Wednesday, November 17, 2010

CART211: Response

The article on physical computing really highlighted how much trouble I have separating high-level from low-level thinking. I find it very difficult to divorce concept from production. The author presents a good case for keeping the technology and the concept separate from the start.

A fascinating aspect that was only touched upon was the computer's ability to reduce the barriers of time and space. The idea of multi-dimensional time-travel illusions is difficult to resist.

Expanding the view of what inputs a computer might want to read off a human is a bit difficult. The reason keyboards and mice work is that people have adapted to them to the point of losing, or at least reducing, their sense of body while at a computer. Typing is generally unselfconscious.

Taking it from another angle, if the computer had to take readings off a human, it would want as much information as it could get in order to make a context-dependent decision. Take the example of the automatic blinds that could sense temperature and light: maybe they would also want to know whether there was someone in the room, what they were doing, maybe even whether that person's brainwaves indicated an aversion to the heat or light.
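To make that kind of context-dependent decision a bit more concrete, here is a small Python sketch. It is purely hypothetical: the sensor names, thresholds, and the decide_blinds function are all mine, not the article's.

def decide_blinds(lux, temp_c, occupied, at_screen):
    """Return 'open', 'close', or 'no change' from a few sensor readings."""
    if not occupied:
        # Nobody in the room: just keep the heat down.
        return "close" if temp_c > 26 else "no change"
    if at_screen and lux > 800:
        # Someone working at a screen under glare-level light.
        return "close"
    if lux < 200 and temp_c < 24:
        # Dim and cool: let the light (and warmth) in.
        return "open"
    return "no change"

print(decide_blinds(lux=900, temp_c=25, occupied=True, at_screen=True))  # -> close

The point of the sketch is just that the more context the blinds can read, the less often they act against what the person in the room actually wants.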

The input/output divide is also interesting, but new input methods seem to be a much less open topic than new output methods. With both, I have a lot of difficulty getting away from keyboard-and-mouse input and screen-or-projector output. I suppose that comes with time.

Seems like a good book. I'm not sure if I'll finish the parts on circuits, though.

-----------------------------------------------


The article on Hiroshi Ishii and his students' projects was very interesting. The elements that stood out for me were the idea of engaging more than the eyes, and the TUI's requirement for very mutable objects.

Movies definitely capitalize on how easy it is to get lost in sight and sound and ignore the physical, but which came first: our ability to be disembodied for entertainment, or media that focused excessively on only one or two senses and strove for total immersion?

The idea of having sound, touch and action play a larger role in our interactions with technology brings up some interesting questions: will the computer receive the information straight from the user and need to interpret it, or will this tangibility come as part of a dedicated tangible/audible controller? An object you mould in your hands to control a visual space could be very interesting, but how would it deal with having a lot of functionality? Click/tap/gesture/press where, and why? If it is just learned behaviour, the object becomes a controller whose methods are physical rather than typing and mouse movement. If the object interprets physical actions as they come, without this option for controlling the less intuitive parts, the experience is very different.
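To spell out the two models I have in mind, here is a rough Python sketch. It is hypothetical: the gesture names, the mapping, and both functions are mine, not the article's.

# Model 1: the object as a learned controller. Discrete gestures are
# recognized first, then looked up in a fixed mapping, much like shortcuts.
GESTURE_COMMANDS = {
    "squeeze": "zoom_out",
    "stretch": "zoom_in",
    "twist":   "rotate_view",
    "tap":     "select",
}

def learned_controller(gesture_name):
    # Anything outside the learned vocabulary is simply ignored.
    return GESTURE_COMMANDS.get(gesture_name, "ignore")

# Model 2: the computer interprets the physical action directly. The raw
# deformation of the object continuously drives the visual space, with no
# named commands in between.
def direct_interpretation(deformation):
    # deformation: a fraction between -1 (fully squeezed) and 1 (fully stretched)
    return {"zoom": 1.0 + deformation}

print(learned_controller("twist"))        # -> rotate_view
print(direct_interpretation(-0.25))       # -> {'zoom': 0.75}

The first model is easy to extend with less intuitive functions but is really just a new keyboard; the second feels more tangible but has nowhere obvious to put all that extra functionality.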

A Wacom tablet, for example, is fairly tangible. Its strength is gesture; its weakness is control. It has trouble being adaptable enough to become the sole way of interacting with the computer's data. A clay-ball controller would probably have similar troubles: great for manipulating, but bad at triggering and controlling a wide array of programs.

Without the object (as with a projection illusion), the options would be more malleable, but the tangible nature is a lot less stressed. Just like with the Wiimote, it won't take long for someone to boil such gestural, active interaction down to the least possible movement.

Having worked with databases over the summer, I found the idea of a stackable rack of objects representing data very interesting. Saving, storing, and recreating these combinations is the only part that seems problematic to me. If the objects were smaller and easy to store in their combinations, it could be a very interesting tool for dealing with complex data analysis.
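One way around the saving problem might be to record the arrangement rather than the objects themselves. A minimal sketch, assuming each token has an ID and stands for a query fragment; every name and field here is invented for illustration.

import json

# The order of the list stands in for the physical stacking order.
stack = [
    {"token_id": "A3", "represents": "sales_2010"},
    {"token_id": "B7", "represents": "filter:region=EU"},
    {"token_id": "C1", "represents": "group_by:month"},
]

def save_arrangement(stack, path):
    # Persist the arrangement as plain data so it can be rebuilt later.
    with open(path, "w") as f:
        json.dump(stack, f)

def load_arrangement(path):
    with open(path) as f:
        return json.load(f)

save_arrangement(stack, "arrangement.json")
print(load_arrangement("arrangement.json"))

Then the physical objects would only need to be re-stacked (or a display would only need to re-render them) from the saved list, instead of the combination itself having to be kept intact on a shelf.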
