Here’s one take on what omnipresent visual overlay with network connectivity might enable, although with a slight dystopian bent.
“Morphologically, we’ve built a jellyfish. Functionally, we’ve built a jellyfish. Genetically, this thing is a rat,” says Kit Parker, a biophysicist at Harvard University in Cambridge, Massachusetts, who led the work.
I’ve said it before, but the rise of the cheap sensor, combined with ubiquitous connectivity, is going to do more to change the way we interact with our world than you can imagine.
The coolest thing at Google I/O this year isn’t a cheap tablet or a pair of overpriced glasses or even a killer keyboard. It is, believe it or not, an alarm clock. But not just any alarm clock — this is an alarm clock with potential. What you see above, and demonstrated in the video after the break, is the gadget that was handed out to attendees who went to learn about the Android Accessory Development Kit.
Google’s Project Glass product lead Steve Lee walks us through his experience with the development of the company’s sci-fi-inspired eyewear, from his team’s “hundreds of variations and dozens of early prototypes” to his vision of the future.
I was totally into this stuff in the ’90s, reading Wired and Mondo 2000 whenever I could find them, and devouring Gibson and Sterling like candy. I’m a product of the cyberpunk age.
I’ve been talking for a while now in my presentations and writing about how interfaces are changing, and how the way we design for information retrieval is going to evolve over the next few years along with them. Here’s a prime example of the sort of thing that will be trivial to do in a very short amount of time: gestures as UI for computing.
You know that Internet-of-Things I’ve been talking about? Here’s an example, from two of the hottest products to come from Kickstarter:
Remember all those talks I gave over the last few months about a data explosion, driven by sensors getting so cheap that they will soon be ubiquitous and allow us to measure anything and everything?
Yeah. So that’s happening.
I know how dangerous that laser is, and I still want one.
By “wearable computing” I mean mobile computing where both computer-generated graphics and the real world are seamlessly overlaid in your view; there is no separate display that you hold in your hands. Think Terminator vision. The underlying trend as we’ve gone from desktops through laptops and notebooks to tablets is one of having computing available in more places, more of the time. The logical endpoint is computing everywhere, all the time – that is, wearable computing – and I have no doubt that 20 years from now that will be standard, probably through glasses or contacts, but for all I know through some kind of more direct neural connection. And I’m pretty confident that platform shift will happen a lot sooner than 20 years – almost certainly within 10, but quite likely in as little as 3-5 – because the key areas that need to evolve to enable wearable computing – input, processing/power/size, and output – are shaping up nicely, although there’s a lot still to be figured out.