I’ve been talking for a while now in my presentations and writing about how interfaces are changing, and how the way we design for information retrieval is going to evolve over the next few years as they do. Here’s a prime example of the sort of thing that will be trivial to do in a very short amount of time… gestures as UI for computing.
You know that Internet-of-Things I’ve been talking about? Here’s an example, from two of the hottest products to come from Kickstarter:
Remember all those talks I gave over the last few months about a data explosion because sensors were getting so cheap that they will soon be ubiquitous and allow us to measure everything and anything?
Yeah. So that’s happening.
I know how dangerous that laser is, and I still want one.
By “wearable computing” I mean mobile computing where both computer-generated graphics and the real world are seamlessly overlaid in your view; there is no separate display that you hold in your hands. Think Terminator vision. The underlying trend as we’ve gone from desktops through laptops and notebooks to tablets is one of having computing available in more places, more of the time. The logical endpoint is computing everywhere, all the time – that is, wearable computing – and I have no doubt that 20 years from now that will be standard, probably through glasses or contacts, but for all I know through some kind of more direct neural connection. And I’m pretty confident that platform shift will happen a lot sooner than 20 years – almost certainly within 10, but quite likely in as little as 3-5, because the key areas – input, processing/power/size, and output – that need to evolve to enable wearable computing are shaping up nicely, although there’s a lot still to be figured out.
Imagine that you have a big box of sand in which you bury a tiny model of a footstool. A few seconds later, you reach into the box and pull out a full-size footstool: The sand has assembled itself into a large-scale replica of the model.
Will your next home be a printed home? Along with this new technology will come a number of labor-reducing and cost-saving features. The number of people needed to build a home will drop by a factor of ten, maybe more. Over time, we may see old houses torn down with PacMan-like recycling machines, where the material is ground up, reformulated, and an entirely new house is printed in its place – all in less than one day.
We knew Google was working on this, but here’s the first real look at Project Glass, their wearable display/augmented reality hardware.
The Internet is going wild for Tacocopter, perhaps the next great startup out of Silicon Valley, which boasts a business plan that combines four of the most prominent touchstones of modern America: tacos, helicopters, robots and laziness. Indeed, the concept behind Tacocopter is very simple, and very American: You order tacos on your smartphone and also beam in your GPS location information. Your order — and your location — are transmitted to an unmanned drone helicopter grounded near the kitchen where the tacos are made, and the tacocopter is then sent out with your food to find you and deliver your tacos to wherever you're standing.
With the development of GPS-controlled drones, long-range cheap radio equipment and tiny new computers like the Raspberry Pi, we’re going to experiment with sending out some small drones that will float some kilometers up in the air. This way our machines will have to be shot down with aeroplanes in order to shut down the system. A real act of war.
We’re just starting so we haven’t figured everything out yet. But we can’t limit ourselves to hosting things just on land anymore. These Low Orbit Server Stations (LOSS) are just the first attempt. With modern radio transmitters we can get over 100 Mbps per node up to 50 km away. For the proxy system we’re building, that’s more than enough.