
It’s been too long since a public update made its way out for Measure the Future. In light of the rapidly approaching ALA Annual 2016, here’s the current state of the Project.

The Good News

Enormous amounts of code have been written, and we have the most complicated parts nailed down. When we started on this Project, the idea of using microcomputers like the Edison to do the sort of computer vision (CV) analysis we were asking for was still pretty ambitious. We’ve worked steadily on getting the CV code running on the Edison, and it’s a testament to our Scout developer, Clinton, that it’s as capable as it is.
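For the curious, here’s a rough idea of what that kind of CV work looks like. This is not the Scout’s actual code (that’s Clinton’s department); it’s just a minimal sketch, using OpenCV’s Python bindings, of the general technique: learn the static background of a camera frame, subtract it, and look for blobs of movement large enough to plausibly be a person.

    # A minimal, illustrative sketch of camera-based activity detection with
    # OpenCV's Python bindings. This is NOT the Scout's real code; it only
    # shows the general technique: model the static background, subtract it,
    # and count the moving blobs big enough to be a visitor.
    import cv2

    camera = cv2.VideoCapture(0)                       # first attached camera
    subtractor = cv2.createBackgroundSubtractorMOG2()  # models the static scene

    while True:                                        # Ctrl-C to stop
        grabbed, frame = camera.read()
        if not grabbed:
            break

        mask = subtractor.apply(frame)                 # white pixels = movement
        mask = cv2.medianBlur(mask, 5)                 # knock down sensor noise

        found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        contours = found[0] if len(found) == 2 else found[1]  # OpenCV 3 vs. 4

        # Keep only blobs large enough to plausibly be a visitor, not a flicker.
        visitors = [c for c in contours if cv2.contourArea(c) > 2000]
        print("moving blobs in this frame:", len(visitors))

    camera.release()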

The other major piece of the Project, the Mothership, was also challenging, although in a different way. Working through the design of the user interfaces, and how we can make the process as easy as possible for users, turned out to be more difficult than we’d imagined (isn’t it always?). Things like setting up new Scouts, how libraries will interact with the graphs and visualizations we’re making, and most importantly how we ensure that our security model is rock solid are genuinely difficult problems. Our Mothership developer, Andromeda, has been amazing at seeing these issues before they bite us and maneuvering to fix them before they become problems.

But if I’m calling out the good news, you know there’s some bad news hiding behind it.

The Bad News

We’re delayed.

We have working pieces of the project, but we don’t have the whole just yet.

Despite enormous amounts of work, Measure the Future is behind schedule. The goal of the project, from the very beginning back in 2014, was to launch at ALA Annual 2016. While we are tantalizingly close, we are definitely not ready for public launch, and with ALA Annual 2016 now just a couple of weeks away, it seemed like the right time to admit to ourselves and the public that we’re going to miss that launch window.

As noted in the “Good News” section, the individual pieces are in place. However, before these are useful for libraries, the connection between them has to be bulletproof. We are working to make the setup and connection as automatic and robust as we possibly can, and as it turns out, networking is hard. Really hard. No, even harder than that.

We’re struggling with making this integration between the two sides automatic and bulletproof. There’s no doubt that in order for the Project to be useful to libraries, it has to be as easy as possible to implement, and we’re not there yet. We still have work to do.
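To give a concrete (and entirely hypothetical) sense of what “robust” means here: a Scout can’t just fire a measurement at the Mothership and hope. It has to cope with a library network that drops out, times out, or comes back five minutes later. The sketch below is not Measure the Future’s actual API (the endpoint and payload fields are invented for illustration), but it shows the general retry-with-backoff pattern that kind of robustness requires.

    # An illustrative sketch of the delivery problem: a Scout pushing one
    # measurement to the Mothership and retrying with exponential backoff
    # when the network misbehaves. The URL and payload fields are invented
    # for this example; they are not Measure the Future's actual API.
    import json
    import time
    import urllib.error
    import urllib.request

    MOTHERSHIP_URL = "http://mothership.local/api/measurements"  # hypothetical

    def send_measurement(payload, attempts=5):
        """Try to deliver one measurement, waiting longer after each failure."""
        data = json.dumps(payload).encode("utf-8")
        delay = 1
        for _ in range(attempts):
            request = urllib.request.Request(
                MOTHERSHIP_URL,
                data=data,
                headers={"Content-Type": "application/json"},
            )
            try:
                with urllib.request.urlopen(request, timeout=10) as response:
                    return 200 <= response.status < 300
            except (urllib.error.URLError, OSError):
                time.sleep(delay)           # the network flaked; wait it out
                delay = min(delay * 2, 60)  # back off, capped at a minute
        return False                        # give up; a real Scout would queue it

    if __name__ == "__main__":
        ok = send_measurement({"scout": "demo-scout", "count": 3, "ts": time.time()})
        print("delivered" if ok else "could not reach the mothership")

The hard part isn’t the happy path; it’s making this kind of recovery invisible to library staff who just want the Scout to work once it’s plugged in.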

I’m very unhappy that we’re missing our hoped-for ALA Annual 2016 launch date. There’s a lot of benefit to launching at Annual, and I’m very disappointed that we’re not going to hit that goal. But I’d rather miss the date than release a product to the library world that isn’t usable by everyone.

Conclusion

Measure the Future is still launching, and doing so in 2016. But we’re going to take a little longer, test a little more thoroughly, get the hardware into the hands of our Alpha testers, and ensure that when we do release, it’s more than ready for libraries to use. We’ve also been able to make a deal with a Beta partner that will really test what we’re able to do, and we’re really excited about the possibilities on that front. This extra time also gives us an opportunity for additional partnerships and planning for getting the tool out to libraries. More news and announcements on that front in a month or so.

We’re going to make sure we give the libraries, museums, and other non-profits of the world a tool that reveals the invisible activity taking place in their spaces. And we’re going to do it this year. Jason will be attending ALA Annual in Orlando, and he’ll be talking about Measure the Future every chance he gets. We don’t have a booth (a booth without the product in hand seemed presumptuous), but it’s easy to get in touch with Jason if you have questions about the project. Feel free to throw them his way.

We’re rounding the bend on the last bits of development on Measure the Future before we do Alpha installs. Lots of details to get to, and tons of work yet to do…but the goal is in sight and the things we’re solving for are mostly known. We’re still aiming hard at launching at the ALA Annual Conference in June, and barring unforeseen problems we’re gonna hit that date.

One of the things we’re currently working on is installing the Scout and Mothership software onto their respective hardware. Andromeda Yelton wrote a post about her side of that world (the Mothership) and how important it is to Measure the Future to make the installs as easy as we possibly can. From her post:

Well! I now have an alpha version of the Measure the Future mothership. And I want it to be available to the community, and installable by people who aren’t too scared of a command line, but aren’t necessarily developers. So I’m going to measure the present, too: how long does it take me to install a mothership from scratch — in my case, on its deployment hardware, an Intel Edison?

tl;dr 87 minutes.

Is this good enough? No.

For those of you who haven’t seen it, here’s the video (no audio, unfortunately) that we played at our booth at ALA Midwinter a couple of months ago, with demo footage of the computer vision that’s going on under the hood.

We’re very excited about getting this project into your hands, and we’re working hard to make it as easy as we can. Keep watching for more info!

Thus far in the development process for Measure the Future I’ve been working to ensure that I understood the need and was heading down a path that librarians found valuable. I did this via a survey given to both of my Alpha partners and linked in Library Journal, and after sorting through the dozens of responses I had a much clearer picture of what the community saw as valuable, and what could wait for version 2 of the project.

The next step was proof of concept for the hardware, which I put together and demoed at the ALA Annual conference in San Francisco in July. That demo went very well, with more great feedback and renewed interest from libraryland pushing us forward, and people seemed to like the progress we were making.

Now is the time in the project that I’ve been working towards since ALA Annual. I knew that the project needed a particular skill set to move forward, and that I probably needed more than one person to make it happen. I needed someone with a solid understanding of computer vision, someone who’s worked with the sorts of problems inherent in using cameras as a data source. And I needed someone with a great head for turning numbers into meaning, who could look at the data from the sensors and make those numbers work for the libraries in question. I know how everything fits together, but I learned a long time ago that it’s faster to find experts than it is to try to learn everything I might need to accomplish a project. So I did just that and found two experts that I’m beyond thrilled to work with.

The first is Clinton Freeman, a software developer from Cairns, Australia. Clinton was introduced to me by the amazing Daniel Flood of The Edge in Queensland, Australia. If you aren’t aware of the things they are doing at The Edge, they (like much of Australian librarianship) are way out in front of the future of the profession, making it happen. Clinton worked with them on a couple of projects and has a history of using computer vision to do really interesting things…the more Daniel told me about him, the more I knew he was the right guy for the job. I contacted him, laid out the problems we’re trying to solve and how I see them coming together, and in short order he was sold enough to join the team. Clinton understands the library world, and he’ll be working on testing and improving my initial hardware work, the computer vision analysis, and API development for the project.

The second new member of the team is someone who, at least to the U.S. cadre of online librarians, needs almost no introduction. I needed someone who could take the raw data coming off the sensors and write elegant algorithms to identify patterns and derive areas of interest from the flood of numbers…and do so with a keen eye on what librarians need to know (and, more importantly, how not to overwhelm them with information). An amazing developer who groks libraries, is a math wizard, and can do front-end web dev with the best of them? That’s a description custom-made for Andromeda Yelton if ever I saw one, and I’m beyond thrilled that she’s agreed to work with Clinton and me on the development of the project.
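As a toy illustration of that kind of number-crunching (to be clear, this is not the project’s actual algorithm, just a hypothetical example), turning a flood of raw positions into areas of interest can start as simply as binning coordinates into a grid and surfacing the busiest cells.

    # An illustration (not the project's actual algorithm) of turning raw
    # position data into something actionable: bin observed (x, y) points
    # into a coarse grid and report the busiest cells as "areas of interest".
    from collections import Counter

    # Hypothetical sample data: (x, y) positions seen in a 640x480 camera frame.
    observations = [(120, 300), (130, 310), (125, 305), (500, 80), (510, 90)]

    CELL = 80  # pixels per grid cell; coarser cells mean broader "areas"

    heat = Counter((x // CELL, y // CELL) for x, y in observations)

    for cell, hits in heat.most_common(3):
        print("grid cell {}: {} observations".format(cell, hits))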

So this takes my team up to six total: myself; my Alpha partner libraries, represented by Gretchen Caserotti and Jenica Rogers; my amazing hardware advisor from SparkFun Electronics, Jeff Branson; and my development team of Clinton and Andromeda. The development team is in the process of setting development milestones now, in prep for a sprint from now until ALA Midwinter in Boston, where I hope to have the next round of demo hardware with some UI and UX to show off. Our timeline has always targeted ALA Annual 2016 in Orlando as the point where the project is ready for libraries to use, and I feel more confident than ever that we are going to do just that.

Expect some more reports as we set specific goals for the next few months, as there’s nothing like public expectation to make one hit deadlines. Here we go!