Latest issue of the University of TN at Chattanooga student paper has the Library headline on the front page, and the Game Room headline on the second.
Nice priorities, UTC.
There was a bit of a firestorm online this past weekend when an open source position paper distributed by SirsiDynix, and authored by Stephen Abram, hit the web. The paper was originally believed to be a “leak”, and was even published on Wikileaks before Abram linked to it directly from his blog and wrote an entry outlining why the paper was put forward in the first place. From his blog:
Some have expressed surprise about the position paper. Some call it FUD – fear, uncertainty and doubt. I call it critical thinking and constructive debate – something that everyone in libraries should embrace and engage in.
I won’t try to fully outline my thoughts here in a single post. Suffice it to say that I think the paper significantly mischaracterizes open source software in general, and open source library systems specifically. I am currently aware of three different efforts to annotate and answer the recently released paper, one of which I started in Google Docs in hopes of pulling together refutations of the various points raised in the SirsiDynix piece. There is also an Etherpad collaborative refutation begun by Tim Spalding of LibraryThing, and the Code4Lib group’s version on their wiki.
I’m going to give just a few excerpts here, with brief responses. I respect Stephen a great deal, but even reading this paper as charitably as possible, there are blatantly misleading statements scattered throughout. So, a few thoughts:
Nevertheless, it should be noted that it is rare for completely open source projects to be successful.
This is only true in the same way that saying “it is rare for projects to be successful” would be true. Many things fail…it’s just that in the open source world, you get to see the failures, whereas in a closed/proprietary world you don’t.
It is very unlikely that an open source solution is any less expensive than a proprietary solution. In fact, in all of the data SirsiDynix has collected, we are not seeing quotes that conflict with this assertion. Indeed there are very few green fields in the ILS marketplace. Most libraries already have an ILS and receive upgrades as part of their maintenance contract from us or other proprietary vendors. These maintenance contracts are a small percentage of the initial price.
I do not have numbers at my fingertips, but I feel very, very certain that if you actually calculated TCO in any rational way, open source wins. Why? Because it’s a difference of where you are choosing to put your money…instead of paying for support, the typical library that moves to open source solutions has chosen instead to put its money into personnel, and while short-term the cost structures may look similar, paying for a position is far, far more flexible than paying on a maintenance contract. You can’t get that contract to do other things you might need done, while a technical support position can be repurposed.
Plus, while maintenance contracts are “a small percentage of the initial price”, that doesn’t mean that they are in any way a small amount of money. MPOW is a small academic library, and what we pay in yearly maintenance would go a long, long way towards another staff position.
In many markets, there are major systems in accounting, intranets, e-learning, and so on that must tie in to the ILS. In many cases, open source is still the minority solution because, for example, the number of Linux desktops is meager compared to Microsoft Windows desktops. By choosing a Linux desktop, a user closes the door on some software because it may never be created for or ported to Linux. Add to this the major changes in allied systems that require an adaptation for the ILS and the issue grows exponentially.
So for libraries that choose an open source system, the opportunity to integrate different systems into the solution is limited, at best.
This is just a mess of an argument. Why would anyone knowingly choose any software solution that wasn’t compatible with the remainder of their infrastructure? And the advantage of an OSS solution is that the data is yours, and can be massaged into whatever format you’d like…you don’t have to wait on the vendor to decide to add the connector that you are looking for. This is just _wrong_, and I’m not even sure how you structure an argument like:
Windows is more popular than Linux on the desktop.
Some software doesn’t run on Linux.
Therefore, Open Source ILS solutions are bad for libraries.
Proprietary software has more features. Period. Proprietary software is much more user-friendly.
Proprietary software often does have more features…as an example, Microsoft Word has _thousands_ of features, compared to…oh, OpenOffice. But OpenOffice has the 20 features that cover 99% of the use cases for word processing. Arguing that proprietary software has more features, when almost no one will ever use most of them, doesn’t strike me as a particularly strong argument.
And user-friendly? Again, that’s just a statement with no backing…I’ve used tons of proprietary software that had horrible usability. In my experience, it’s almost always the niche proprietary software designed for very specific solutions (like, oh…library systems) that has the worst usability of all.
I could spend many hours digging through this, but I’ll let the collaborative documents tell the rest of the tale. I completely agree with Stephen that all libraries should carefully examine their needs and resources when deciding on what solutions to move to. But this document paints with far too broad a brush, is misleading at best on many points, and simply fails any test of accuracy. I understand that this is a sales brochure, but I am disappointed at the tone taken…you can critically evaluate without hyperbolic statements like “jumping into open source would be dangerous, at best.” This is more Fox News than CNN, more USA Today than New York Times. I hadn’t expected more from SirsiDynix, but I had hoped for more from Stephen Abram…whether that is fair or not.
I’ve embedded the Google Doc that I started below, but you should definitely check out both the Etherpad and the Code4Lib wiki to see how a large number of librarians are responding. Not everyone put in their thoughts, but the list of people with access to edit is: Nicole Engard, Chris Cormack, Toby Greenwalt, Kathryn Greenhill, Karen Schneider, Melissa Houlroyd, Tara Robertson, Dweaver, Lori Ayre, Heather Braum, Laura Crossett, Josh Neff, and a few others who have usernames that I can’t decipher.
Aside from my truly epic travel woes, all of which are pretty well documented on my Twitter stream, Internet Librarian 2009 was a great, great conference. I spoke twice, once as a part of a phenomenal mobile panel, and gave a cybertour on the Realtime Web. But it was all of the people and things that I was tangentially a part of that made the trip so exciting. Having an essay up as a part of the Library 101 project was exciting, and being able to be a part of the launching of that project in person was a bunch of laughs.
In addition, I was bowled over by some of the thoughtful comments I received at IL2009. To have people that I respect and adore tell me that they think I’m doing good things, well, nothing could be better. I had multiple people tell me that they hadn’t seen me present before, but that they were impressed with what I did…seriously, I’m all choked up just typing this. Combine that with the massive outpouring of help that manifested when I began having travel troubles, and I don’t think that anyone, anywhere, has a better group of friends. From me, to everyone at Internet Librarian 2009: Thank You!
And finally, because I’m a sucker for visualization, here’s a word cloud of the tweets from IL2009. Thanks to someone (who did this?) there’s an archive of all the tweets tagged #il2009, available not only for display on the web but as delimited text files! I grabbed the tab-delimited version, ground it up with TextEdit and removed the hashtag, along with dates, etc, and fed it to Wordle to see what the result looked like. Here it is….a pretty great representation of what people were talking about at IL2009.
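For anyone wanting to reproduce the word cloud, here’s a minimal sketch of the cleanup step. The filename, column layout, and exact filtering are all assumptions on my part (adjust them to match the actual archive); the idea is just to pull the tweet text out of the tab-delimited file and strip the hashtag, usernames, and links before pasting the result into Wordle:

```python
import csv
import re

# Hypothetical filename and column positions; adjust to the real archive.
ARCHIVE = "il2009_tweets.txt"

def clean_tweets(path, text_column=2):
    """Read a tab-delimited tweet archive and return the tweet text
    with the conference hashtag, @mentions, and URLs stripped out."""
    pieces = []
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) <= text_column:
                continue  # skip malformed rows
            text = row[text_column]
            text = re.sub(r"#il2009", "", text, flags=re.IGNORECASE)
            text = re.sub(r"@\w+", "", text)          # drop usernames
            text = re.sub(r"https?://\S+", "", text)  # drop links
            pieces.append(text.strip())
    return " ".join(pieces)

# The returned string can be pasted straight into Wordle.
```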
Watch the video, but even more importantly, read the essays (full disclosure: I wrote one) and their list of skills for the future of libraries. Thought provoking stuff, from a couple of guys that I’m proud to call friends.
Here’s a 5 minute or so snippet from my recent presentation to the San Diego Law Library Association on the Realtime Web. They chose a really interesting few minutes to post…
Incredible article in Wired this month on the Good Enough Revolution, which explores and explains a set of emergent economic principles that I think are equally applicable to information seeking. There’s a degree to which we really need to start looking hard at economic models in library and information science…I think they can really inform the creation and distribution of the services that we offer. Check out this quote, for example…
…it happens to be a recurring theme in Good Enough products. You can think of it this way: 20 percent of the effort, features, or investment often delivers 80 percent of the value to consumers. That means you can drastically simplify a product or service in order to make it more accessible and still keep 80 percent of what users want, making it Good Enough…
At the OITP panel I was a part of at ALA, I think that Eli and I shocked a few people in the audience when we asserted that quality of information doesn’t matter. That isn’t to say it NEVER matters…I want my doctor and my lawyer to have the best information possible. But for the vast majority of information need, good enough is good enough.
Think about the services in your library, and the amount of effort and resources poured into making your services as good as they can possibly be. What if good enough is really enough, and instead we should be expanding our range of services instead of seeking perfection in any single one? How does that change the way libraries operate?
Yesterday, the Louisville Free Public Library in Louisville, KY was hit with a terrible storm, and was flooded. The initial damage estimates are around $1 million, but given the pictures that were shared yesterday, I’m guessing that’s a lowball estimate. The pics are horrendous.
Steve Lawson has set up a PayPal account specifically for donations going to the Library, in the name of the Library Society of the World. I’d like to ask everyone to head over to his post, and donate something…$5, $10, whatever you can afford. The people of Louisville will appreciate it.
So yesterday brought the news that Amazon acquired Zappos. For those not familiar, Zappos is a company that sells shoes (primarily, although they now sell other things) and is known for its nearly insane customer service. Seriously, they will do just about anything they can to make sure you’re happy, and are responsible for crazy customer service stories. This story about Zappos sending a woman flowers is maybe my favorite customer service story of all time. My other favorite thing about them is their “Pay new employee to quit” program.
As a result of the acquisition, Amazon CEO Jeff Bezos released this YouTube video:
Take a minute, and watch the video…it’s worth it.
So Jeff explains a bit about the deal, and why Zappos and Amazon are a good match. According to Jeff, there are only a handful of things he knows for certain about running a business, and he walks through them in the video.
Why do I mention this on what is ostensibly a blog about library science stuff? Because I feel strongly that our future isn’t in content, really…it’s in services. No one does service better than Zappos. If we take Zappos customer service strategy (do anything to make the customer happy) and the four things that Jeff Bezos knows about running a company, how could we change libraries for the better? What can we do to be the Zappos of information?
I’ve been doing a lot of thinking lately about something I’m calling “proactive reference.” As I’m thinking about it, proactive reference is the monitoring of the real-time web (Twitter, FriendFeed, Seesmic, etc.) by librarians who answer questions relating to their area of specialty, whether subject-based or geographic. Public librarians who answer questions by searching for mentions of their city, county, or library, and academic librarians who monitor for mentions of their university, are two examples, but there are many more possibilities.
I’m doing a bit of it now, just to see how effective it is at marketing the library’s services and such. Is anyone else out there actively monitoring these communication channels right now? My instinct is that this is going to be a HUGE market in a very short time, and that libraries should dive in fast and get used to it.
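As a sketch of what the monitoring half of this could look like in practice, here’s a minimal keyword matcher. Everything here is hypothetical (the watch terms, the post format, and the idea of a polling loop are mine, not a description of any real tool); a real setup would poll something like the Twitter search API or an RSS feed of search results, and the actual reply should of course stay human:

```python
# Hypothetical sketch of "proactive reference" keyword monitoring.
# Each post is assumed to be a dict with at least a "text" field,
# as it might come from polling a search API or RSS feed.

WATCH_TERMS = ["our town library", "ourtown public library", "otpl"]

def find_mentions(posts, terms=WATCH_TERMS):
    """Return the posts whose text mentions any watched term,
    so a librarian can step in with an answer or an offer of help."""
    hits = []
    for post in posts:
        text = post["text"].lower()
        if any(term in text for term in terms):
            hits.append(post)
    return hits

# Usage: hand find_mentions() each batch of posts you poll, then
# triage the hits by hand -- the response itself is the human part.
```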