Presented without comment.
More videos collected from YouTube…these all from a single user, in the last couple of hours.
Looks like another flash mob/flash rave happened at MPOW, UTC Library, last night. Judging from the footage already up on YouTube, this time someone convinced the powers that be to let the students into the library instead of pepper spraying them outside of it.
Here’s the early footage:
EDIT: More videos coming online now
There was a bit of a firestorm online this past weekend, when an Open Source position paper distributed by SirsiDynix, and authored by Stephen Abram, hit the web. The paper was originally believed to be a “leak”, and was even published on Wikileaks before Abram linked to it directly from his blog and wrote an entry outlining why the paper was put forward in the first place. From his blog:
Some have expressed surprise about the position paper. Some call it FUD – fear, uncertainty and doubt. I call it critical thinking and constructive debate – something that everyone in libraries should embrace and engage in.
I do not hope to fully outline my thoughts about this in a single post. Suffice it to say that I think the paper significantly mischaracterizes Open Source software in general, and Open Source library systems specifically. I am currently aware of three different efforts to annotate and answer the recently released paper, one of which I started in Google Docs in hopes of gathering refutations of the various points brought up in the SirsiDynix piece. There is also an Etherpad collaborative refutation begun by Tim Spalding of LibraryThing, and the Code4Lib group’s version on their wiki.
I’m going to give just a few excerpts here, and brief responses. I respect Stephen a great deal, but even reading this paper in the most charitable way, there are blatantly misleading statements scattered throughout. So, a few thoughts:
Nevertheless, it should be noted that it is rare for completely open source projects to be successful.
This is only true in the same way that saying “it is rare for projects to be successful” would be true. Many things fail…it’s just that in the open source world, you get to see the failures, whereas in a closed/proprietary world you don’t.
It is very unlikely that an open source solution is any less expensive than a proprietary solution. In fact, in all of the data SirsiDynix has collected, we are not seeing quotes that conflict with this assertion. Indeed there are very few green fields in the ILS marketplace. Most libraries already have an ILS and receive upgrades as part of their maintenance contract from us or other proprietary vendors. These maintenance contracts are a small percentage of the initial price.
I do not have numbers at my fingertips, but I feel very, very certain that if you actually calculated TCO (total cost of ownership) in any rational way, open source wins. Why? Because it’s a difference of where you are choosing to put your money…instead of paying for support, the typical library that moves to open source solutions has chosen instead to put its money into personnel, and while short-term the cost structures may look similar, paying for a position is far, far more flexible than paying on a maintenance contract. You can’t get that contract to do other things you might need done, while a technical support position can be repurposed.
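The shape of that cost argument can be shown with a toy calculation. Every number below is made up for illustration; the point is that the raw totals come out similar, and the difference is what else the money can do.

```python
# Toy TCO comparison with made-up numbers, just to show the shape of
# the argument: both paths cost real money over the same period, but
# a staff position can be repurposed while a vendor contract cannot.
years = 5
maintenance_per_year = 40_000      # hypothetical proprietary maintenance contract
staff_position_per_year = 45_000   # hypothetical support salary + benefits

proprietary_tco = maintenance_per_year * years
open_source_tco = staff_position_per_year * years

# Similar totals; the open source line item also buys a person who can
# do other technical work the contract never will.
difference = open_source_tco - proprietary_tco
```

Again, these figures are invented; any real comparison would have to plug in a library’s actual contract and salary numbers.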
Plus, while maintenance contracts are “a small percentage of the initial price”, that doesn’t mean that they are in any way a small amount of money. MPOW is a small academic library, and what we pay in yearly maintenance would go a long, long way towards another staff position.
In many markets, there are major systems in accounting, intranets, e-learning, and so on that must tie in to the ILS. In many cases, open source is still the minority solution because, for example, the number of Linux desktops is meager compared to Microsoft Windows desktops. By choosing a Linux desktop, a user closes the door on some software because it may never be created for or ported to Linux. Add to this the major changes in allied systems that require an adaptation for the ILS and the issue grows exponentially.
So for libraries that choose an open source system, the opportunity to integrate different systems into the solution is limited, at best.
This is just a mess of an argument. Why would anyone knowingly choose any software solution that wasn’t compatible with the remainder of their infrastructure? And the advantage of an OSS solution is that the data is yours, and can be massaged into whatever format you’d like…you don’t have to wait on the vendor to decide to add the connector that you are looking for. This is just _wrong_, and I’m not even sure how you structure an argument like:
Windows is more popular than Linux on the desktop.
Some software doesn’t run on Linux.
Therefore, Open Source ILS solutions are bad for libraries.
Proprietary software has more features. Period. Proprietary software is much more user-friendly.
Proprietary software often does have more features…as an example, Microsoft Word has _thousands_ of features, compared to…oh, Open Office. But Open Office has the 20 features that cover 99% of the use-cases for word processing. To argue that proprietary software has more features that no one will ever use doesn’t strike me as a particularly good argument.
And user-friendly? Again, that’s just a statement with no backing…I’ve used tons of proprietary software that had horrible usability. In my experience, it’s almost always the niche proprietary software designed for very specific solutions (like, oh…library systems) that has the worst usability of all.
I could spend many hours digging through this, but I’ll let the collaborative documents tell the rest of the tale. I completely agree with Stephen that all libraries should carefully examine their needs and resources when deciding what solutions to move to. But this document paints with far too broad a brush, is misleading at best on many points, and simply fails any test of accuracy. I understand that this is a sales brochure, but I am disappointed at the tone taken…you can critically evaluate without hyperbolic statements like “jumping into open source would be dangerous, at best.” This is more Fox News than CNN, more USA Today than New York Times. I hadn’t hoped for more from SirsiDynix, but I had hoped for more from Stephen Abram…whether that is fair or not.
I’ve embedded the Google Doc that I started below, but you should definitely check out both the Etherpad and the Code4Lib wiki to see how a large number of librarians are responding. Not everyone put in their thoughts, but the list of people with access to edit is: Nicole Engard, Chris Cormack, Toby Greenwalt, Kathryn Greenhill, Karen Schneider, Melissa Houlroyd, Tara Robertson, Dweaver, Lori Ayre, Heather Braum, Laura Crossett, Josh Neff, and a few others who have usernames that I can’t decipher.
I’ve been doing a lot of thinking lately about something I’m calling “proactive reference.” The way I’m thinking about it, proactive reference is the monitoring of the real-time web (Twitter, Friendfeed, Seesmic, etc) by librarians who answer questions relating to their area or specialty, whether subject or geographically based. Public librarians who answer questions by searching for mentions of their city, county, or library, and academic librarians who monitor for mentions of their university, are two examples, but there are many more possibilities.
I’m doing a bit of it now, just to see how effective it is at marketing the library’s services and such. Is anyone else out there actively monitoring these communication channels right now? My instinct is that this is going to be a HUGE market in a very short time, and that libraries should dive in fast and get used to it.
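To make the monitoring idea concrete, here’s a minimal sketch of what the listening half could look like. It assumes the service exposes search results as an RSS feed (Twitter and Friendfeed both offer RSS for saved searches); the feed content and keywords below are invented for illustration.

```python
# Minimal sketch of "proactive reference" keyword monitoring: scan an
# RSS feed of search results for mentions of the library's keywords.
import xml.etree.ElementTree as ET

def find_mentions(feed_xml, keywords):
    """Return (title, link) pairs for feed items mentioning any keyword."""
    root = ET.fromstring(feed_xml)
    matches = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        desc = item.findtext("description", default="")
        text = (title + " " + desc).lower()
        if any(kw.lower() in text for kw in keywords):
            matches.append((title, link))
    return matches

# Made-up feed standing in for a real saved-search RSS result.
sample = """<rss><channel>
<item><title>Anyone know the UTC Library hours?</title>
<link>http://example.com/1</link>
<description>Trying to find a study room tonight.</description></item>
<item><title>Great coffee downtown</title>
<link>http://example.com/2</link>
<description>No library talk here.</description></item>
</channel></rss>"""

hits = find_mentions(sample, ["UTC Library", "study room"])
```

In practice you’d poll the real feed on a schedule and hand anything new to a librarian for follow-up; the sketch only covers the filtering step.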
Sometimes, it’s just nice to laugh at industries that are desperately attempting to hang on to their relevancy in a changing world. Exhibit A for today is the Copyright Clearance Center, and their interesting attempt to educate users about copyright via their Copyright Basics video. Let’s examine the ways in which CCC fails at modern web usage.
First: here’s the opening screen of the video
I think that’s enough said, yes? Among the nearly-unreadable text is the prohibition to “distribute copies of the Program to persons outside your company, or post copies of the Program on any public website (including any video sharing or social networking site).” Yep, that’s the CCC…all about education. Wouldn’t want those non-paying people to easily get your content that explains why they should pay for your content.
Second: To get a copy of the video to use internally, on a non-public server that is limited to only your employees, you have to fill out a form on this page. Or, you know, just look at the page source:
Where the FLV file is handily linked for anyone who might want to use it.
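That’s the whole trick: anything linked in the page source is one regular expression away. The HTML below is a made-up stand-in for the CCC page, not its actual markup.

```python
# Sketch: pulling a media file URL out of raw page source, the way
# anyone can with "view source." The snippet is hypothetical HTML,
# not the real CCC page markup.
import re

def find_media_urls(html, extensions=("flv", "mp4")):
    """Return every URL in the source ending in one of the extensions."""
    pattern = r'https?://[^\s"\'<>]+\.(?:%s)\b' % "|".join(extensions)
    return re.findall(pattern, html)

source = '''<object>
  <param name="movie" value="http://example.org/videos/copyright_basics.flv" />
</object>'''

urls = find_media_urls(source)
```

Which is the point: publishing a video to the open web and forbidding its redistribution are simply incompatible.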
If ever there was a direct example of how the modern web breaks copyright, the CCC just gave it to us. The answer, of course, isn’t to ignore the de facto standards for the distribution of video on the web, to limit the ability to share and distribute content, and to generally treat people who want to use your content like criminals. The way to make yourself valuable and heard is to share what you make as widely as you possibly can…something that the CCC can’t bring itself to do. It’s really hard to participate in the modern conversation when your very business model is tied to archaic and irrelevant legalese.
“I’ve said it before and I’ll say it again: I don’t think music should be free,” Reznor says. “But the climate is such that it’s impossible for me to change that, because the record labels have established a sense of mistrust. So everything we’ve tried to do has been from the point of view of, ‘What would I want if I were a fan? How would I want to be treated?’ Now let’s work back from that. Let’s find a way for that to make sense and monetize it.”
How’s that for a customer service mantra? Try that for your library: What would you want, if you were a patron? How would you want to be treated? Work back from that, find a way for that to make sense.
Google is now indexing AND displaying magazines in Google Book Search! Here’s the blog entry where they describe it:
There’s no mention of a titles list, and there are clearly some limitations (check out Jet, for instance…they only have every 5th year of the mag). Popular Science is there, but only from 2000 through Feb 2008.
But in any case…it’s an interesting development. If Google decides not to provide a titles list, is anyone interested in crowdsourcing it? Where can we dump the resulting data so that it’s harvestable?
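If we did crowdsource it, one answer to the harvestability question is to agree on a dead-simple record format up front. Every field name below is an assumption sketched for discussion, not an existing schema.

```python
# Sketch of what one crowdsourced, harvestable titles-list record
# might look like. All field names are assumptions, not a real schema.
import json

record = {
    "title": "Popular Science",
    "source": "Google Book Search",
    "coverage": [{"start": "2000-01", "end": "2008-02"}],  # observed run
    "complete_run": False,
    "reported_by": "anonymous",  # hypothetical contributor handle
}

# Plain JSON keeps the data trivially harvestable by anyone.
serialized = json.dumps(record, sort_keys=True)
```

Anything that round-trips through JSON like this could be dumped in a shared repository and re-harvested by whoever wants to build on it.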
The old standard for the Encyclopedia, the Encyclopaedia Britannica, has just launched a new service called Britannica Webshare that is designed to pull the aging reference work into the 21st Century. It also proves the argument put forth by Chris Anderson in his article (and upcoming book) Free.
The central idea of Webshare is that Britannica is giving away free one-year subscriptions to its otherwise paywalled online service. But not just to anyone, no, no. They are giving a $0 subscription for one year to “Anyone who publishes regularly on the Internet—bloggers, webmasters, and writers who publish on the Web…”. You have to “apply” for the access, which implies some sort of winnowing of applications, although I applied and received an email with a login code within an hour. This code is a sort of coupon that gives you one year of free access to Britannica online, although you do have to fill out the normal registration information for Britannica after you’ve already applied for the free access…a sloppy way of handling the process. Even better, the Terms of Service that you must agree to for the account include things like:
So even though the free account exists to get Britannica content redistributed by blogs, in an attempt to gain mindshare on the ‘net against Wikipedia (please, we all know that’s what’s going on), they haven’t changed the Terms of Service, which would prohibit any blogger who makes any money from his or her blog (got ads? No Britannica for you!) from even using the service in the first place. I’m sure this is an “oversight” and that we’ll see some form of correction, but someone should have caught it in the first place.
Or worse, they really do mean it, and this is only for bloggers who don’t have any attempts at monetization going on. This blog is ad-free for now, but if I ever chose to use ads I certainly wouldn’t want to have to comb back through my blog to remove Britannica content from it. Oh, but you say “I’ll not put ads on my blog, so bully for me…I’ll use Britannica for all my encyclopedic blog entries.” The next paragraph in the Terms of Service says:
If you want to post, publish, or use content from (or contained within) the Services on your Web site or in any other Internet activity, you will need permission from Britannica, even though your Web site or Internet activity is free of charge.
Oh. Well then.
Which is it, Britannica? Do you want to push your product across the web via free access, or do you want us bound by your Terms of Service? Can’t have it both ways.
There’s also the tip-o-the-hat to Web 2.0 functionality with embeddable widgets for Britannica content, but the widgets are for content that Britannica chooses to package, not ones created by users. That is, they have pre-packaged widgets for a handful of subject areas, but I can’t go in and create my own. Not very 2.0, Brit.
In all, this is the right direction for Britannica to be going if they hope to ever be relevant in the 21st century, but they haven’t gone far enough. You need some serious added value at this point to compete. My suggestions: go free for public access, with ads for revenue generation; go paid for institutional access, and make it worth their $$ by building tools that make it easy for librarians to make patrons’ lives easier: widgets for use in Course Management Systems, subject page building built into the site, and customizable RSS feeds that people can pull into their own systems.
Today was my last day at Computers in Libraries 2008, even though the conference itself goes on through tomorrow. I fly out tomorrow early, in hopes of getting back to TN before dinner.
CiL is always a great conference. Like most conferences, it’s all about the people and the hallway conversations…not that the sessions weren’t great. For instance, the Academic Library 2.0 preconference was amazing. In all seriousness, there are a lot of very smart people doing very clever things, and a lot of them were at CiL. I’m honored and humbled to be able to hang out with some of them.
I’ll try and do a wrap-up post linking out to all the things I found most interesting later this week after I’ve had a chance to decompress.