Category Archives: Digital Culture

Must watch, right now

My favorite talk I’ve heard in a long, long time was at the LJ/SLJ Ebook Summit, from Eli Neiberger, about How Ebooks Affect Libraries. He’s recently put the recording and slides up on YouTube in two parts. Every librarian needs to watch these and take notes…Eli gets the issues, lays them out, and doesn’t pull punches. Love. This. Presentation.

Part One

Part Two

For the record, I haven’t forgotten about my ongoing discussion with Bobbi Newman (and now spreading across the web onto other blogs). I’ve still got lots to say, but I’ve been tied up. Will get the words out ASAP, promise.


Amazing article in the New York Times from Kevin Kelly about homeschooling his 8th-grade son, and the lessons learned from the effort. It’s full of great advice about technoliteracy and what everyone should know and learn about technology…here are my favorite bits:

  • Every new technology will bite back. The more powerful its gifts, the more powerfully it can be abused. Look for its costs.
  • Technologies improve so fast you should postpone getting anything you need until the last second. Get comfortable with the fact that anything you buy is already obsolete.
  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • Every technology is biased by its embedded defaults: what does it assume?
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

Just brilliant stuff. Read the whole article.

Seriously, AT&T?

In preparation for ordering the iPhone 4, I went about adjusting our AT&T plans this evening…the new tiered pricing actually works out for us, as Betsy rarely uses over 200MB of data a month. As I was switching her over, I read this amazingly silly EULA from AT&T:

DataPlus 200 MB for iPhone

Terms and Conditions

DataPlus 200MB for iPhone may only be used for the following purposes: (i) internet browsing, (ii) personal email, and (iii) consumer applications. Using iPhone to access corporate email, company intranet sites, and/or other business solutions/applications is prohibited.

Bwahahahaha. Corporate email is prohibited? WTF? Talk about your unenforceable EULAs…you can’t visit an intranet, for frak’s sake? Seriously, AT&T? Seriously?

And you wonder why people hate you.

Quick Office, not Goodreader

After some prodding from Glenn in the comments of my post on Goodreader and the iPad, it turns out that the security culprit doesn’t look like it’s Goodreader at all. It was port 4242 that gave it away, and much thanks to Glenn for pointing it out…I was too concerned with publishing fast and didn’t follow up on the details as well as I should have.

It looks like Goodreader lets you SEE any shared iPad on wifi, but it doesn’t share openly in the way that I described. The bad guy here appears to be QuickOffice, which DOES use port 4242 and share files by default across a shared wifi LAN. I could see in Goodreader the files that someone else had on their iPad in QuickOffice…not the normal set of events for the iOS devices, as the file systems are normally sandboxed to not allow that to happen.

So: revised security alert! If you use QuickOffice on your iOS device (iPhone, iPod Touch, iPad), please ensure that sharing is turned off, so that others aren’t able to see your stuff at all.
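If you want to check whether a device on your own network is exposing that sharing service, a quick TCP probe of port 4242 will tell you. Here’s a minimal Python sketch (the LAN address in the comment is a hypothetical example, and this only tests whether the port accepts connections, not what’s behind it):

```python
import socket

def port_open(host, port=4242, timeout=1.0):
    """Return True if the given TCP port on host accepts connections."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a device on your own wifi LAN (address is hypothetical)
# if port_open("192.168.1.23"):
#     print("Port 4242 is open -- file sharing may be exposed")
```

Only run something like this against devices you own or administer, of course.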

Interfaces, part 2

This distinction from the post below, that media can either be collapsed (Content, Container, and Interface as a single piece, as a book) or expanded (each separated, as in a DVD, remote, and screen) explains a bit about why the Touch interface is so visceral. The iPad feels different from other devices when you use it, and one of the reasons that I believe it does is that it collapses what have been expanded media types. With the iPad (and to a lesser degree, the iPhone, Android devices, Microsoft Surface, etc) you directly interact with the media and information you are working with. When you watch a video on the iPad, the Content, Container, and Interface are as-a-piece, and you interact with the video by touching the video itself.

This has a lot to do with the revolutionary feel of these new touch devices…and I think it explains why previous attempts at things like Tablet PCs may have failed.


I’m sure this isn’t an original thought (so very, very few are), but it was novel enough to me that I needed to write it down…and that’s pretty much what a blog is designed for.

I’ve written and talked about how libraries need to become comfortable with the containers of our new digital content, since as we move into the future the containers (ereader, iPad, tablet) will be important to users. We already know, more or less, how to deal with content. I’ve also been thinking about the interfaces that we use to access this content, and it just hit me:

Print is the only example of a medium where the User Interface, Content, and Container have been, historically, the same thing. With music and video, we are completely used to the container, the content, and the user interface each being distinct: we put a tape into a player, which we control with knobs or buttons, and the content itself is ethereal and amorphous. With print, until very recently, the content, container, and interface were all the same thing…a book, a magazine, a broadsheet, a newspaper. All are content, container, and interface wrapped into a single unit. This may point to one of the reasons that people seem to feel a deeper connection to print materials than to 8mm film or the cassette tape.

I’ve been thinking a lot about these distinctions between container, content, and interface….I think that these three concepts could inform the way that libraries conceptualize what we do, and maybe find better ways to do it.

More of me online

So for the last week or so I’ve been playing with feeding various content into this blog, testing some new tools, and trying to find a way to integrate a new Tumblr blog with Pattern Recognition in a way that I liked.

I’ve failed completely.

I’m just not happy with any of it, as none of the WordPress plugins that I’ve tried (FeedWordPress, WP-o-Matic) treat my Tumblr blog’s RSS properly, and after hacking away at custom post setups, I’ve decided that I like the idea of having two “blogs” on the net for now.
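For the curious, the thing those plugins kept mangling for me is conceptually simple: pull the items out of an RSS feed and turn them into posts. A minimal Python sketch using only the standard library, with a made-up sample feed standing in for the real Tumblr one:

```python
import xml.etree.ElementTree as ET

# A tiny stand-in for a Tumblr RSS feed (titles and links are made up).
SAMPLE_RSS = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Pareidolia</title>
  <item><title>A silly picture</title><link>http://example.com/1</link></item>
  <item><title>A short reflection</title><link>http://example.com/2</link></item>
</channel></rss>"""

def items(rss_text):
    """Extract (title, link) pairs from an RSS 2.0 feed string."""
    root = ET.fromstring(rss_text)
    return [(i.findtext("title"), i.findtext("link"))
            for i in root.iter("item")]

print(items(SAMPLE_RSS))
```

The hard part, as I found out, isn’t the parsing…it’s mapping the extracted items onto another platform’s post types without losing formatting.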

And so, here’s my plan: PatRec is staying the same…I like it as my occasional posting ground, and it’s going to remain my main blog headquarters. But there’s a ton of other stuff (personal, funny, or other) that just doesn’t fit in here. So for now, that other content is going to live over at Tumblr (RSS available here). I wanted to call it Apophenia, but someone already has all the Google Juice for that. Pareidolia is close enough. If you have any interest in the minutiae of my sense of humor, or just want to see another side of me, that’s where you’ll see it. Expect lots of silly pictures, YouTube videos, and short bits of personal reflection.

It may take me a few days to work out the information flow (what goes to Twitter, what goes to Friendfeed, etc). I’m still using Friendfeed as a “master feed” for my stuff online, so everything I do gets there eventually. One of these days I should post about my digital ecology….the flows and connections between all the stuff I have online. I’ll save that for my Top Secret new writing experiment, coming in January. 🙂

Sirsi-Dynix vs Open Source Software

There was a bit of a firestorm online this past weekend, when an Open Source position paper distributed by Sirsi-Dynix, and authored by Stephen Abram, hit the web. The paper was originally believed to be a “leak”, and was even published on Wikileaks before Abram linked to it directly from his blog and wrote an entry outlining why the paper was put forward to begin with. From his blog:

Some have expressed surprise about the position paper. Some call it FUD – fear, uncertainty and doubt. I call it critical thinking and constructive debate – something that everyone in libraries should embrace and engage in.

I do not hope to fully outline my thoughts about this here in a single post. Suffice it to say that I think the paper significantly mischaracterizes Open Source software in general, and Open Source library systems specifically. I am currently aware of three different efforts to annotate and answer the recently released paper, one of which I started in Google Docs in hopes of gathering refutations of the various points brought up in the Sirsi-Dynix piece. There is also an Etherpad collaborative refutation begun by Tim Spalding of LibraryThing, and the Code4Lib group’s version on their wiki.

I’m going to give just a few excerpts here, with brief responses. I respect Stephen a great deal, but even viewing this paper in the most charitable light, there are blatantly misleading statements scattered throughout. So, a few thoughts:

Nevertheless, it should be noted that it is rare for completely open source projects to be successful.

This is only true in the same way that saying “it is rare for projects to be successful” would be true. Many things fail…it’s just that in the open source world, you get to see the failures, whereas in a closed/proprietary world you don’t.

It is very unlikely that an open source solution is any less expensive than a proprietary solution. In fact, in all of the data SirsiDynix has collected, we are not seeing quotes that conflict with this assertion. Indeed there are very few green fields in the ILS marketplace. Most libraries already have an ILS and receive upgrades as part of their maintenance contract from us or other proprietary vendors. These maintenance contracts are a small percentage of the initial price.

I do not have numbers at my fingertips, but I feel very, very certain that if you actually calculated TCO (total cost of ownership) in any rational way, open source wins. Why? Because it’s a difference of where you choose to put your money…instead of paying for support, the typical library that moves to open source solutions has chosen to put its money into personnel. While short-term the cost structures may look similar, paying for a position is far, far more flexible than paying on a maintenance contract. You can’t get that contract to do other things you might need done, while a technical support position can be repurposed.

Plus, while maintenance contracts are “a small percentage of the initial price”, that doesn’t mean that they are in any way a small amount of money. MPOW is a small academic library, and what we pay in yearly maintenance would go a long, long way towards another staff position.
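To make the comparison concrete, here’s a back-of-the-envelope TCO sketch in Python. Every figure below is hypothetical, chosen purely to illustrate the shape of the calculation…plug in your own license, maintenance, migration, and salary numbers:

```python
def tco(upfront, yearly, years):
    """Total cost of ownership: one-time cost plus recurring annual cost."""
    return upfront + yearly * years

YEARS = 5
# Hypothetical figures -- substitute your own vendor quotes and salaries.
proprietary = tco(upfront=75_000, yearly=12_000, years=YEARS)  # license + maintenance contract
open_source = tco(upfront=30_000, yearly=21_000, years=YEARS)  # migration + partial support position

print(f"Proprietary over {YEARS} years: ${proprietary:,}")
print(f"Open source over {YEARS} years: ${open_source:,}")
```

The point isn’t the totals (here they come out equal by construction) but what the recurring line buys: the contract dollars are locked to one purpose, while the personnel dollars can be redirected wherever the library needs them.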

In many markets, there are major systems in accounting, intranets, e-learning, and so on that must tie in to the ILS. In many cases, open source is still the minority solution because, for example, the number of Linux desktops is meager compared to Microsoft Windows desktops. By choosing a Linux desktop, a user closes the door on some software because it may never be created for or ported to Linux. Add to this the major changes in allied systems that require an adaptation for the ILS and the issue grows exponentially.
So for libraries that choose an open source system, the opportunity to integrate different systems into the solution is limited, at best.

This is just a mess of an argument. Why would anyone knowingly choose any software solution that wasn’t compatible with the remainder of their infrastructure? And the advantage of an OSS solution is that the data is yours, and can be massaged into whatever format you’d like…you don’t have to wait on the vendor to decide to add the connector that you are looking for. This is just _wrong_, and I’m not even sure how you structure an argument like:

Windows is more popular than Linux on the desktop.
Some software doesn’t run on Linux.
Therefore, Open Source ILS solutions are bad for libraries.


Proprietary software has more features. Period. Proprietary software is much more user-friendly.

Proprietary software often does have more features…as an example, Microsoft Word has _thousands_ of features, compared to…oh, OpenOffice. But OpenOffice has the 20 features that cover 99% of the use-cases for word processing. Arguing that proprietary software has more features that no one will ever use doesn’t strike me as particularly persuasive.

And user-friendly? Again, that’s just a statement with no backing…I’ve used tons of proprietary software that had horrible usability. In my experience, it’s almost always the niche proprietary software designed for very specific solutions (like, oh…library systems) that has the worst usability of all.

I could spend many hours digging through this, but I’ll let the collaborative documents tell the rest of the tale. I completely agree with Stephen that all libraries should carefully examine their needs and resources when deciding on what solutions to move to. But this document paints with far too broad a brush, is misleading at best on many points, and simply fails any test of accuracy. I understand that this is a sales brochure, but I am disappointed at the tone taken….you can critically evaluate without hyperbolic statements like “jumping into open source would be dangerous, at best.” This is more Fox News than CNN, more USA Today than New York Times. I hadn’t expected more from Sirsi-Dynix, but I had hoped for more from Stephen Abram….whether that is fair or not.

I’ve embedded the Google Doc that I started below, but you should definitely check out both the Etherpad and the Code4Lib wiki to see how a large number of librarians are responding. Not everyone put in their thoughts, but the list of people with access to edit is: Nicole Engard, Chris Cormack, Toby Greenwalt, Kathryn Greenhill, Karen Schneider, Melissa Houlroyd, Tara Robertson, Dweaver, Lori Ayre, Heather Braum, Laura Crossett, Josh Neff, and a few others who have usernames that I can’t decipher. 🙂