Category Archives: Digital Culture

QuickOffice, not Goodreader

After some prodding from Glenn in the comments of my post on Goodreader and the iPad, it turns out that the security culprit doesn’t look like it’s Goodreader at all. It was port 4242 that gave it away, and many thanks to Glenn for pointing it out…I was too concerned with publishing fast and didn’t follow up on the details as well as I should have.

It looks like Goodreader lets you SEE any shared iPad on wifi, but it doesn’t share openly in the way that I described. The bad guy here appears to be QuickOffice, which DOES use port 4242 and shares files by default across a shared wifi LAN. I could see in Goodreader the files that someone else had on their iPad in QuickOffice…not the normal set of events for iOS devices, as the file systems are normally sandboxed to prevent exactly that.

So: revised security alert! If you use QuickOffice on your iOS device (iPhone, iPod Touch, iPad) please ensure that sharing is turned off, so that others aren’t able to see your stuff at all.
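If you want to verify that your own device isn’t exposing that port, a quick check from a laptop on the same wifi network will tell you. This is a minimal sketch, assuming your iPad’s LAN address (the `192.168.1.42` below is a placeholder, not a real address from this post):

```python
import socket

def port_open(host: str, port: int = 4242, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    QuickOffice's wifi file sharing listens on port 4242, so an open
    port here suggests sharing is still enabled on that device.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused or timed out: nothing listening on that port.
        return False

# Example (substitute your iPad's actual wifi address):
# port_open("192.168.1.42")
```

If the call returns `True` for your device, sharing is still on and anyone else on the LAN can see it too.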

Interfaces, part 2

This distinction from the post below, that media can either be collapsed (Content, Container, and Interface as a single piece, as in a book) or expanded (each separated, as in a DVD, remote, and screen), explains a bit about why the Touch interface is so visceral. The iPad feels different from other devices when you use it, and one of the reasons I believe it does is that it collapses what have been expanded media types. With the iPad (and to a lesser degree, the iPhone, Android devices, Microsoft Surface, etc) you directly interact with the media and information you are working with. When you watch a video on the iPad, the Content, Container, and Interface are of a piece, and you interact with the video by touching the video itself.

This has a lot to do with the revolutionary feel of these new touch devices…and I think it explains why previous attempts at things like Tablet PCs may have failed.


I’m sure this isn’t an original thought (so very, very few are), but it was novel enough to me that I needed to write it down…and that’s pretty much what a blog is designed for.

I’ve written and talked about how libraries need to become comfortable with the containers of our new digital content, since as we move into the future the containers (ereader, iPad, tablet) will be important to users. We already know, more or less, how to deal with content. I’ve also been thinking about the interfaces that we use to access this content, and it just hit me:

Print is the only example of a medium where the User Interface, Content, and Container have been, historically, the same thing. With music and video, we are completely used to the container, the content, and the user interface each being distinct: we put a tape into a player, which we control with knobs or buttons, and the content itself is ethereal and amorphous. With print, until very recently, the content, container, and interface were all the same thing…a book, a magazine, a broadsheet, a newspaper. All are content, container, and interface wrapped into a single unit. This may point to one of the reasons that people seem to feel a deeper connection to print materials than to 8mm film or the cassette tape.

I’ve been thinking a lot about these distinctions between container, content, and interface….I think that these three concepts could inform the way that libraries conceptualize what we do, and maybe find better ways to do it.

More of me online

So for the last week or so I’ve been playing with feeding various content into this blog, testing some new tools, and trying to find a way to integrate a new Tumblr blog with Pattern Recognition in a way that I liked.

I’ve failed completely.

I’m just not happy with any of it, as none of the WordPress plugins that I’ve tried (FeedWordPress, WP-o-Matic) treat my Tumblr blog RSS properly, and after hacking away at custom post setups, I’ve just decided that I like the idea of having two “blogs” on the net for now.

And so, here’s my plan: PatRec is staying the same…I like it as my occasional posting ground, and it’s going to remain my main blog headquarters. But there’s a ton of other stuff (personal, funny, or other) that just doesn’t fit in here. So for now, that other content is going to live over at Tumblr (RSS available here). I wanted to call it Apophenia, but someone already has all the Google Juice for that. Pareidolia is close enough. If you have any interest in the minutiae of my sense of humor or just want to see another side of me, that’s where you’ll see it. Expect lots of silly pictures, YouTube videos, and short bits of personal reflection.

It may take me a few days to work out the information flow (what goes to Twitter, what goes to Friendfeed, etc). I’m still using Friendfeed as a “master feed” for my stuff online, so everything I do gets there eventually. One of these days I should post about my digital ecology….the flows and connections between all the stuff I have online. I’ll save that for my Top Secret new writing experiment, coming in January. :-)

SirsiDynix vs Open Source Software

There was a bit of a firestorm online this past weekend, when an Open Source position paper distributed by SirsiDynix, and authored by Stephen Abram, hit the web. This paper was originally believed to be a “leak”, and was even published on Wikileaks before Abram put a link to it directly from his blog, and wrote an entry outlining why the paper was put forward to begin with. From his blog:

Some have expressed surprise about the position paper. Some call it FUD – fear, uncertainty and doubt. I call it critical thinking and constructive debate – something that everyone in libraries should embrace and engage in.

I do not hope to fully outline my thoughts about this here in a single post. Suffice it to say that I think the paper significantly mischaracterizes Open Source software in general, and Open Source library systems specifically. I am currently aware of three different efforts to annotate and answer the recently released paper, one of which I started in Google Docs in hopes of getting refutations together for the various points brought up in the SirsiDynix piece. There is also an Etherpad collaborative refutation begun by Tim Spalding of LibraryThing, and the Code4Lib group’s version on their wiki.

I’m going to give just a few excerpts here, and brief responses. I respect Stephen a great deal, but even viewing this paper in the loosest sorts of ways, there are just blatantly misleading statements scattered throughout. So, a few thoughts:

Nevertheless, it should be noted that it is rare for completely open source projects to be successful.

This is only true in the same way that saying “it is rare for projects to be successful” would be true. Many things fail…it’s just that in the open source world, you get to see the failures, whereas in a closed/proprietary world you don’t.

It is very unlikely that an open source solution is any less expensive than a proprietary solution. In fact, in all of the data SirsiDynix has collected, we are not seeing quotes that conflict with this assertion. Indeed there are very few green fields in the ILS marketplace. Most libraries already have an ILS and receive upgrades as part of their maintenance contract from us or other proprietary vendors. These maintenance contracts are a small percentage of the initial price.

I do not have numbers at my fingertips, but I feel very, very certain that if you actually calculated TCO in any rational way, open source wins. Why? Because it’s a difference of where you are choosing to put your money…instead of paying for support, the typical library that moves to open source solutions has chosen instead to put its money into personnel, and while short-term the cost structures may look similar, paying for a position is far, far more flexible than paying on a maintenance contract. You can’t get that contract to do other things you might need done, while a technical support position can be repurposed.

Plus, while maintenance contracts are “a small percentage of the initial price”, that doesn’t mean that they are in any way a small amount of money. MPOW is a small academic library, and what we pay in yearly maintenance would go a long, long way towards another staff position.
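To make the shape of that argument concrete, here is a back-of-the-envelope TCO comparison. Every figure below is hypothetical and illustrative only; none of them come from the paper, from SirsiDynix, or from MPOW’s actual budget:

```python
# Hypothetical five-year total-cost-of-ownership sketch.
# All dollar amounts are invented for illustration.
YEARS = 5

# Proprietary ILS: upfront license plus an annual maintenance contract.
proprietary_license = 100_000
proprietary_maintenance = 15_000   # per year; "small percentage" of license

# Open source ILS: no license fee, but migration costs and a share of a
# support/developer position (which, unlike a contract, can be repurposed).
oss_migration = 20_000             # one-time setup and data migration
oss_staff_share = 18_000           # per year

proprietary_tco = proprietary_license + proprietary_maintenance * YEARS
oss_tco = oss_migration + oss_staff_share * YEARS

print(proprietary_tco, oss_tco)  # 175000 110000 under these assumptions
```

The exact numbers matter far less than the structure: the open source column buys staff time that can do other work, while the proprietary column buys only the contract.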

In many markets, there are major systems in accounting, intranets, e-learning, and so on that must tie in to the ILS. In many cases, open source is still the minority solution because, for example, the number of Linux desktops is meager compared to Microsoft Windows desktops. By choosing a Linux desktop, a user closes the door on some software because it may never be created for or ported to Linux. Add to this the major changes in allied systems that require an adaptation for the ILS and the issue grows exponentially.
So for libraries that choose an open source system, the opportunity to integrate different systems into the solution is limited, at best.

This is just a mess of an argument. Why would anyone knowingly choose any software solution that wasn’t compatible with the remainder of their infrastructure? And the advantage of an OSS solution is that the data is yours, and can be massaged into whatever format you’d like…you don’t have to wait on the vendor to decide to add the connector that you are looking for. This is just _wrong_, and I’m not even sure how you structure an argument like:

Windows is more popular than Linux on the desktop.
Some software doesn’t run on Linux.
Therefore, Open Source ILS solutions are bad for libraries.


Proprietary software has more features. Period. Proprietary software is much more user-friendly.

Proprietary software often does have more features…as an example, Microsoft Word has _thousands_ of features, compared to…oh, OpenOffice. But OpenOffice has the 20 features that cover 99% of the use-cases for word processing. To argue that proprietary software has more features that no one will ever use doesn’t strike me as a particularly good argument.

And user-friendly? Again, that’s just a statement with no backing…I’ve used tons of proprietary software that had horrible usability. In my experience, it’s almost always the niche proprietary software designed for very specific solutions (like, oh…library systems) that has the worst usability of all.

I could spend many hours digging through this, but I’ll let the collaborative documents tell the rest of the tale. I completely agree with Stephen that all libraries should carefully examine their needs and resources when deciding on what solutions to move to. But this document paints with far too broad a brush, is misleading at best on many points, and simply fails any test of accuracy. I understand that this is a sales brochure, but I am disappointed at the tone taken….you can critically evaluate without hyperbolic statements like “jumping into open source would be dangerous, at best.” This is more Fox News than CNN, more USA Today than New York Times. I hadn’t expected more from SirsiDynix, but I had hoped for more from Stephen Abram….whether that is fair or not.

I’ve embedded the Google Doc that I started below, but you should definitely check out both the Etherpad and the Code4Lib wiki to see how a large number of librarians are responding. Not everyone put in their thoughts, but the list of people with access to edit is: Nicole Engard, Chris Cormack, Toby Greenwalt, Kathryn Greenhill, Karen Schneider, Melissa Houlroyd, Tara Robertson, Dweaver, Lori Ayre, Heather Braum, Laura Crossett, Josh Neff, and a few others who have usernames that I can’t decipher. :-)

Google Wave and Igor

For the BIGWIG Showcase this year, I talked about and put together a presentation on Google Wave, and what I think it will do to library services. One of the things I talked about was the ability for software robots to watch the Wave, and alter it in specific ways. Well, it looks like we’ve got our first bibliographic example of this, with Igor. Stew over at Flags & Lollipops has put together a robot that will watch a given Wave for mentions of citations, and then query and automagically fill in footnotes from PubMed, Connotea, or CiteULike (for now; I’m sure that Zotero and other coverage is easily possible).

I’ve got no idea how he did this, given that Wave isn’t public yet…but the demo shows what’s going to be possible with Wave. Take a look, and get ready….Wave might change everything. You may need to click through and enlarge the player to really see what’s going on.

Igor – a Google Wave robot to manage your references from Stew Fnl on Vimeo.

Igor is a robot for Google Wave written in Java and running on Google App Engine.

It allows users to pull in references from PubMed & personal libraries on Connotea or CiteULike by querying services with keywords that they supply inline with the article you’re writing.
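The core trick, stripped of the Wave and App Engine plumbing, is just watching text for an inline trigger and swapping in a formatted reference. Here is a minimal sketch of that idea; the `[cite:...]` syntax and the in-memory lookup table are my own assumptions for illustration, not Igor’s actual trigger or its real PubMed/Connotea/CiteULike queries:

```python
import re

# Hypothetical stand-in for a networked PubMed/Connotea/CiteULike query.
MOCK_LIBRARY = {
    "cluetrain 1999": "Levine et al., The Cluetrain Manifesto, 1999.",
}

def expand_citations(text: str, lookup=MOCK_LIBRARY.get) -> str:
    """Replace inline [cite:keywords] markers with formatted references.

    Unrecognized keywords are left untouched, the way a robot would
    leave a marker alone until the lookup service returns a match.
    """
    def repl(match: re.Match) -> str:
        ref = lookup(match.group(1).strip())
        return f"({ref})" if ref else match.group(0)

    return re.sub(r"\[cite:([^\]]+)\]", repl, text)
```

A real Wave robot would run this kind of substitution every time the Wave's content changed, which is what makes the footnotes appear "automagically" as you type.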


Just found an awesome new reference tool…LyricRat is a site that will take a snippet of lyrics that you give it, and then tell you the song, album, artist that the lyrics are from.

My favorite bit? If you tweet a lyric to @lyricrat, they will reply with the song and a link to the lyricrat site!

So very cool, and easy to use. Huge fan of services like this that provide a service in an almost completely transparent way: no sign up, no log in, no barriers.

CluetrainPlus10 – Thesis 17

Companies that assume online markets are the same markets that used to watch their ads on television are kidding themselves.

As many will probably say about The Cluetrain Manifesto, it’s almost scary how prescient it was. To put it into perspective, when the authors were writing Cluetrain, Google had less than a dozen employees and had just moved out of a garage. The word “blog” had yet to be used to describe a chronological website. Napster hadn’t shattered the media industry yet. And statistics put the number of people on the Internet at just about 150 million, or around 10% of the current number.

Christopher Locke, Rick Levine, Doc Searls, and David Weinberger put together an amazing set of principles that are even more relevant today than they were 10 years ago.  The sad part about Thesis 17, in particular, is that companies haven’t yet learned this lesson. Some of them are trying, with standouts like Zappos. But far too many companies are failing to see the benefits of participatory marketing and extreme customer service.

The market is no longer passive. Almost no one under the age of 35 these days interacts with products in the way the older generation did…we expect to be involved in our consumption, connected to it. We ask friends, we poll our social networks, we take recommendations of the people we know very seriously. We have to love both the object and the process or we just don’t buy. And loving means becoming involved, knowing more, interacting with the makers, asking questions, and otherwise being active.

We want a relationship with our products, and producers who try to feed us advertising may be ok short-term, but the days of the passive are over. The new market is fragmented and participatory, and content producers will have to adjust or die. Making a better product isn’t enough. The companies that will thrive in the coming years are the ones that understand and cultivate the one-to-one relationships with their customers and their potential customers.

This post is a part of the larger CluetrainPlus10 project. Follow other reflections on the Cluetrain there!