Tag Archives: web2.0

Last thoughts on SWIFT at CiL2008

Just wanted to wrap up a few thoughts I had after sitting through the “input session” organized by ITI and The Otter Group on the whole SWIFT/CiL thing. Several really important points came up during that session that I felt needed to be pulled out for comment.

First was Ryan Deschamps’ comment during the (admittedly somewhat tense) discussion. Paraphrased, it was “you don’t just have to be good, you have to be better than me.” This wasn’t said egotistically; it was just to point out that any particular tool, especially a commercial one, has to be better in significant ways than the tools that are available for free. The tool also has to do something that individual attendees of the conference can’t do as easily or as quickly by themselves. While I’m quite sure that not everyone at Computers in Libraries is as talented as Ryan, I’m equally sure that anything he or any of the other seriously talented people at the input session built would be sharable and community driven. As Michael Sauers pointed out on llyfrgellydd.info, several presenters created tools for free, for the hell of it, that ended up being huge drivers of the conference. SWIFT has to be better than them, and it’s not.

Second was the issue I have with the perceived audience of this product. The product is marketed to people who use tools that rely on tags as metadata…Flickr, blogs, del.icio.us, etc. It has to have tags, by necessity, in order to pull all the disparate pieces together. But the very people using those services are the people who don’t need SWIFT. The Otter Group developed a platform that is useless for the very people who must use it for it to work.

Third and last is what the session turned out to be. Meredith Farkas has, as usual, a thoughtful post with her take on the session, and comments on the very real tension in the room. I think the tension was a result of the clash between expectation and implementation…we expected an actual feedback session, and we got a sales pitch. Meredith got to the party a little late, and might have missed the fascinating anecdote about where Kathleen got the name for the product (SWIFT is named after a bird!).

We. Don’t. Care. We use products called things like ooVoo, Tumblr, Hulu, and Twitter. Clearly names are not at the top of our list when we choose products or services. We didn’t care about the history of the product, nor even really about its intended use. The street finds its own uses. The point of Web 2.0 and Library 2.0 is to provide tools.

Several people in the room commented on the fact that The Otter Group seemed not at all interested in really hearing about the problems with the product. Everything was blamed on “being beta”, or on the lawyers, or something. My take is that they just don’t get the social web, as hard as they have tried and as much history as they have trying to make it a commercial product. They fell hard once with their ALA Bootcamp, and if possible fell even harder with CiL2008 and SWIFT.

Oh, and since I know that eventually Kathleen and The Otter Group will see this: Who won the Wii?

Some clarification re:conferences

Just some quick thoughts about my recent posts, as responses to comments.

Todd said:

Virtual meetings are all very nice, but it’s never the same as meeting people in person. In addition, real-life conferences give you a chance to see places you might not have gone otherwise…virtual conferences mean you stay right where you are.

I completely agree. My suggestions to virtualize aspects of the conference experience are not intended to downplay the value of face-to-face meetings. It’s just that we need to recognize that F2F isn’t necessary for communication to happen…it brings bonuses, but for the purposes of conducting business (like, for instance, the ALA Midwinter meeting), virtual is the appropriate realm.

Tom Hogan said:

Jason, thanks for attending Internet Librarian in Monterey, and I hope you found it worthwhile. As I mentioned in my little welcome speech, we had a record turnout this year, which is due primarily to the variety and excellence of the presentations from people dedicated to the information profession.

I’m sure that online conferences have their place, but I have to agree with Todd that people still want to meet in person from time to time. As long as that holds true, Information Today will continue to organize them. Regards.

Hi Tom! I was one of your “people dedicated to the information profession” who presented at IL this year. I was one of the faculty for what I believe was the largest preconference at IL.

Again, I’m not saying that virtual is done at the expense of F2F. Ideally it should be done in conjunction with…the very fact that we are talking as if there is a dichotomy between the two is ludicrous.

The truth is that the best librarians I know are both virtual and physical, all the time. They are connected, and consider their proprioceptive virtual self without effort. They are the librarians that you saw at IL who were sitting next to each other, blogging the session, updating Twitter, and IM’ing the person next to them with comments about the session. To pretend that being at IL “in real life” precludes a virtual component is to miss the forest for the trees.

My point is that we also should be reversing this equation: we should be making the virtual a significant and integral part of the ongoing F2F conference experience. The fact that at a conference called Internet Librarian we still have physical pieces of paper for people to sign up for dinners around town is, to put it mildly, amusing.

Library Building 2.0

As a follow-up to my previous post discussing the current mania at MPOW regarding our new library building, I can now share with the world our wiki:

UTC Library Building Project

So far, I’m thrilled with the way this is coming together. Using 2.0 tools to put this project in motion has saved us enormous amounts of time, and just allowed us to do things that couldn’t have been done before. Tagging Flickr photos to let the designers know which chairs you like? Annotating video of your site visits so that the architect can see just what that reference area has that you want to mimic? Brilliant!

Anyway, we’re making this entire process as transparent as possible, to the point of actually rejecting mechanisms if they aren’t transparent. We’re committed, so it’s time to see what the rest of the world thinks.

We’re on track to complete our program plan in April. The external committee has just been formed, and will meet for the first time next week. Wish us luck, and let me know if you have a cool idea for us to use, or just tips for moving forward with the project.

Information R/evolution

Just another amazing video from the maker of The Machine is Us/ing us. Digital information, as much as we like to treat it like paper, is just different.

The sooner librarians get their heads around this, the better for our patrons. I’m trying desperately to wrap my head around how this influences our new library building…

Google Book Search: My Library

Google has made an interesting move with Book Search…they just added a “My Library” component, which allows you to catalog your home library using Google.

Now, if you do a search in Google Books, one of the options is “Add to My Library”:

Google My Library

If you click the link, and are logged into Google, it starts your collection:

Google My Library 2

The links on the side give the option to import/export your library, but the import option is woefully weak…it only allows you to paste in a list of ISBNs. No CSV or delimited files, no XML, no other formal metadata. Just ISBNs.
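For what it’s worth, even the ISBN-only import is scriptable. Here’s a minimal sketch that turns a CSV export from some other cataloging tool into a paste-ready ISBN list; the column name “isbn” is my assumption, so match it to whatever your tool actually exports:

```python
import csv

def isbns_from_csv(path, column="isbn"):
    """Extract a newline-separated ISBN list suitable for pasting
    into Google My Library's import box.

    The column name "isbn" is a guess; pass whatever header your
    exporting tool actually uses."""
    isbns = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            # Strip hyphens and whitespace so only bare ISBNs remain.
            value = row.get(column, "").replace("-", "").strip()
            if value:
                isbns.append(value)
    return "\n".join(isbns)
```

Hardly elegant, but it beats retyping a few hundred ISBNs by hand.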

Export is possibly even worse. Google My Library exports an XML file with the following structure:

<title>Pattern Recognition</title>
<contributor>William Gibson</contributor>

Google? What’s with the non-existent metadata? I can do better at Amazon, not to mention a real library tool like LibraryThing.
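Still, if you want your data back out, even that skeletal export is parseable. A sketch of reading the title/contributor pairs follows; note that the wrapping element names (“library” and “book”) are my guesses, since only the two fields shown above actually appear in the snippet:

```python
import xml.etree.ElementTree as ET

def parse_export(xml_text):
    """Read Google My Library's XML export into a list of dicts.

    The container element names ("library"/"book") are assumptions;
    only <title> and <contributor> are confirmed by the export."""
    root = ET.fromstring(xml_text)
    books = []
    for book in root:
        books.append({
            "title": book.findtext("title"),
            "contributor": book.findtext("contributor"),
        })
    return books
```

Two fields per book isn’t much to work with, which is rather the point.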

Google My Library also has the ability to display just the cover view of your library, but there don’t appear to be any ordering/sorting options…although it will limit a search to just your library, it would still be nice to be able to set an order. How about some faceted browsing, Google?

Google My Library Cover View

This is an interesting product from Google. It is yet another set of information they can use to target advertisements (if they know the contents of your library and 982734987234 other people, they can cross reference that and target ads). But as a product from the consumer’s view, this seems way less useful than LibraryThing, which has given serious thought to what people want to do with their own books, and gives a nearly obsessive number of tools to the user.

On the other hand, this is Google. They are likely to gather a huge number of users from their existing base, even when there may be better tools out there for the given job. Haven’t seen this over at the Thingology blog…Tim, what do you think about this?

Shirky FTW!

Clay Shirky with a brilliant, well-reasoned reply to Gorman, for the win!

He even makes the same analogy I did regarding translations of the Bible. 🙂

If Gorman were looking at Web 2.0 and wondering how print culture could aspire to that level of accessibility, he would be doing something to bridge the gap he laments. Instead, he insists that the historical mediators of access “…promote intellectual development by exercising judgment and expertise to make the task of the seeker of knowledge easier.” This is the argument Catholic priests made to the operators of printing presses against publishing translations of the Bible — the laity shouldn’t have direct access to the source material, because they won’t understand it properly without us. Gorman offers no hint as to why direct access was an improvement when created by the printing press then but a degradation when created by the computer. Despite the high-minded tone, Gorman’s ultimate sentiment is no different from that of everyone from music executives to newspaper publishers: Old revolutions good, new revolutions bad.

More excellent responses at:

UPDATE: Clay Shirky (and Gorman by association) makes BoingBoing.

Gorman, again

Karen Schneider twittered the latest Michael Gorman insanity, written on (no surprise here) the Britannica Blog. Long time readers of this blog might remember that I’ve publicly disagreed with Gorman on a number of things, and this latest rant isn’t any different.

In Web 2.0: The Sleep of Reason, Gorman rambles over the landscape of authority, truth, and web 2.0 like a lost puppy, not quite sure where he’s supposed to be going, but sure he has a destination. And that destination is TRUTH. I believe that he has no idea what he is talking about re: Web 2.0, and that his article clearly illustrates the significance of his misunderstanding.

Let’s begin with some examinations of his quotes, shall we? The opening paragraph is a doozy:

The life of the mind in the age of Web 2.0 suffers, in many ways, from an increase in credulity and an associated flight from expertise. Bloggers are called “citizen journalists”; alternatives to Western medicine are increasingly popular, though we can thank our stars there is no discernable “citizen surgeon” movement; millions of Americans are believers in Biblical inerrancy—the belief that every word in the Bible is both true and the literal word of God, something that, among other things, pits faith against carbon dating; and, scientific truths on such matters as medical research, accepted by all mainstream scientists, are rejected by substantial numbers of citizens and many in politics.

I suppose we’d be better off, Michael, if journalists were required to get a governmental approval pass before they could write? The US has a long history of “citizen journalism”…if Thomas Paine were alive today, he’d have a blog.

And to equate the social movement inherent in Web 2.0 with creationism and alternative medicine is not only a category mistake of the largest sort, it is also just insane. It isn’t that there is a “flight from expertise”, Mike…it’s that we are re-defining “expert”. You sound like the Catholic loyalists railing against the Protestant movement…only the priests are allowed to talk to God! Bibles will only be printed in Latin!

The fact that information changes form or source has no effect on its Truth. Truth judgments arise because the information itself is reflective of the world at large, testable and reproducible in the case of claims about the world (scientific claims) and verifiable in the case of claims about information itself. The goddamn source of the information has absolutely no bearing on the truth of it. None. Zero. Nada. Zilch.

Ah, but Mike has a bit about that:

Print does not necessarily bestow authenticity, and an increasing number of digital resources do not, by themselves, reflect an increase in expertise. The task before us is to extend into the digital world the virtues of authenticity, expertise, and scholarly apparatus that have evolved over the 500 years of print, virtues often absent in the manuscript age that preceded print.

The reason that the “scholarly apparatus” evolved isn’t because of some desire to desperately produce only the best knowledge…it evolved because of economic pressures. In print, not everything can exist. Print costs money, and in the world of the academic the things we put our financial faith in, mostly, are things that pass the “scholarly test” of peer review. We have to have some limiting process because there is only so much money, NOT because the process itself is holy.

In the digital world, money is often the least of the concerns of information production. That simply means that we have to critically examine each piece of information as it lies within the web of knowledge, and draw coherence lines between the pieces. But we don’t want to get bogged down in the old way of doing things just because it worked in print. Digital is different, and demands different processes and analysis.

The structures of scholarship and learning are based on respect for individuality and the authentic expression of individual personalities. The person who creates knowledge or literature matters as much as the knowledge or the literature itself. The manner in which that individual expresses knowledge matters too.

Ummm…no? After holding up the Scientific Method so often in his article, you’d think he’d understand it a bit more. The point of the scientific method is to eliminate the person and make it about the knowledge, writ pure. The person does not matter, cannot matter when it comes to the expression of the knowledge…keep in mind, we aren’t talking about the native intelligence necessary to invent or have insight. We’re talking about the information itself.

This is a rambling, nearly incoherent piece of writing when you try to connect logical lines between his arguments. He moves from comparing Web 2.0 to Creationism, to how his research on Goya done via print is the best way to do it, to comparisons between Web 2.0 and Maoism, to finally accusations of antihumanism.

I can’t decide if the whole article is best described as a Straw Man, Questionable Cause, or if it’s just one enormous fallacy of Appeal to Authority (yes, Virginia, that is a faulty method of thinking).

And let’s not ignore the final indignity: this is an essay decrying Web 2.0 posted on a blog, with multiple RSS feeds, and a Share This section for adding it to del.icio.us, Furl, Reddit, and Digg.

Irony, much?