Categories
Legal Issues Media MPOW Music Technology

iTunes and Libraries question

In thinking about Michael Sauers’ recent brilliant post on cataloging Creative Commons works, I’m considering setting up an iTunes instance on our student network in MPOW. On that system, we could load…well, that’s the crux of this post. Long-time readers of this blog know my stance on copyright, and that I keep up with the latest issues, especially vis-à-vis digital copyright. I could, at the very least, load CC-licensed music on this system. But what else?

So, I ask you, blogosphere: What can I legally load on that iTunes instance? It would be openly shared, streamable to anyone connected to our student network…but, as anyone who has used iTunes knows, not downloadable. Can I load the majority of the library’s music collection on that machine? Why not? If it is legal for me as a private citizen to rip my purchased music to digital form (yes, I realize that not everyone thinks this is legal, but it is the current position held by most copyright thinkers), then why would it not be legal for “me” as a library? Once ripped, can it possibly be illegal for me to use functionality that iTunes has built into it?

Is anyone out there doing this? It would mean that every student could stream any of our music collection from any computer with iTunes as long as they were connected to our network…which would, of course, be any computer in the library (or their own computer).

Once more, oh blogosphere, I ask you: what’s wrong with this idea?

Categories
Books Digital Culture Library Issues Media Music Technology

The new information economy

Over the course of the last 20 years, there has been a radical shift in the economies of information. We’ve moved from a world in which information was plentiful but distributed and difficult to find to a world where information is even more plentiful, but ubiquitous and easy to find. Libraries are suffering now as a result of their inability or unwillingness to change based on the new methods of information indexing, exchange, and archiving.

Libraries were a central part of the public sphere because of that information imbalance. Most libraries have moved to a new model that emphasizes access and comfort instead of being the storehouse of knowledge they once were. Access is something that libraries have on their side, because information, in defiance of the normal rules of supply and demand, still insists on being expensive.

Prepare for another shift, because the next 5-10 years are going to change the rules again.

Chris Anderson, in the latest Wired magazine, outlines the next information revolution: Free.

The rise of “freeconomics” is being driven by the underlying technologies that power the Web. Just as Moore’s law dictates that a unit of processing power halves in price every 18 months, the price of bandwidth and storage is dropping even faster. Which is to say, the trend lines that determine the cost of doing business online all point the same way: to zero.
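Anderson’s claim is simple exponential decay, which is easy to see with some back-of-the-envelope numbers. A minimal sketch, assuming a made-up $100 starting price (the function name and figures are mine, not Anderson’s):

```python
# Illustration of the cost curve Anderson describes: a unit of capacity
# that halves in price every fixed period follows exponential decay.
# The starting price and time spans below are made-up example figures.

def price_after(start_price: float, months: float, halving_months: float = 18.0) -> float:
    """Price of a fixed unit of capacity after `months`, if the price
    halves every `halving_months` months (Moore's-law-style decay)."""
    return start_price * 0.5 ** (months / halving_months)

# A $100 unit of processing power, halving every 18 months:
print(price_after(100.0, 18))   # half price after one period: 50.0
print(price_after(100.0, 90))   # five periods later: 3.125, about 3% of the original
```

Bandwidth and storage, which Anderson says are falling even faster, would just use a shorter halving period; either way, the curve heads toward zero.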

Anderson outlines his argument in the context of business, but his points really show us that the near-zero cost of information delivery is the core of the revolution. Of course, the fact that the delivery is free does not immediately mean that the information being delivered is free…that change arises from more traditional competitive pressure. What are traditional information services like books, movies, and television competing with these days? They are competing with free, easily available, highly portable, and in nearly every way more useful unauthorized versions of themselves.

When customers look at the following options, what do you think they choose?

Buying TV shows on iTunes, where they can watch them on their authorized computers and iPods, but not on their Zune or PSP or anywhere else they might want, OR downloading a .torrent of their favorite TV show that is higher in quality than the iTunes download and that they can watch anywhere they want.

Buying an audiobook from Audible, which has limited playability on only approved devices, or grabbing a P2P copy of that audiobook with no limitations (and no price).

Reading a book embedded in your browser on HarperCollins’ website is one option. Another is the Tor model, where once a week they provide a free book, in multiple formats (PDF, HTML, MOBI), for you to do with as you will: move it to an ebook reader, read it on your computer, put it on your cellphone. Another option is the library.

It’s obvious that things that are free have an immediate advantage, and libraries have been free for a very long time in the US. But even free vs. free has its calculus. If we look at the above examples, it’s very important for libraries to realize that they aren’t competing with iTunes and Audible. They are competing with .torrents and other P2P technologies that disintermediate the information distribution process.

But even free has choices: One example is Hulu, the beta site for NBC/Fox/etc. They pulled their shows from YouTube, citing copyright violations, and launched Hulu, where they can control the message and availability. Then there is OpenHulu, a site that scrapes Hulu and provides the ability to watch the same shows with no login or account creation. Yet another choice is the aforementioned Torrent or other P2P distribution, where there are no commercials, no requirement to stream instead of download, and the ability to watch them on the device of your choice. The advantage of Hulu and OpenHulu over torrenting is instant gratification. Which wins?

So when there are two freely available sources for information, what drives choice? Lots of different aspects of the interaction between the patron and the information make the difference. Ease of use. Availability. Speed. Quality. Brand recognition. Marketing.

Anderson points out that free is the future of commerce, and I absolutely see it as the future of media and information generally. How, then, do libraries compete in a world where their major advantage is completely nullified? What do we bring to the new information economy? We need to be planning and implementing now to have any hope of competing.

I think I know some of the ways we compete, but that’s another post. What do you think we can do to stay relevant?

Categories
Library Issues Technology

Win a Wii, donate to an amazing cause


Jenny and Michael, in a fit of brilliance, have set up a Win a Wii donation drive for Blake Carver, who runs LISHost, my very favorite internet hosting service. If you read a library-focused blog, there’s a better than average chance that Blake is hosting it…including Pattern Recognition.

So: go and donate, and maybe win a Wii!

Categories
Library Issues Technology

Autonomous Self-Checkout Idea

Here’s an idea I had today that I wanted to get down so I don’t forget it…autonomous self-checkout with cell phones. Here’s the idea:

You write a web service that logs the customer into their account from their cell phone browser, and then takes over the camera on their phone. They point the camera at a barcode on the book in question, and your software looks it up in the catalog and checks it out to the patron.
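The core of that flow can be sketched in a few lines. This is a hypothetical illustration only: the `CATALOG` and `LOANS` dicts stand in for the ILS, and a real service would decode the barcode image and call the catalog’s circulation API instead.

```python
# Hypothetical sketch of the scan-to-checkout flow described above.
# CATALOG and LOANS are in-memory stand-ins for the library's ILS.

CATALOG = {
    "31234000123456": {"title": "Pattern Recognition", "available": True},
}
LOANS = {}  # item barcode -> patron ID

def checkout(barcode: str, patron_id: str) -> bool:
    """Look up a scanned item barcode and, if the item is available,
    check it out to the logged-in patron. Returns True on success."""
    item = CATALOG.get(barcode)
    if item is None or not item["available"]:
        return False
    item["available"] = False
    LOANS[barcode] = patron_id
    # This is also where a catalog-aware security system would be told
    # to let this particular item through the gates.
    return True
```

The interesting design question is the last comment: the checkout record, not a deactivated strip, becomes what the security gate trusts.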

The difficult part for the library is how to enable the deactivation of the security strips that most of us use…ideally, the security system would be tied to the catalog, and would know when a book was checked out and when it wasn’t, and alarm only as appropriate.

This would take library staff completely out of the checkout process (which self-checkout already does) but would ALSO take any specialized equipment out, and allow for nearly complete patron autonomy in the stacks.

The interesting thing is, I’m pretty sure that all of this is possible with current open source software. Certainly there would need to be some development, but I don’t think anything would have to be completely written from scratch…maybe connectors that transfer data from one system to the other.

Thoughts? Is this being done anywhere? Or did I actually have an original thought?

Categories
Digital Culture Library Issues Technology

I really hate Zotero

There, I said it. Zotero should warm the heart of any academic, but somehow it escapes me. I’ve been loath to admit it for a long time, especially since I was part of the beta, and tried it for a long time. Plus, it’s exactly the sort of tool that I should really love.

Except I don’t.

Why not? Well, after examining my prejudice, I came to one conclusion: I no longer have any patience with applications that are local. Unless the application I want AND my data live in the cloud, I just won’t use it. I’ve found myself, over the last six months to a year, moving nearly everything I do online. Documents are created with Google Docs, I prefer Gmail to any local mail client I’ve tried, heck, I’ve even started using Flickr’s editing deal with Picnik to do my photo edits, and I luuuuuuurve me some Photoshop.

What’s up with this change? I really only use two computers: my work PC and my MacBook. It wouldn’t be that hard to use local programs and sync my documents. The problem is that it’s any effort at all. Syncing my documents shouldn’t be something that I think about; it should just happen…Apple nearly has it right with .Mac syncing, but the PC world just doesn’t operate like that without some serious effort on the user’s part. If Apple would move hard into this space, perhaps with Google as a partner…I think they could revolutionize computing yet again, especially if they leveraged their media power as part of the cloud storage.

But I digress…

After using Zotero for a while, I found myself cursing the fact that I had two different databases of information…the “MacBook” stuff and the “desktop” stuff. This is why the third lobe of my brain is del.icio.us…I don’t have to think about where I might need that information. It just goes to the cloud, and I pull it down no matter where in the world I may be. I know that Zotero has listed on its homepage:

Remote library backup
Shared collections
Access your library from anywhere via the web

Give me that, and maybe it becomes a tool that is useful to me. But until then, local just doesn’t cut it anymore.

Categories
Digital Culture Images Technology

Product Review: Eye-Fi

As a part of having a new Eliza around, we went shopping for a new digital camera. I asked a bunch of friends what digital camera they used and liked, and matched those suggestions against my list of requirements. I wanted something that would do both still and video well, preferably HD video for future-proofing, and had decent pocketability.

After a failed attempt at locating the Sanyo Xacti model that I wanted, I discovered the Canon TX-1. We’ve loved our Canons from the past…our last two cameras were Canons: one of the original Elphs, back when they were APS film-based, and then our most recent digital, a 4-megapixel Elph. With our history with Canon, plus the TX-1’s optical image stabilization, we decided to give up on the Xacti and just go with the Canon.

But that’s not the product I wanted to sing the praises of in this post. No, I’m beyond thrilled with the memory card I bought to go with the TX-1. Yep, the memory card…if your digital camera takes SD cards, you should immediately buy one of these:

eyefi

A brilliant little piece of tech, the Eye-Fi combines a 2 GB SD card with built-in wifi, giving any camera the ability to automatically upload the pictures you take to your computer, to any of dozens of web photo sites, or both. The card comes with a little USB dock and carries on board the software necessary to tie the card to your computer and to a website. You plug it into your computer, and it walks you through linking the card, your wifi access point, the computer you’re on, and the website in question.

Once it is set up, the process is simply: take a picture, and…that’s it. As long as your camera stays on, it will upload your pics to your computer and to the web automagically. Now, there are limits, and ways I would love the product to behave that it doesn’t.

For one, the card attaches itself to your wifi point…not to your computer. I would vastly prefer the card to attach via an ad-hoc network to my computer, and then have the computer do the heavy lifting to the web. That way it would work even when I was traveling, without having to logon to the Eye-Fi manager website and manually change the settings. You can make it work now, but it’s less easy than I’d like.

Still, if you do most of your uploading from home, it doesn’t get any easier than this.

Categories
Technology

Updated feeds

For those following along via feedreader, you may notice a temporary hiccup while I transition my feeds through Feedburner. I believe this will happen automagically, and that no one should notice a thing…but I’ve been wrong before.

Thanks for subscribing, and if you notice anything truly strange, please leave a comment and let me know. I hope to be back to regularly scheduled blogging very shortly.

Categories
Technology

Google launches OpenSocial

Huge news for social networking on the web. Google has launched OpenSocial, a set of common tools/APIs that allows existing social networking sites to talk to each other and exchange information. For librarians: think of this like…a new metadata standard that allows MySpace to talk to Facebook. Potentially.

Brilliant, revolutionary, and potentially genre-busting. We’ll see who decides to implement it, but it won’t take much for it to become a de facto standard among networks.

Categories
Digital Culture Library Issues Technology

Information R/evolution

Just another amazing video from the maker of The Machine is Us/ing Us. Digital information, as much as we like to treat it like paper, is just different.

The sooner librarians get their heads around this, the better for our patrons. I’m trying desperately to wrap my head around how this influences our new library building…

Categories
Library Issues Technology

Academic Library 2.0

In my last conference attendance for some time, I’ll be at Internet Librarian 2007 at the end of this month. It’s been a long time since I’ve been so excited about a conference, and it’s not just that it’s in Monterey in October. It’s that I have a chance to lead a workshop with the most exciting bunch of librarians I know:

Workshop 11 — Academic Library 2.0
9:00 a.m. – 4:30 p.m. FULL DAY
MODERATOR:
Amanda Etches-Johnson, User Experience Librarian, McMaster University
FACULTY:
Chad Boeninger, Reference & Instruction Technology Coordinator, Ohio University
Michelle Boule, Social Science Librarian, University of Houston
Meredith Farkas, Distance Learning Librarian, Norwich University
Jason Griffey, Head of Library Information Technology, The University of Tennessee at Chattanooga

What do the terms Web 2.0 and Library 2.0 mean for academic libraries and librarians? Join our panel of 2.0 practitioners and experts for a day of exploration and discovery as we navigate the 2.0 landscape, exploring what 2.0 tools and technologies can do for academic library users. Through a combination of presentations, discussion, and hands-on activities, our dynamic speakers introduce you to technologies such as blogs, wikis, RSS, mashups, social bookmarking, and online social networks. This interactive session provides practical examples of academic libraries that are using these tools and technologies, arms you with the expertise and techniques to introduce these technologies in your own library, and shares strategies for getting buy-in from staff, administration, and patrons. A worthwhile day for those interested in implementing changes to keep up in the Web 2.0 world.

If you know anyone attending IL for a preconference, this one is going to knock their socks off.