Google researchers have developed a deep-learning system designed to help computers better identify and isolate individual voices within a noisy environment. As noted in a post on the company’s Google Research Blog this week, a team within the tech giant attempted to replicate the cocktail party effect, or the human brain’s ability to focus on one source of audio while filtering out others—just as you would while talking to a friend at a party.
Hello, surveillance state!
Engineers have previously investigated the possibility of having a camera sensor power itself with the same light that falls on it. After all, it’s basically just two different functions of a photovoltaic cell — one stores the energy that falls on it while the other records how much energy fell on it.
Ok, I totally want my own personal droid. It’s kind of silly how much I’d like to have this if I were in a city.
This past month I traveled to a place I wasn’t sure I’d ever visit…Doha, Qatar. I was brought to Doha for an awesome reason: to deliver the Gloriana St. Clair Distinguished Lecture in 21st Century Librarianship. The topic I was asked to prepare remarks on was blockchain (which I chose to broadly construe as decentralized technologies) and how it (they) might matter to the information professions in the near future. The actual title of my talk was Decentralization & Blockchain: Possibilities & Problematizations for Libraries, and the goal was to explain the technology, but also to bring to light the potentials and risks that surround blockchain and decentralization technologies as they relate to libraries and information systems. There is a huge amount of potential in this technology, beyond the fintech hype and insanity of the moment. There is also risk, especially for organizations that are centered around the very notion of centralization of resources.
Here’s my lecture, along with the accompanying slides below it. If your consortium or company is interested in possibilities for blockchain in the information space and is looking for a consultant to help you understand it, I’m available.
After well over a decade of being a part of ALA and LITA, and working at (almost) every level of the division, I was asked to run, and accepted the nomination, for the position of Vice President/President-Elect of the Library & Information Technology Association. I’ve served as an organizer of an Interest Group, been the chair of multiple committees, served as a Director-at-Large, and spent two years as Parliamentarian for the division. I’m excited that I have the opportunity to stand for election, and I hope that members find it worthwhile to vote for me. If you’re reading this, I hope I can count on your vote, and ask you to let your friends in LITA know that I would appreciate their vote as well.
What does this mean? If elected, it means I would spend the next three years following an arc of leadership in LITA (as Vice President, then President, and finally Past President) at a time of what could be great change. The recently released Working Document – Exploration of Integration and Realignment Opportunities for ALCTS, LITA, and LLAMA is the beginning of a long discussion among members of the respective divisions. The TL;DR of the document is that all three divisions recognize that their individual challenges may be mitigated in part by joining forces…neither an easy nor a straightforward goal, but one that has the potential to strengthen the opportunities for and service to all members.
I’m excited by the opportunities a change like this represents. My time with LITA has been punctuated by efforts to make systems better for members: first as an IG chair with BIGWIG, where we moved the needle on how presentations might work at the Annual conference through the Social Software Showcase, and then as chair of the Program Planning Committee, where I led the team that completely revised how programming was done by moving from an entirely analog process (7 copies of your proposal plus in-person meetings at Midwinter…) to a digital one. Even now, having been tasked with re-thinking how LITA Forum works, my focus is always on what we can do to empower members and reduce the friction that keeps them from being involved.
Meanwhile, the rest of the world of technology will keep marching, and I will work to maintain focus on issues that are at the heart of the future of the profession. I’ve tried to outline some of those on my Election Website, but I would LOVE to hear from members (and potential members!) about where you would like LITA to focus. If I’m elected, I’m going to need a ton of help…but I’m excited to have the opportunity to serve in this role, to work to make LITA better for members, and to hopefully chart a better course for the future of library technology.
If you have any questions for me, or just want to drop me a note about anything, I’d love to hear from you. You can @ or DM me on Twitter @griffey, or feel free to send me an email at griffey at gmail.
If you are a LITA member: I ask for your vote, and appreciate your faith in me if you do. Voting opens Monday, March 12, closes Wednesday, April 4, and you should receive details on voting in your email.
I will say, this is maybe 2 years earlier than I thought this would happen. But there was no doubt that it would, or that eventually this will be commonplace (except of course for Designer Brands that sell privacy as a service).
In a test due to begin this year, L.L. Bean plans to ship a line of coats and boots with sewn-in sensors that send data to the public Ethereum blockchain platform. The retailer is building a data tracking and analytics system to use customer data stored on Ethereum. Loomia, a Brooklyn-based technology company, plans to provide sheets of flexible circuitry to embed in the apparel, along with a small hardware device that uses near-field communication signals to collect data from the circuits while the customer wears the item.
From the outside, the Vaunt glasses look just like eyeglasses. When you’re wearing them, you see a stream of information on what looks like a screen — but it’s actually being projected onto your retina.
Super interesting maker/prototyping experiment from Google: use a few pieces of cheap hardware and paper construction to build a device that listens to your voice and acts as an ambient information sensor. Potentially really useful for maker programs in libraries (if you don’t mind the creepy Google factor).
Paper Signals are build-it-yourself objects that you control with your voice.
Lulzbot, everyone’s favorite 3D printer company, announced some amazing new stuff today. The first is a new version of their customized Cura, my choice for quick and easy slicing/plating for Lulzbot printers. But the really interesting stuff is all the hardware they announced!
Modular Bed System for both the Taz and Mini
Dual Extruder v3, with a new water-soluble support filament
A new inexpensive enclosure for either the Taz or Mini
And, very exciting for a lot of libraries, a stand-alone controller for the Mini that just clips on and allows for computer-free printing directly from an SD card
It’s great to see Lulzbot continue to innovate and make their printers even more useful. There’s a reason they are my number one choice for libraries looking at a 3D printer purchase.
This post is a short excerpt from my upcoming Library Technology Report on Smart Buildings. I’m just returning from attending LITA Forum 2017, and had a fantastic experience. My one disappointment was in the lack of problematization of data collection, retention, and analysis…especially as it relates to the “Internet of Things” and the coming flood of data from IoT.
This excerpt contains no solutions, only questions, concerns, and possible directions. If anyone has thoughts or would like to start a dialogue about these issues, I’d love to talk. The full Library Technology Report on Smart Libraries will be published by ALA TechSource in the next few months.
The end-game of the Internet of Things is that computing power and connectivity become so cheap that they are literally in every object manufactured. Literally everything will have the ability to be “smart”: every chair, every table, every book, every pencil, every piece of clothing, every disposable coffee cup. Eventually the expectation will be that objects in the world know where they are and are trackable and/or addressable in some way. The way we interact with objects will likely change as a result, and our understanding of things in our spaces will become far more nuanced and detailed than it is now.
For example, once the marginal cost of sensors drops below the average cost of human-powered shelf-reading, it becomes an easy decision to sprinkle magic connectivity sensors over our books, making each of them a sensor and an agent of data collection. Imagine, at any time, being able to query your entire collection for mis-shelved objects. Each book will be able to communicate with the books around it, with the Wi-Fi base stations in the building, and with the shelves, and will be able to know when it is out of place. Even more radically, maybe the entire concept of place falls away, because the book (or other object) will be able to tell the patron where it is, no matter where it happens to be shelved in the building. Ask for a book, and it will not only be able to tell you where it is, it can mesh with all the other books to lead you to it. No more “lost books” for patrons, since they will be able to look on a map and see where the book is in their house, and have it reveal itself via an augmented reality overlay on their phone.
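To make the shelf-reading idea concrete, here is a minimal sketch (in Python, with invented call numbers and function names — nothing here is a real product API) of the kind of query such a system could answer: each book reports its position on the shelf, and the system flags any book whose reported position disagrees with the catalog’s expected call-number order.

```python
# Hypothetical sketch: flagging mis-shelved books from sensor reports.
# Assumes each "smart" book can report its ID and shelf position;
# the call numbers and function names below are invented for illustration.

def find_misshelved(reported_order, catalog_order):
    """Compare the shelf order reported by the book sensors against the
    expected call-number order; return the IDs of out-of-place books."""
    expected_pos = {book: i for i, book in enumerate(catalog_order)}
    misshelved = []
    for i, book in enumerate(reported_order):
        # A book is flagged when its reported position differs from
        # where the catalog says it should sit.
        if expected_pos.get(book) != i:
            misshelved.append(book)
    return misshelved

catalog = ["QA76.1", "QA76.5", "QA76.9", "QA77.2"]
shelf_scan = ["QA76.1", "QA76.9", "QA76.5", "QA77.2"]  # two books swapped
print(find_misshelved(shelf_scan, catalog))  # ['QA76.9', 'QA76.5']
```

A real deployment would be far messier (radio interference, battery life, books in transit), but the core query — “which objects are not where the catalog expects them?” — is exactly this simple comparison.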
The world of data that will be available to us in 10-20 years will be as large as we wish it to be. In fact, it may be too large for us to directly make sense of it all. My guess is that we will need to use machine learning systems to sort through the enormous mounds of data and help us understand the patterns and links between different points of data. The advantage is that if we can sort and analyze it appropriately, the data will be able to answer many, many questions about our spaces that we’ve not even dreamed of yet, hopefully allowing us to design better, more effective and useful spaces for our patrons.
At the same time, we need to be wary of measurements becoming targets. I opened the larger Report with Goodhart’s Law, credited to economist Charles Goodhart and phrased by Marilyn Strathern: “When a measure becomes a target, it ceases to be a good measure.” We can see this over and over, not just in libraries, but in any organization. An organization will optimize around the measures it is rewarded by, often to negative effect in other areas. This is captured in the idea of perverse incentives, where an organization rewards the achievement of an assessment, only to realize that the achievement undermines the original goal. The classic example of this is known colloquially as the “Cobra effect”, named after the probably-apocryphal story of the British colonizers in India rewarding citizens for bringing in dead cobras in an attempt to control their deadly numbers in cities. Of course, the clever people of India were then incentivized to breed cobras in secret, in order to maximize their profits….
Libraries should be wary of the data they gather, especially as we move into the next decade or two of technological development. The combination of data being toxic to the privacy of our patrons and the risk of perverse incentives affecting decisions because of measures becoming targets is actively dangerous to libraries. Libraries that wish to implement a data-heavy decision-making or planning process need to be extraordinarily aware of these risks, both acute and chronic. I believe strongly in the power of data analysis to build a better future for libraries and our patrons. But used poorly or unthoughtfully, the data we choose to collect could be secretly breeding its own set of cobras.