The New Curators: Weaving Stories from the Social Web
Back in March, Robert Scoble articulated something I have been feeling for a long time: we need better tools to weave together the diverse and distinct bits and pieces of information on the web. These days, news and information are distributed and discussed across a range of platforms – Twitter, Facebook, RSS readers, YouTube, blogs and websites. However, the stories and issues at the heart of all that content are much bigger than the sum of their parts. How, then, can we bring these various parts of the web into conversation with each other, add context and tell a fuller story?
Scoble’s post was called “The Seven Needs of Real Time Curators” and he described it as “a guide for how we can build ‘info molecules’ that have a lot more value than the atomic world we live in now.”
“First, what are info atoms? A tweet is an atom. A photo on Flickr is an atom. A conversation item on Google Buzz is an atom. A Facebook status message is an atom. A YouTube video is an atom. Thousands of these atoms flow across our screens in tools like Seesmic, Google Reader, Tweetdeck, Tweetie, Simply Tweet, Twitroid, etc.
A curator is an information chemist. He or she mixes atoms together in a way to build an info-molecule. Then adds value to that molecule.”
I’m not going to go through his seven points. You should go read the whole post. Instead, I want to take a look at three interesting projects that get us part way to what Scoble was describing: Slices of Boulder, Storify and Swift River.
Slices of Boulder
I first wrote about Slices of Boulder here, before the actual site had launched, and at the time I described it as more of a mapping effort than a curation effort. Now that the site is live and I’ve had the opportunity to spend some time on it, I see it as an interesting – if not perfect – experiment in some of what Scoble was describing. It is not itself a tool for curation, but rather an example of one particular tool in action.
To understand Slices of Boulder you have to understand a bit about what’s happened to the news, and what that has meant for communities. It used to be that you could get a fairly good round-up of key local issues and conversations by reading the local weekly or daily paper. However, many newspapers are not what they used to be, and the nightly news is mostly sports and weather. At the same time local community issues are being reported on new citizen and hyperlocal blogs, social networks and independent media outlets. There is a lot of vital information and robust conversation out there – but it is fragmented and diffuse.
This is the problem that Slices of Boulder seeks to address. The project, a collaboration between the Digital Media Test Kitchen at the University of Colorado Boulder and Eqentia, claims to be “a curated news-and-information aggregator of links to the massive amount of content streaming from many digital sources serving Boulder, Colorado, and its surrounding area.” Eqentia’s technology drives the site. Interestingly, Eqentia was designed to help industries (think education, healthcare, etc.) manage and make sense of the huge flow of information being created in and about their field. CU Boulder worked with Eqentia to apply that idea to a geographical area.
The result is impressive both in the amount of live and constantly updating content and in the ability to dive in deep and “slice and dice” the site’s feeds based on your area of interest (think events, arts, politics, etc.). Building the site required a lot more than just popping a bunch of URLs into a feed reader. The combination of human curation and computer aggregation started with the creation of a detailed taxonomy of the city of Boulder, including physical areas (neighborhoods, nearby towns, etc.), entities (businesses, government, and organizations), and topical areas and interests (industries, local issues, popular local activities, etc.). In short, it took a real depth of understanding of the local community. You can read more about building the site here.
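To make the idea concrete, here is a minimal sketch (my own illustration, not Eqentia’s actual implementation) of how a hand-built local taxonomy like the one described above can drive automatic aggregation: each incoming headline is matched against curated keyword lists and routed into the corresponding “slice.” The slice names and keywords are hypothetical.

```python
# Hypothetical taxonomy for a city, keyed by slice name.
# Real systems would use far richer entity lists and matching rules.
TAXONOMY = {
    "neighborhoods": ["north boulder", "gunbarrel", "table mesa"],
    "government": ["city council", "mayor", "ballot measure"],
    "arts": ["gallery", "concert", "theater"],
}

def classify(headline):
    """Return the set of slices whose keywords appear in the headline."""
    text = headline.lower()
    return {slice_name
            for slice_name, keywords in TAXONOMY.items()
            if any(kw in text for kw in keywords)}

stories = [
    "City council debates ballot measure on open space",
    "New gallery opens in North Boulder",
]
for story in stories:
    print(story, "->", classify(story))
```

The point of the sketch is that the machine side is simple; the hard, human part is building a taxonomy detailed enough to reflect how a real community actually talks about itself.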
I like Slices of Boulder as a proof of concept, but its usability and look and feel need considerable work before the average community member could easily navigate it. The home page – with its various content verticals and fast-moving stream of Twitter updates – feels like information overload at first. It takes a bit of orientation before one can see the potential of the site and understand the full extent of the tools that the Eqentia software provides. It’s a powerful first step and I’ll be watching it with interest.
Storify
The first blog post on Storify’s site appeared the same month as Scoble’s post, and indeed makes a nod to Scoble. However, Storify’s founders were working on these ideas well before Scoble so clearly defined his “seven needs” (see their earlier project, Publitweet). Storify is in early beta testing and I have been playing with it a bit. In general, I really like what I see.
Storify encourages you to “Turn what people post on social media into compelling stories. You collect the best photos, video, tweets and more to publish them as simple, beautiful stories that can be embedded anywhere.” Storify is built around two panes: 1) various content feeds to navigate (a Twitter search, a Facebook stream, as well as content from YouTube, Flickr and more) and 2) a blank stream where you drag and drop elements from those feeds to build your story. It is blissfully simple and the results can be beautiful (see the Storify blog for great examples of the tool in action).
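The two-pane model suggests a very simple underlying data structure. Here is a hypothetical sketch (not Storify’s real API or code): a story is just an ordered list of “atoms” pulled from different services, each carrying enough metadata to re-embed it, and dragging an item into the story pane amounts to appending to that list.

```python
from dataclasses import dataclass, field

@dataclass
class Atom:
    service: str   # e.g. "twitter", "flickr", "youtube"
    url: str       # link back to the original item
    text: str      # caption or body text to display

@dataclass
class Story:
    title: str
    atoms: list = field(default_factory=list)

    def add(self, atom):
        # Drag-and-drop, in data terms: append the atom to the story.
        self.atoms.append(atom)

    def render(self):
        """Render the story as a simple embeddable HTML fragment."""
        items = "\n".join(
            f'<blockquote data-service="{a.service}" cite="{a.url}">{a.text}</blockquote>'
            for a in self.atoms)
        return f"<article>\n<h1>{self.title}</h1>\n{items}\n</article>"

story = Story("Election night, as it happened")
story.add(Atom("twitter", "https://twitter.com/x/1", "Polls just closed."))
print(story.render())
```

Because each atom keeps its source URL and service, the same ordered list can later be re-rendered with each service’s native embed, which is what makes the “publish anywhere” promise plausible.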
Whereas Slices of Boulder aims at providing current and up-to-date information, Storify is much more about archiving and adding context. The founders note that part of real-time curation is taking things out of “real-time” and making that impactful for a “long time.” They write, “Millions of people are sharing content through social media. But these streams of information are quickly lost in the never-ending stream of updates. With Storify, you can put together the best Tweets, photos and videos to make stories that will be remembered.”
The biggest problem I ran into: when working with very long streams in both panes, navigation and scrolling became challenging. I also couldn’t embed the story on any of my sites at this time – but they are working on that. Nonetheless, I’m very excited about Storify. I’ll be interested in how the tool develops and whether there will be ways to bundle or group “stories” created on the site (imagine if 20 people created stories about an event, and the usefulness of somehow tagging or connecting those different accounts). Storify is shaping up to be a powerful tool, even in these early stages.
Swift River
Swift River is the tool I’ve had the least experience with. Based on what I have read, it shares some characteristics of both the tools above. Built by the folks behind Ushahidi, the much-heralded mapping application, Swift River was designed with crisis situations in mind. The developers describe it as a “platform that helps people make sense of a lot of information in a short amount of time.”
“The SwiftRiver platform was born out of the need to understand and act upon a wave of massive amounts of crisis data that tends to overwhelm in the first 24 hours of a disaster… In practice, SwiftRiver enables the filtering and verification of real-time data from channels such as Twitter, SMS, Email and RSS feeds. This free tool is especially useful for organizations who need to sort their data by authority and accuracy, as opposed to popularity.”
As you can imagine, in crisis data as in journalism, reputable sources and trust matter immensely, and both play a key role in Swift River. The Swift River team talks about how their tool can help triage and verify crowdsourced information. This is a process that requires learning on both sides of the computer screen: “where humans can distribute work to many, use machines to aggregate the output of that productivity, and then work with smart tools that learn from the users needs and expectations. If our code isn’t smart enough to make sense of data on it’s own (it’s not) but humans are (yet they aren’t as fast or organized), then perhaps part of the solution lies in optimizing human efforts at filtering content, adding context and using the result as the base for improving future algorithmic decisions.”
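That feedback loop can be sketched in a few lines. This is a hypothetical illustration of the general idea, not SwiftRiver’s actual code: humans verify a handful of early reports, the system maintains a per-source authority score from those verdicts, and later items are ranked by the learned score rather than by popularity. The source names are invented.

```python
from collections import defaultdict

# verified[source] = [confirmed_count, total_reviewed]
verified = defaultdict(lambda: [0, 0])

def record_review(source, accurate):
    """A human marks one item from `source` as accurate or not."""
    stats = verified[source]
    stats[0] += 1 if accurate else 0
    stats[1] += 1

def authority(source):
    """Fraction of reviewed items confirmed; 0.5 prior for unseen sources."""
    confirmed, total = verified[source]
    return confirmed / total if total else 0.5

# Humans triage a few early reports...
record_review("@eyewitness1", accurate=True)
record_review("@eyewitness1", accurate=True)
record_review("@rumor_feed", accurate=False)

# ...and the machine ranks the incoming stream by learned authority.
incoming = ["@rumor_feed", "@eyewitness1", "@new_source"]
ranked = sorted(incoming, key=authority, reverse=True)
print(ranked)
```

Even this toy version shows the division of labor the SwiftRiver team describes: scarce human attention is spent on verification, and the algorithm amplifies those judgments across the rest of the stream.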
I haven’t seen Swift River in action, but am eager to see what it can do.
I’m sure there is more out there – I know of a few in the early stages of development – but what have I missed? Add yours in the comments below.