Intelligent, automatic filtering for desired censorship

When I lived in a different city than I do now, a friend and I watched soccer games together, particularly UEFA Champions League matches. These games were televised mid-week and in the middle of the day, so he would DVR them and we would watch during non-work hours, in many cases one or two days later. To keep the result a surprise, we both fasted from soccer news sources, including websites, RSS feeds, and the SportsCenter ticker. While we did not track how successful we were at avoiding news of the game, we would inevitably stumble upon some bit of information that gave away a key aspect of the game, if not the final score.

What I wanted then (and still want now, as I continue the practice of delayed viewing of soccer games) was a personalized information agent capable of filtering, or censoring, everything related to the game in question, as well as information that would clue me in to the result even if not tied directly to the match (such as the list of top goal scorers in the competition). Now, I support open, transparent sharing and re-use of information on principle. Censorship, whether of the media, Internet access, or the arts, is generally an ill of society precisely because it is imposed by some external power. The type of intelligent filtering described here is self-imposed and functional.
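To make this a little more concrete, here is a minimal sketch of the kind of self-imposed filter I have in mind. Everything in it is hypothetical (the SpoilerFilter class, the blocked terms); a real agent would need much richer topic modeling to catch indirect giveaways such as an updated top-scorer table, but the basic idea of a temporary, user-controlled block list looks something like this:

```python
# A minimal sketch of a self-imposed, temporary spoiler filter.
# All names here (SpoilerFilter, the example terms) are made up for illustration.

from dataclasses import dataclass, field


@dataclass
class SpoilerFilter:
    """Suppress items that mention a topic the user has not yet caught up on."""
    blocked_terms: set[str] = field(default_factory=set)

    def block(self, *terms: str) -> None:
        self.blocked_terms.update(t.lower() for t in terms)

    def unblock_all(self) -> None:
        # Called once the recorded match has finally been watched.
        self.blocked_terms.clear()

    def allows(self, text: str) -> bool:
        lowered = text.lower()
        return not any(term in lowered for term in self.blocked_terms)


if __name__ == "__main__":
    f = SpoilerFilter()
    # Block the match itself plus indirect clues (hypothetical example terms).
    f.block("champions league", "top scorer")

    headlines = [
        "Markets rally after rate decision",
        "Late winner seals Champions League thriller",
        "Weekend weather: sunny with a chance of rain",
    ]
    for h in headlines:
        if f.allows(h):
            print(h)  # only non-spoiler headlines get through
```

The key property is that the censorship is reversible and entirely under the reader's control: once the game has been watched, the block list is simply cleared.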

An agent could filter other types of news in the same way. Think about people who decided not to stay up late watching the Oscars and instead recorded the show for later viewing. Until that viewing, they do not want to hear or read about the award for Best Picture. While taking the bus to work the next morning and listening to NPR on a portable radio, they want the Oscar news filtered. They want their Twitter feed filtered of Oscar award news. They want their web browsing on company time filtered of Oscar award news. And so on for every information source.

Filtering the conversations of co-workers around the proverbial water cooler is another issue entirely. Of course, our fictional person could announce her wish to her colleagues and hope for the best. That might work in the office, but not on the bus when other passengers are gabbing about how unworthy the Best Picture actually was.

Another desired feature of the intelligent agent is the filtering of redundant information (which, according to Floridi, is not information at all, since it is not new). Take your Twitter feed as an example. If you use Twitter, you most likely follow groups of feeds that strongly overlap in topic, and it is likely that some of these feeds retweet one another. After you read a tweet for the first time, what is the value of reading an identical retweet? Perhaps an added annotation is worth reading, but the repeated content is not. Or take reading news online as another example. If my local paper picks up a New York Times article I read yesterday, I don't want to see it listed at all, or at least I want it modified somehow, such as grayed out, when I browse the stories from my local paper.
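A rough sketch of this redundancy filter might look like the following, assuming items arrive as plain text (tweets, headlines). It flags near-verbatim duplicates, such as a retweet or a syndicated copy of an article, the second time they appear, and the caller can then hide or gray them out. A real feed would need fuzzier matching than this exact-after-normalization comparison, so treat it as an illustration only:

```python
# Sketch of a redundancy filter: flag near-verbatim duplicates so they can be
# hidden or grayed out. Matching here is exact after light normalization.

import hashlib
import re


def _fingerprint(text: str) -> str:
    """Normalize superficial differences (case, 'RT @user:' prefixes, whitespace)."""
    text = re.sub(r"^rt @\w+:\s*", "", text.strip().lower())
    text = re.sub(r"\s+", " ", text)
    return hashlib.sha256(text.encode("utf-8")).hexdigest()


class RedundancyFilter:
    def __init__(self) -> None:
        self._seen: set[str] = set()

    def is_new(self, text: str) -> bool:
        fp = _fingerprint(text)
        if fp in self._seen:
            return False  # already read: hide it or gray it out
        self._seen.add(fp)
        return True


if __name__ == "__main__":
    rf = RedundancyFilter()
    items = [
        "City council approves new bike lanes downtown",
        "RT @localnews: City council approves new bike lanes downtown",
        "Library extends weekend hours starting next month",
    ]
    for item in items:
        marker = "" if rf.is_new(item) else "[already seen] "
        print(marker + item)
```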

On the other hand, not all duplicate information is noise. The Ushahidi team, as one example, set up a mechanism for people on the ground in Chile to report emergency needs after the earthquake. If ten reports come in about the same building collapse, the duplication actually serves as an internal verification mechanism, increasing confidence that the need is real. So in this and other cases, duplication is, to an extent, important and need not be filtered out. The information might be aggregated and collated in some useful way, but not blacked out temporarily or permanently.
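In code, the difference is that duplicates are counted rather than discarded. The sketch below is loosely modeled on the Ushahidi scenario, but the Report structure and the grouping key (a simple normalized location string) are my own assumptions; a real system would cluster by geolocation and report text.

```python
# Sketch of treating duplicates as corroboration rather than noise:
# group similar reports and count them instead of filtering them out.

from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Report:
    location: str
    description: str


def collate(reports: list[Report]) -> dict[str, dict[str, object]]:
    """Aggregate reports about the same location instead of dropping duplicates."""
    grouped: dict[str, list[Report]] = defaultdict(list)
    for r in reports:
        grouped[r.location.strip().lower()].append(r)

    # More reports about the same incident -> higher confidence the need is real.
    return {
        loc: {"count": len(rs), "descriptions": [r.description for r in rs]}
        for loc, rs in grouped.items()
    }


if __name__ == "__main__":
    incoming = [
        Report("Av. Libertad 120", "building collapse, people trapped"),
        Report("av. libertad 120", "collapsed apartment block"),
        Report("Calle Prat 45", "water main broken"),
    ]
    for loc, summary in collate(incoming).items():
        print(f"{loc}: {summary['count']} report(s)")
```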

This last example reinforces how contextual the intelligent agent's behavior must be: it should act differently depending on circumstance, audience, information stream, and purpose.