Sunday, April 19, 2009

Virtual Playground Monitors

Via yesterday's New York Times, a great article by Leslie Berlin (project historian for the Silicon Valley Archives at Stanford) about the different techniques and technologies used to moderate children's virtual worlds for inappropriate/dangerous content and risky behaviours. The article focuses on vw's that try to monitor "intent as well as content"...rather than simply blocking keywords or limiting communication altogether. It also notes that bullying and disclosing personal info remain the most common dangers faced by young people online. According to Berlin, the biggest challenges for vw moderators are keeping up with "user innovations" aimed at bypassing moderation tools (such as "workarounds" or "secret codes"), and striking a balance between technological solutions and human judgement when it comes to deciding which words, workarounds and behaviours should ultimately be filtered out. That "balance," however, is becoming increasingly reliant on technology and sophisticated in-game surveillance tools. She describes the process as "a continuing game of cat and mouse between the young people and the technology designed to protect them." For instance (a rough sketch of what this kind of flagging might look like in code follows the excerpt):
NetModerator, a software tool built by Crisp Thinking, a private company based in Leeds, England, can monitor online chat “for intent as well as content,” says Andrew Lintell, the company’s chief executive. To build the tool, he says, Crisp Thinking analyzed roughly 700 million lines of chat traffic, some from conversations between children and some, like conversations between children and sexual predators, provided by law enforcement groups.

The software is integrated into a virtual world’s site. If the technology uncovers phrasing, syntax, slang or other patterns in a conversation that match known signs of bullying or sexual predation, it sends an alert to a moderator, who can then “drill down” to look not only at the entirety of the specific conversation, but also at every posting from either participant.

“We can capture a full picture of a user’s history on the game,” Mr. Lintell says. NetModerator also includes a filter that is updated regularly to include new words, abbreviations or character combinations that can be read as words, like “sk8.”
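To make that "pattern matching plus human escalation" idea a bit more concrete, here's a toy sketch in Python of how such a pipeline might work. To be clear, this is my own illustration and not Crisp Thinking's actual system: the pattern lists, the substitution table and all of the function names are hypothetical stand-ins, and the real NetModerator was built from the analysis of roughly 700 million lines of chat rather than a handful of regular expressions.

```python
import re

# Hypothetical pattern lists -- stand-ins for whatever Crisp Thinking derived
# from its ~700 million analyzed lines of chat.
GROOMING_PATTERNS = [r"how old are you", r"where do you live", r"don't tell your parents"]
BULLYING_PATTERNS = [r"nobody likes you", r"everyone hates you"]

# Abbreviation/character substitutions so "sk8"-style spellings still match.
SUBSTITUTIONS = {"8": "ate", "2": "to", "4": "for", "u": "you", "r": "are"}

def normalize(message: str) -> str:
    """Lower-case a chat line and expand common substitutions before matching."""
    words = []
    for token in message.lower().split():
        token = SUBSTITUTIONS.get(token, token)        # whole-token swaps: "u" -> "you"
        for key, text in SUBSTITUTIONS.items():        # embedded digits: "sk8" -> "skate"
            if key.isdigit():
                token = token.replace(key, text)
        words.append(token)
    return " ".join(words)

def flag_message(message: str) -> str | None:
    """Return a category label if the normalized message matches a known pattern."""
    text = normalize(message)
    if any(re.search(p, text) for p in GROOMING_PATTERNS):
        return "possible grooming"
    if any(re.search(p, text) for p in BULLYING_PATTERNS):
        return "possible bullying"
    return None

def review_conversation(history: list[tuple[str, str]]) -> list[str]:
    """Scan a (user, message) history and queue alerts so a human moderator
    can 'drill down' into the full conversation."""
    return [f"ALERT ({category}): {user}: {msg}"
            for user, msg in history
            if (category := flag_message(msg))]
```

Even in this toy version the basic limitation is visible: the filter only catches the phrasings and substitutions someone has already thought to add to its lists.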

Berlin goes on to note that NetModerator is already being used in a number of virtual worlds for children, including Cartoon Network's FusionFall, which uses the software to monitor open chat and player-to-player e-mail. Is this reminding anyone else of Minority Report? Private surveillance technologies being used for extensive data-mining and profiling...not exactly what pops into your mind when you think of a playground monitor, never mind the deeper questions this raises about how these particular "patterns" are established and what other behavioural trends are being tracked concurrently.

Another example described in the article is a technology from Keibi Technologies, used to "determine whether user-generated content — like videos, images and text — contains objectionable material." It also builds a profile of every person who uploads content, and it is currently used by "several sites aimed at children and teenagers"...which makes those profiles all the more comprehensive (and potentially invasive).
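The profiling side of this is easy to picture in code, which is part of what makes it unsettling. Here's a very rough Python sketch of the general idea -- classify each upload, then fold the result into a running per-user record. Again, everything below (the class, the field names, the crude keyword "classifier") is hypothetical and just stands in for whatever Keibi actually does with video, images and text.

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class UploaderProfile:
    """Running record kept for each account -- the 'profile' the article mentions."""
    uploads: int = 0
    flagged: int = 0
    flags_by_type: dict[str, int] = field(default_factory=dict)

# One profile per user id, created on first sight.
profiles: defaultdict[str, UploaderProfile] = defaultdict(UploaderProfile)

def classify_content(item: dict) -> list[str]:
    """Crude stand-in for a real text/image/video classifier; returns flag labels."""
    flags = []
    text = item.get("text", "").lower()
    if any(phrase in text for phrase in ("everyone hates you", "loser")):
        flags.append("bullying")
    if any(phrase in text for phrase in ("my phone number is", "my address is")):
        flags.append("personal-info")
    return flags

def moderate_upload(user_id: str, item: dict) -> list[str]:
    """Classify one upload and fold the outcome into the uploader's profile."""
    flags = classify_content(item)
    profile = profiles[user_id]
    profile.uploads += 1
    if flags:
        profile.flagged += 1
        for label in flags:
            profile.flags_by_type[label] = profile.flags_by_type.get(label, 0) + 1
    return flags
```

What strikes me is how little machinery it takes to go from "checking one upload" to "keeping a file on every uploader."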

In some cases, using these tools has allowed sites to reduce moderator staff from over 20 people to just 3 or 4. But of course, the sites and designers don't want us thinking that they're relying solely on automated filter software (which has in the past proven itself almost invariably ineffective and/or overly restrictive). The argument is that because so much of the obvious stuff is being flagged by the system, moderators now have more time to focus on the "subtle or hard-to-interpret messages"...interactions and behaviours that aren't outright "offensive", but that may hint at something much more serious (such as the suspected presence of a predator, or that a user may be suicidal). The work that moderators do is incredibly important -- particularly on children's websites -- and it's difficult, complicated and for the most part surprisingly undervalued. If these technologies are truly making their jobs easier and allowing them to get into the grey areas a bit more, then that's great...but seeing that so much gets by moderation systems as it is, that even the most advanced software still can't accurately predict human intention, and that the increased reliance on automated systems is resulting in drastic reductions in human moderator staff rather than a simple shift in focus, this whole line of argument looks pretty sketchy.

Berlin and her interviewees also describe some of the "workarounds" that kids and teens use to circumvent filter technologies, highlighting the continued weakness of these systems (I've sketched a toy version of the filters' counter-moves after the quoted passage). For example, they use:
...spammers’ tricks like inserting random spaces or deliberate misspellings into their chats. Ms. Littleton has heard of users trying to identify their hometowns in code. (“Opposite of pepper, body of water” is Salt Lake City.) Ms. Marshall has intercepted them trying to share phone numbers by typing the letters that correspond to the digits on a telephone keypad.

When one site blocked the word “love,” Ms. Marshall says, the children substituted the word “glove.”

Such ingenuity is one reason that technology alone will never provide enough protection, she says. “Kids will find a way to communicate,” she says, “and you can’t block every word in the dictionary.”

Technologists agree with her. “There’s no silver bullet from a technical standpoint to do this,” says Mr. Smith at Keibi. “This is about helping humans to make a decision about whether there is a problem.”
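To see why this is such a losing game for the filters, here's one more toy Python sketch of the counter-moves a moderation system might make against exactly these tricks: collapsing inserted spaces, undoing known misspellings, and decoding letters back into telephone-keypad digits. The blocklist, the misspelling table and the thresholds are all made up for illustration; none of them come from the article or from any real product.

```python
import re

BLOCKED_WORDS = {"love", "phone", "address"}          # hypothetical blocklist
MISSPELLINGS = {"luv": "love", "fone": "phone"}       # known deliberate misspellings

def normalize(message: str) -> str:
    """Strip spaces/punctuation ('l o v e' -> 'love') and undo known misspellings."""
    text = re.sub(r"[^a-z0-9]", "", message.lower())
    for wrong, right in MISSPELLINGS.items():
        text = text.replace(wrong, right)
    return text

def hits_blocklist(message: str) -> bool:
    """Substring match after normalization.
    Trade-off: substring matching catches 'l o v e' but also flags 'glove';
    whole-word matching spares 'glove' but lets the substitution straight through."""
    text = normalize(message)
    return any(word in text for word in BLOCKED_WORDS)

# Kids share phone numbers by typing the letters that sit on each keypad digit.
LETTER_TO_DIGIT = {letter: digit
                   for digit, letters in {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
                                          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}.items()
                   for letter in letters}

def keypad_decode(message: str) -> str:
    """Map every letter to its keypad digit, keeping actual digits as they are."""
    return "".join(LETTER_TO_DIGIT.get(ch, ch) for ch in message.lower() if ch.isalnum())

def might_contain_phone_number(message: str) -> bool:
    """Flag any message whose keypad-decoded form contains a run of 7+ digits.
    Since almost any long word also decodes to digits, this over-triggers wildly --
    one concrete reason why 'you can't block every word in the dictionary.'"""
    return re.search(r"\d{7,}", keypad_decode(message)) is not None
```

The "glove" comment in the code is the whole dilemma in miniature: match substrings and you over-block, match whole words and the substitution sails straight through.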


The article also has some useful stats about growth in the kids' vw sector, such as:
By the end of this year, there will be 70 million unique accounts — twice as many as last year — in virtual worlds aimed at children under 16, according to K Zero, a consulting firm. Virtual Worlds Management, a media and trade events company, estimates that there are now more than 200 youth-oriented virtual worlds “live, planned or in active development.”

My big takeaways from this article? First, the sense that the more things change, the more they stay the same...despite the big push for "safety first" on many of these sites and the guarantees of 24-hour human moderation, there's still a huge drive to rely on technologies that seem...for the most part...not all that different from the automated filter systems of five years ago. More sophisticated, maybe, but not exactly a horse of a different colour. Second, more and more evidence of the huge piles of data being amassed on these sites, as well as the troubling convergence of surveillance and profiling...using "safety and security" as an excuse not only to monitor and analyze children's behaviours online, but also to make decisions about them based on "predicted intentions." Third, the incorporation of UGC into the mix...I'm sure they're looking for offensive material, but how is that measured, what exactly are they looking for, whose interests are being prioritized (corporate copyright regimes, anyone?), what's the impact on free speech/expression, and...of course...what else are they looking at (likes, personal habits, consumer trends, etc.)?

1 comment:

Shaping Youth said...

Great piece, Sara...Just linked to it a couple of places in today's post on Shaping Youth: http://blog.shapingyouth.org/?p=6372

Would love to hear some of your VW specifics that I should direct to the Ypulse mashup powers that be from YOU and your readership too...Been remiss in keeping up with so much info here. I've GOT to get some extra hands. whew.

I'd love to 'RSS feed' your posts into a separate page of 'recommended reading'...we're working on ways to make the blog function better too, as I can't find much without Googling it anymore, the content is so dense! ttys, Amy