Monday, 5 August 2013

Community, Regulate Thyself - Microsoft's New Route to Online Moderation

Oh yeah, this will work out brilliantly.

Let's face it, participating in an online community is often a risky thing, though perhaps risky is the wrong word. Rather, I'd say it's often a frustrating, needlessly vitriolic thing that gets bogged down by a bunch of people you'd rather not associate with unless you had the ability to punch them in their faces. I'm sure we've all been there at least once in our online travels: the griefers, the trolls, the people who ceaselessly complain about everyone else's performance. The list is probably longer than my arm, and too extensive to cover here, but it is a problem, have no doubt.

There are multiple ways to try to deal with problematic elements in a community: most often there are reporting systems, plus moderators and/or administrators who have been hired specifically to keep order. However, it's not surprising that when a handful of people out of thousands or tens of thousands are asked to keep up with such a workload, they get a little overwhelmed. That's why Microsoft is hoping to introduce a measure of community self-regulation via its latest initiative, Enforcement United.

Now, from what I can see, there are some measures meant to rein in a level of control over the system. For example, participants don't get to choose the Gamertags that appear on their screens; they just get a random selection that they can tag as inappropriate or not. Also, the fact that decisions will have to be looked at by more official, higher-level users should, in theory, prevent the system itself from being used to troll.

That said, from where I stand this system will end up being one of two things: grossly ineffective, or grossly redundant.

The enforcement FAQ reads as follows:

Enforcement United incorporates information from multiple participants into an algorithm. That algorithm takes several factors into account, including how many members believe there had been a violation and how reliably those individual members’ historical decisions aligned with the general consensus. As a result of that process, the system may:
  • Automatically determine there was a violation and apply an enforcement action (e.g. require a member to change an inappropriate Gamertag)
  • Automatically determine there was no violation with no impact to the account in question
  • Escalate the complaint for one of our enforcers to review, if there was insufficient data or a lack of clear consensus from participants

How can you be sure these decisions are accurate?

We’ve carefully crafted a system that allows us to crowdsource information on potential violations while continually calibrating itself to understand how reliable that data and the sources it comes from are. To be clear, no individual member will wield unchecked power over another.
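For the curious, here's a rough sketch of how I imagine a reputation-weighted voting scheme like the one described above might work. To be clear, this is my own guesswork, not anything Microsoft has published: the thresholds, the names, and the 0-to-1 reliability scale are all made up for illustration.

    from dataclasses import dataclass
    from enum import Enum, auto


    class Outcome(Enum):
        AUTO_ENFORCE = auto()   # e.g. require a Gamertag change
        AUTO_DISMISS = auto()   # no violation, no impact on the account
        ESCALATE = auto()       # hand off to a paid enforcer


    @dataclass
    class Vote:
        member_id: str
        says_violation: bool
        reliability: float      # 0.0-1.0: how often this member has matched past consensus


    def decide(votes: list[Vote],
               min_votes: int = 10,          # hypothetical minimum sample size
               enforce_threshold: float = 0.8,
               dismiss_threshold: float = 0.2) -> Outcome:
        """Combine crowd votes, weighted by each voter's track record."""
        if len(votes) < min_votes:
            return Outcome.ESCALATE  # insufficient data for an automatic call

        total_weight = sum(v.reliability for v in votes)
        if total_weight == 0:
            return Outcome.ESCALATE

        # Fraction of reliability-weighted opinion saying "violation".
        violation_score = sum(v.reliability for v in votes if v.says_violation) / total_weight

        if violation_score >= enforce_threshold:
            return Outcome.AUTO_ENFORCE
        if violation_score <= dismiss_threshold:
            return Outcome.AUTO_DISMISS
        return Outcome.ESCALATE  # no clear consensus either way

The "continually calibrating itself" part would presumably come from updating each member's reliability score after a case closes, based on whether their vote matched the final outcome.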

Although random sampling keeps things fair, it also means that for the most part the actual power isn't really in anyone's hands; it's just one big decentralized process. While that's good in that it prevents abuse by those who would be inclined to abuse it, it also means it's near impossible for any one person to make a difference in any one case.

If enforcers need to review decisions anyway, then why not just get more enforcers? I assume, perhaps incorrectly, that enforcers are people paid to go over potential content violations and the like. If that's the case, then why not beef up their numbers?

I dunno, it ultimately seems like an idea that's cute but rather meaningless in execution, at least at the moment. Perhaps greater powers will be granted later to those who actually earn trust, but for now this doesn't really seem to go beyond the basic reporting functions you'd find on any forum. Oh well, can't be perfect I guess.
