Monday, November 15, 2004

Online Identity

I got to thinking about whether cooperation (however defined) can succeed without accountability and trust. I’ve come to the conclusion that it can. Just think of all the Usenet newsgroups. People can ask questions and get responses from people who have no reputation and little or no identity. The questioner can then use the information from the responder to act in a certain way. Thus, cooperation can result.

Now, yes, I agree that a receiving person will act differently depending on the perceived reputation or identity of the sender. Think about asking for medical advice in a newsgroup. Let’s get even more specific. Think about asking for advice about how to deal with cancer. Anybody could respond, and the resulting actions could be highly effective or highly detrimental.

So say the cancer patient asks for advice and two online identities (presumably two different people) respond. If one of the responses appears to have come from someone with a medical background, someone to whom a lot of others have given positive feedback and who therefore has a good reputation, then the receiver might actually do what the responder says with little hesitation. Now, if the other response comes from an anonymous person or someone with no track record, then the receiver might do some more research before doing what this responder says to do. Does one interaction imply more or less cooperation? I don’t think so. It’s just a different kind of cooperation. And I do not perceive one type of cooperation to be more valuable than another. In one instance the information could be more valuable, but the cooperation value is the same.

So, what about accountability? What about, using this same example, a person who is deliberately sending out deceptive signals? Yes, it would be hard to imagine that someone would intentionally deceive a cancer patient, but it could happen. In such a case, finding the deceiver and proving the trail of deception is very costly and difficult, as Judith S. Donath mentions in her article http://smg.media.mit.edu/people/Judith/Identity/IdentityDeception.html . My opinion is, why try to catch such a person? What’s the point? I would rather have a system where the bad signals are filtered out by all the good ones. For example, what if our cancer patient asked his question and, instead of only two responders, there were 100, and perhaps they could build upon each other’s responses? Then the bad would likely be filtered out, and the good would become something like the average of the responses that remained. That would be a better system for our cancer patient.
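To make that filtering idea a bit more concrete, here is a minimal sketch in Python of one possible way it could work. This is purely my own toy illustration, not anything from Donath’s article: it pretends each of the 100 responses has been boiled down to a numeric signal, drops the signals that sit far from the median, and averages what is left. The function name, the numeric encoding, and the tolerance value are all hypothetical.

    from statistics import mean, median

    def filter_and_average(signals, tolerance=2.0):
        """Drop signals far from the median, then average the rest.

        signals   -- numeric scores standing in for the 100 responses
                     (a hypothetical encoding of the advice)
        tolerance -- how far from the median a signal may sit before it
                     is treated as a bad signal and ignored
        """
        m = median(signals)
        kept = [s for s in signals if abs(s - m) <= tolerance]
        # The "good" answer is roughly the average of whatever survives.
        return mean(kept) if kept else None

    # 96 broadly similar responses plus three deliberately bad signals
    responses = [7.0, 7.5, 6.8, 7.2] * 24 + [0.5, 1.0, 15.0]
    print(filter_and_average(responses))  # about 7.1; the bad signals drop out

The point of the sketch is simply that with enough responders, a few deceptive signals barely move the result, which is why I would rather rely on the crowd than chase the deceiver.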

But then, I’m led to question something else. What are the trust, reputation, and identity of large groups in the virtual world? How are they determined? And how can there be a group track record establishing trust, reputation, and identity? Is it based on the individual users’ track records? That could be time-consuming to determine. And does every single group have a different identity? Or could there be an average identity that can be expected for any given question? Would there be an expected value of the signals that could be received, based upon the question and the number of responders in the group? I wonder.
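I don’t have an answer, but as a toy way of playing with that last question, here is a small Python sketch of one assumed model: treat each member’s reputation as the probability that their individual signal is good, take the group’s expected signal value to be the average of those probabilities, and note how the chance of getting at least one good signal grows with the number of responders (assuming independence). Every number and name here is hypothetical.

    from statistics import mean

    def expected_group_signal(reputations):
        """Assumed model: each reputation is the probability that a member's
        signal is good; the group's expected signal value is their average."""
        return mean(reputations)

    def chance_of_a_good_signal(reputations):
        """Under an independence assumption, the probability that at least
        one responder in the group sends a good signal."""
        p_all_bad = 1.0
        for r in reputations:
            p_all_bad *= (1.0 - r)
        return 1.0 - p_all_bad

    small_group = [0.6, 0.7]                                   # two responders
    large_group = [0.6, 0.7, 0.5, 0.8, 0.4, 0.65, 0.55, 0.75]  # eight responders

    print(expected_group_signal(small_group), expected_group_signal(large_group))
    print(chance_of_a_good_signal(small_group), chance_of_a_good_signal(large_group))

In this toy model the group’s average identity doesn’t change much with size, but the odds of receiving at least one good signal climb quickly as more people respond, which fits my hunch about the 100-responder case above.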
