DouglasReay/MediatedTrust


First order unmediated trust is where person A decides how much or little they trust a statement by person B based upon their own assessment of person B's previous statements in that area.

Second order human mediated trust is where person A decides how much to trust a statement by person B based upon person C's assessments of person B's previous statements in that area.

Third order computer mediated trust is where person A decides how much to trust a statement by person B based upon a calculation made by A's own computer, using criteria person A has previously given the computer and assessments of B's previous statements made by persons D, E and F, none of whom person A may know anything about.


Examples of TrustMetrics? are Advogato?, SlashDot?, Google and STRN?.  A nice example is Amazon, which can tell you which books are also liked by a majority of the people who like the books you have already indicated you like.
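
A minimal sketch of that Amazon-style recommendation, in Python, with made-up data: count how often each book you don't own is liked by people who share at least one liked book with you.

 from collections import Counter

 likes = {  # user -> set of liked books (hypothetical data)
     "u1": {"Cryptonomicon", "Snow Crash", "Accelerando"},
     "u2": {"Cryptonomicon", "Snow Crash", "Diaspora"},
     "u3": {"Cryptonomicon", "Permutation City"},
 }

 def recommend(my_books, likes):
     """Tally books liked by users who share at least one of my books;
     the most frequently co-liked book comes first."""
     tally = Counter()
     for books in likes.values():
         if my_books & books:
             tally.update(books - my_books)
     return tally.most_common()

 print(recommend({"Cryptonomicon"}, likes)[0])  # ('Snow Crash', 2)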

[Everything2] uses a Slashdot-like reputation/voting system rather than a true TrustMetric.


One big problem with these trust metrics is that we have no proof that they continue to work.  They work whilst a network is being set up - but do they remain realistic, able to change naturally, once the network has been built, or do they tend to calcify?  Amazon is a great example of how it can all go horribly wrong sometimes.  Whilst it's amusing when it recommends the wrong book, it's going to be a lot less funny when it recommends that your house be repossessed.  --Vitenka (Not that I don't think such systems will improve - but no one has really come up with a decent way to do it yet)
I think the algorithm used must eventually be under the control of the end user.  Each user (or rather each user's computer, at the user's direction) will decide how to weigh the mass of cryptographically signed statements.  No single central service can be allowed to have the power to say who is trusted and who is not. --DR


Wired wrote [an article about the different strategies used by news websites to find and rank news stories].

The two main strategies contrasted were computer search (eg Google News) vs human search (eg Digg).

I think they are missing the point.  You can't make one news page that everyone likes, whether through automated searching or through human searching, because not everyone likes the same things.

What they need is third order trust.

Human editors making the decisions, that's first order trust.  That is where you are directly trusting another person or group of people.

Google's search algorithm making the decisions, that's second order trust.  Google may be basing its algorithm on data provided by humans (linking), but the algorithm itself is under Google's control, not yours.

Third order trust is where other humans provide the recommendation data (eg links or diggs), and a computer processes this data but YOU control how the computer does that.  You tell the computer on what basis it should decide whose recommendations to weigh heavily.  So there is no single page rank algorithm that decides in advance that everyone will think Bill's Whatsit Page is untrustworthy or unimportant.  The computer does not make the key decision.  Instead you are in control of how your trust is spread, and the computer only mediates this trust for you.



Here's something I wrote for [Diaspora]



There's a concept, "orders of trust", that might be relevant when considering what data it would be useful to let Diaspora users store about each other.

First order unmediated trust

You directly allocate ratings to authors, based on your own reading of stuff they have written. The drawback is that your only way to find new authors you like is to try reading them at random.

Second order human mediated trust

You choose to trust a third party, or an agglomeration of known third parties, to allocate ratings. These editors and publishers may use first order, second order or any other sort of trust in order to choose their sources. The drawback is that control is out of your hands. There are two forms of this. Editors, such as Wired magazine, choose a specific area to cover (technology) and you must then subscribe or not. Search engines, such as Google, let you choose the area, but the results they supply are still dependent on the algorithm (PageRank) that they have chosen to implement. Either way you are at the mercy of the biases, tastes and interests of a third party, which will never perfectly coincide with your own in all areas.

Third order computer mediated trust

You stay in charge of how you decide to spread your trust, but delegate a computer to implement your algorithm and keep track of the results. So other humans provide recommendation data (eg links or diggs), and a computer processes this data, but you control how the computer does that. You tell the computer on what basis it should decide whose recommendations to weigh heavily. So there is no single page rank algorithm that decides in advance that everyone will think Bill's Whatsit Page is untrustworthy or unimportant. The computer does not make the key decision. Instead you are in control of how your trust is spread, and the computer only mediates this trust for you. The key is that people supply data recommending not just primary sources, but also ratings of how good they think specific others are at making particular types of judgement. This allows each user to spin a distinct web of trust, based on the data and their own chosen metric.
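
As a minimal sketch of that, in Python (the function names and the agreement-based weighting are hypothetical illustrations; the point is only that the weighting function belongs to the user, not to a central service):

 def third_order_score(ratings, weigh_rater):
     """Combine other people's ratings of a source into one score,
     using a weighting function chosen (and changeable) by the user.

     ratings     -- {rater: score out of 10}, from signed statements
     weigh_rater -- user-supplied function: rater -> weight
     """
     norm = sum(weigh_rater(r) for r in ratings)
     total = sum(weigh_rater(r) * s for r, s in ratings.items())
     return total / norm if norm else None

 # Example: weigh raters D, E and F by how well their past ratings
 # matched this user's own judgements (a hypothetical history).
 my_agreement = {"D": 0.9, "E": 0.5, "F": 0.1}
 print(round(third_order_score({"D": 8, "E": 4, "F": 1},
                               lambda r: my_agreement.get(r, 0.0)), 2))  # 6.2

Swap in a different weigh_rater and the same signed data yields a different ranking; that is the sense in which the computer only mediates.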


How could this be applied to Diaspora?

1. Let users link to a tag ontology, to facilitate automatic translation between how different users categorise the topic of something they've written (http://tomgruber.org/writing/ontology-of-folksonomy.htm)

2. Let users 'subscribe' to or 'friend' not only other users, but specific tag streams of those users.  And rather than making it a binary 'subscribe' or 'don't subscribe' decision, also allow them to rate or set a priority on that subscription, which can then affect the order in which they get shown new stuff.

3. Let the users expose these ratings in anonymous and non-anonymous form.  So if I 'friend' a person, PeterRabbit?, who generally posts stuff under three different tags "bicycles", "unix distributions" and "the life of saint john the divine", I can see how others have rated his comparative 'expertise' at writing on those topics.

4. For each user, create an additional synthetic 'tag', "my ratings", which lets people rate how good specific other users are at rating people.

5. Allow multiple trust metrics to be used to take this data and spin webs of trust from it (a minimal data-model sketch of points 1-5 follows below).
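
Here is how that data might look, as a minimal Python sketch. All class and field names, and the toy ontology format, are hypothetical illustrations rather than a proposed Diaspora schema:

 from dataclasses import dataclass, field

 # 1. A toy tag ontology: maps each user's personal tag to a shared
 #    concept, so "the life of saint john the divine" and "christianity"
 #    can be recognised as the same topic.
 TAG_ONTOLOGY = {
     "the life of saint john the divine": "christianity",
     "christianity": "christianity",
     "bicycles": "bicycles",
     "unix distributions": "unix_distributions",
 }

 @dataclass
 class Subscription:    # 2. subscribe to a specific tag stream...
     author: str
     tag: str
     priority: int = 5  #    ...with a priority, not just on/off

 @dataclass
 class Rating:          # 3. an exposable rating of author:tag
     rater: str
     author: str
     tag: str           # 4. tag == "my ratings" is the synthetic
     score: int         #    meta-tag rating the author *as a rater*
     anonymous: bool = False

 @dataclass
 class UserProfile:
     name: str
     subscriptions: list = field(default_factory=list)
     ratings_given: list = field(default_factory=list)
     trust_metric: str = "equal weighting"  # 5. per-user and swappable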


So for example, suppose three users "Tom", "Dick" and "Harry" all subscribe to PeterRabbit?'s writing.

Tom rates PeterRabbit?:bicycles as 9/10, PeterRabbit?:unix_distributions as 4/10 and PeterRabbit?:christianity as 1/10

Dick rates PeterRabbit?:bicycles as 8/10, PeterRabbit?:unix_distributions as 3/10 and PeterRabbit?:christianity as 1/10

Harry rates PeterRabbit?:bicycles as 5/10, PeterRabbit?:unix_distributions as 5/10 and PeterRabbit?:christianity as 10/10


Now let's look at three users new to PeterRabbit?; Alice, Eve and Lilith.

Alice uses the default trust metric that gives everything an equal weighting.  So she sees all PeterRabbit?'s postings, whatever their topic, in the order that PeterRabbit? writes them.

Eve uses a recursive trust metric that takes into account that on average people think Tom is fairly reliable at rating people, Dick is very reliable at rating people and Harry is pretty bad at rating people.  Her trust metric gives Dick's ratings most weight and suggests a default for PeterRabbit? of PeterRabbit?:bicycles as 8.2/10, PeterRabbit?:unix_distributions as 3.2/10 and PeterRabbit?:christianity as 1.4/10.  Since Eve uses a threshold cutoff of 6, she only sees PeterRabbit?'s writings on bicycles.
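
Eve's calculation, as a minimal Python sketch. The text gives only her resulting figures, so the rater weights below are hypothetical ones chosen to roughly reproduce them:

 weights = {"Tom": 0.15, "Dick": 0.80, "Harry": 0.05}  # Dick weighted most

 ratings = {  # each rater's scores for PeterRabbit's tag streams
     "Tom":   {"bicycles": 9, "unix_distributions": 4, "christianity": 1},
     "Dick":  {"bicycles": 8, "unix_distributions": 3, "christianity": 1},
     "Harry": {"bicycles": 5, "unix_distributions": 5, "christianity": 10},
 }

 THRESHOLD = 6  # Eve's cutoff: streams scoring below this are hidden

 for tag in ("bicycles", "unix_distributions", "christianity"):
     score = sum(weights[r] * ratings[r][tag] for r in weights)
     print(tag, round(score, 2), "shown" if score >= THRESHOLD else "hidden")
 # Only the bicycles stream (score about 8) clears Eve's cutoff of 6.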

Lilith uses a trust metric that starts from specific ratings she has already made, and seeks recommendations from others who have rated those same people at a similar level.  It so happens that Harry has been spot on at rating highly several other users whom Lilith also rated highly, so despite the general view, Lilith gives Harry's ratings a high weighting, and ends up reading only PeterRabbit?'s writings on christianity.
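
A minimal sketch of Lilith's metric, assuming hypothetical rating histories and a hypothetical agreement measure (inverse mean absolute difference; any measure of agreement would do):

 def similarity(mine, theirs):
     """Agreement in [0, 1] between two rating histories, over the
     subjects both have rated, on a 10-point scale."""
     common = set(mine) & set(theirs)
     if not common:
         return 0.0
     mean_gap = sum(abs(mine[s] - theirs[s]) for s in common) / len(common)
     return 1.0 - mean_gap / 10.0

 # Hypothetical rating histories over three users X, Y and Z:
 lilith = {"X": 9, "Y": 8, "Z": 2}
 tom    = {"X": 2, "Y": 3, "Z": 9}
 harry  = {"X": 9, "Y": 7, "Z": 1}

 print(round(similarity(lilith, tom), 2))    # 0.37 - Tom's ratings weigh little
 print(round(similarity(lilith, harry), 2))  # 0.93 - Harry's ratings weigh a lot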


There's an interesting discussion on [what level rating Mentifex should have on Advogato].

It seems clear that there needs to be some sort of feedback mechanism, or other corrective characteristic, that catches people being careless about whom they certify, unclear on the implications of doing so, or unaware of what someone is up to.  Perhaps a 'stock market'-like investment where you 'invest' shares of your own reputation in whether someone else's rep will go up or down?
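
As a toy sketch of that investment idea (the mechanics here are entirely hypothetical): back someone with a stake of your own reputation, and gain or lose in proportion to how their reputation then moves.

 def settle_investment(stake, direction, rep_before, rep_after):
     """Return the backer's reputation change for one investment.

     stake     -- reputation points put at risk
     direction -- +1 betting the rep rises, -1 betting it falls
     """
     move = (rep_after - rep_before) / rep_before  # fractional change
     return stake * direction * move               # win or lose in kind

 # Careless certification now has a price: back a user whose reputation
 # then drops 20%, and you lose 20% of your stake.
 print(settle_investment(stake=10, direction=+1, rep_before=50, rep_after=40))  # -2.0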




SeeAlso: [The Evolution of Reputation Archive at Smart Mobs], [Social Text on Whuffie], [Lisa Rein on Reputation Economics], [Tim Bray on Social Networks]

BBC: ["Fake forum comments are 'eroding' trust in the web"]


CategoryFuture
See also: /TheFuture /DistributedComputing /LivingApplications /KnowledgeStructures /MediatedTrust /SocialConsequences /AmiCog /ToothyCog
