
ec2-3-145-94-130.us-east-2.compute.amazonaws.com | ToothyWiki | DouglasReay | RecentChanges | Login | Webcomic

Distributed Computing



If you chop up most multi-cellular organisms, they die.

Slime moulds are different.  You can chop them up.  You can even incinerate half of one and pass the rest of it through a sieve.  You still end up with a functioning organism.

In 2003, most computer systems are fragile.  If you have 10 computers on a local network (say a firewall, a gateway, an external web server, an internal file server, and some terminals) and you turn off any one of them, you lose functionality.  If you blow up the wrong one, you don't just have one fewer terminal available: your whole network goes down and user data gets permanently destroyed.

There are a number of technologies existing now that attempt to address this problem.


2012
[MEGA] - encrypted free anonymous cloud file system, designed to be hard for governments to shut down, and easy to interface to (you can just map a drive letter to it)

2015
[Invisible Internet Project] - anonymous distribution, to cryptographic addresses rather than IP addresses
[IIP] - you can browse it yourself
[Kademlia : The Next Generation] - uses attack-resistant trust metrics
[ipredia] - an OS layered over a distributed network
[Retroshare] - works on RaspberryPi?, via a multi-hop swarming system

Google have [allegedly] [addressed] this [problem], and appear to have been successful. They're [hiring]. --MoonShadow
Thanks for the links.  It looks, though, like while they have failover and load balancing down to a T, they don't really have the amorphous, symmetric reliability and distribution I'm thinking of here.  You can't sit down at any random machine in their cluster, have it fall over, and carry on your session on the next machine over like nothing happened. --DR

Tim O'Reilly wrote [an interesting piece] on the link between peer to peer and distributed computing.

Part of Steve Yegge's drunken rant about [maths every day] argues that this is probably how computers were intended to work in the first place. --SGB

In 30 years' time, computing is going to be a lot more reliable.  You will not be limited to the resources (CPU, memory, storage) stuck in the box in front of you.  And losing that box will become a minor inconvenience instead of a tragedy.

This presumes you trust a third party to store your stuff for you, both from a security and a reliability POV. Related: MoonShadow/DistributedFileSystem, http://www.m-o-o-t.org/ etc

Instead, the resources available to you will become a question of access.  In physical terms, that comes down to bandwidth and the topology of which networks are gated to which.  In more abstract terms, it becomes a question of identity and permission.  Whether that is measured in mojo, reputation, dollars or recommendations, the effect is /MediatedTrust.

You're still depending on network structure for reliability. Redundant network access is far too expensive for most people, and is likely to remain so for as long as it is not free, since most people object to paying more than once for what they perceive as the same thing. If the network gateway goes down, you've lost everything you've been storing remotely for the duration and can't do any work.

Another problem you may have with this kind of ideal: Say you're sitting at a computer and it falls over, so you move to another computer, and try to carry on. You lose the last five minutes of work (or five seconds - it doesn't matter). But it turns out that it was only the network cable that got pulled out the back of the machine, and your friendly admin has in the meantime come along and plugged it in. So now the system has two branching histories to reconcile - parallel universes which may disagree. One standard limitation with distributed systems like this is that the systems that form the "authority" for the data (in your case, all of them) must be present in a quorum of more than half their number in order to authoritatively say anything about the data. This prevents parallel universes because no more than one opinion about the data can be validly claimed. So, to knock out your distributed system, you only need to kill half of the nodes. --Admiral
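The quorum rule Admiral describes can be sketched in a few lines of Python (the function name `has_quorum` is made up for illustration; this is not code from any particular distributed system). The key property is that a strict majority of a replica set must agree before anything authoritative can be said about the data, and since any two majorities of the same set must share at least one node, two contradictory "parallel universes" can never both be validly claimed.

```python
def has_quorum(acks: int, cluster_size: int) -> bool:
    """True iff a strict majority of the cluster's nodes agreed."""
    return acks > cluster_size // 2

# With 10 nodes, 5 acknowledgements is NOT a quorum (5 > 5 is false)...
assert not has_quorum(5, 10)
# ...so killing just half the nodes (leaving 5) is enough to halt the system,
# exactly as Admiral says.
assert has_quorum(6, 10)

# Overlap check: any two subsets each larger than half of n must intersect,
# because their combined size exceeds n.
n = 10
quorum_a = set(range(0, 6))      # nodes 0..5
quorum_b = set(range(4, 10))     # nodes 4..9
assert quorum_a & quorum_b       # they share nodes 4 and 5
```

The flip side of this safety property is availability: the stricter the quorum, the fewer node failures the system can tolerate before it stops answering at all.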



Possible way to implement an AmiCog? - [ikiwiki.info/ ikiwiki] can run over [GitTorrent].



CategoryFuture
See also: /TheFuture /DistributedComputing /LivingApplications /KnowledgeStructures /MediatedTrust /SocialConsequences /AmiCog /ToothyCog

Last edited January 30, 2015 12:04 pm (viewing revision 31, which is the newest) (diff)