• The #Hadoop Distributed File System
    http://www.aosabook.org/en/hdfs.html

    chapter of the #book « The Architecture of Open Source Applications » devoted to the Hadoop distributed #filesystem; I find the part on data durability interesting:

    Replication of data three times is a robust guard against loss of data due to uncorrelated node failures. It is unlikely Yahoo! has ever lost a block in this way; for a large cluster, the probability of losing a block during one year is less than 0.005. The key understanding is that about 0.8 percent of nodes fail each month. (...) The probability of several nodes failing within two minutes such that all replicas of some block are lost is indeed small.
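    To make the order of magnitude concrete, here is a quick back-of-envelope sketch (my own, not from the chapter). Only the 0.8%-per-month failure rate and the ~2-minute re-replication window come from the text; the cluster size and per-node block count are invented, and failures are assumed independent:

        # Back-of-envelope estimate of uncorrelated triple-replica loss.
        # Only the 0.8%/month rate and the 2-minute window come from the chapter;
        # the cluster size and block counts below are hypothetical.
        MONTH_MINUTES = 30 * 24 * 60   # ~43,200 minutes in a month
        P_FAIL_MONTH = 0.008           # ~0.8% of nodes fail each month (chapter)
        WINDOW_MIN = 2                 # re-replication window (chapter)

        NODES = 3500                   # hypothetical cluster size
        BLOCKS_PER_NODE = 60_000       # hypothetical blocks per node

        # Probability a given node fails inside one two-minute window.
        p_window = P_FAIL_MONTH * WINDOW_MIN / MONTH_MINUTES

        # After one replica dies, a block is lost only if the two remaining
        # replica holders also die before re-replication completes.
        p_block_lost_given_failure = p_window ** 2

        failures_per_year = NODES * P_FAIL_MONTH * 12
        lost_blocks_per_year = failures_per_year * BLOCKS_PER_NODE * p_block_lost_given_failure

        print(f"p(node fails in a 2-min window) ~ {p_window:.2e}")
        print(f"expected lost blocks per year   ~ {lost_blocks_per_year:.2e}")

    Even with generous numbers the expected yearly loss comes out many orders of magnitude below the chapter's 0.005 bound, which is exactly the point of triple replication against uncorrelated failures.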

    Correlated failure of nodes is a different threat. The most commonly observed fault in this regard is the failure of a rack or core switch. (...) If the loss of power spans racks, it is likely that some blocks will become unavailable. But restoring power may not be a remedy because one-half to one percent of the nodes will not survive a full power-on restart. Statistically, and in practice, a large cluster will lose a handful of blocks during a power-on restart.
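    The same kind of rough estimate for the restart scenario, assuming replicas are placed independently (HDFS's rack-aware placement makes this only approximate) and a hypothetical total block count:

        # If a whole-cluster power-on restart kills a fraction q of nodes,
        # a block is lost when all three of its replica holders are among them.
        Q_DEAD = 0.005                 # 0.5-1% of nodes don't survive restart (chapter)
        TOTAL_BLOCKS = 50_000_000      # hypothetical total block count

        expected_lost = TOTAL_BLOCKS * Q_DEAD ** 3
        print(f"expected blocks lost in a restart ~ {expected_lost:.1f}")  # ~6 blocks

    Because the loss probability scales as q cubed, even a restart that kills 1% of the nodes loses only a few blocks per million, consistent with the chapter's « a handful of blocks ».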

    In addition to total failures of nodes, stored data can be corrupted or lost. The block scanner scans all blocks in a large cluster each fortnight and finds about 20 bad replicas in the process. Bad replicas are replaced as they are discovered.

    http://www.aosabook.org/images/cover.jpg