• Comment lutter contre la hausse du prix des disque durs, quand on doit approvisionner un datacenter


    The rain started in August and by mid-October 2011, violent floods in Thailand had crippled the factories that helped produce nearly half of the world’s hard drives. As an online backup company, Backblaze fills more than 50 TB of new drives every day. To survive this crisis without raising prices or compromising service, Backblaze deployed every last employee, as well as friends and family, to acquire drives in what became known internally as “drive farming”. What follows is how we did it.

    #HDD #farming #hacking #retail #cotsco #thailande #datacenter #cloud

  • Faire un backup complet de son Android avec adb

    Voici une petite astuce plutôt rapide pour vous expliquer comment sauvegarder et faire un backup complet de votre smartphone sous Android en étant sous Ubuntu (et ses dérivées). Bien sûr, cela fonctionne aussi sur Linux en règle général, et sur Windows, à la condition d’avoir installé adb (des android-tools). Ça se passe dans un simple […] #adb #android #backup #sauvegarde #smartphone #tablette

  • PaperBack - Sauvegarder des #données informatiques sur #papier


    “PaperBack is a free application that allows you to back up your precious files on the ordinary paper in the form of the oversized bitmaps. If you have a good laser printer with the 600 dpi resolution, you can save up to 500,000 bytes of uncompressed data on the single A4/Letter sheet. Integrated packer allows for much better data density - up to 3,000,000+ (three megabytes) of C code per page.”

    #backup #data

  • #Obnam 1.5 has a #FUSE plugin to access #backup repositories read-only: https://gist.github.com/Vayu/4547295

    # Copyright (C) 2013 Valery Yundin
    # This program is free software: you can redistribute it and/or modify
    # it under the terms of the GNU General Public License as published by
    # the Free Software Foundation, either version 3 of the License, or
    # (at your option) any later version.
    # This program is distributed in the hope that it will be useful,
    # but WITHOUT ANY WARRANTY; without even the implied warranty of
    # GNU General Public License for more details.
    # You should have received a copy of the GNU General Public License
    # along with this program. If not, see <http://www.gnu.org/licenses/>.
    import os
    import stat
    import sys
    import logging
    import errno
    import struct
    import obnamlib
    import fuse
    fuse.fuse_python_api = (0, 2)
    except ImportError:
    class Bunch:
    def __init__(self, **kwds):
    fuse = Bunch(Fuse = object)
    class ObnamFuseOptParse(object):
    '''Option parsing class for FUSE
    has to set fuse_args.mountpoint
    obnam = None
    def __init__(self, *args, **kw):
    self.fuse_args = \
    'fuse_args' in kw and kw.pop('fuse_args') or fuse.FuseArgs()
    if 'fuse' in kw:
    self.fuse = kw.pop('fuse')
    def parse_args(self, args=None, values=None):
    self.fuse_args.mountpoint = self.obnam.app.settings['to']
    for opt in self.obnam.app.settings['fuse-opt']:
    if opt == '-f':
    if not hasattr(self.fuse_args, 'ro'):
    class ObnamFuseFile(object):
    fs = None # points to active ObnamFuse object
    direct_io = False # do not use direct I/O on this file.
    keep_cache = True # cached file data need not to be invalidated.
    def __init__(self, path, flags, *mode):
    logging.debug('FUSE file open %s %d', path, flags)
    if ((flags & os.O_WRONLY) or (flags & os.O_RDWR) or
    (flags & os.O_CREAT) or (flags & os.O_EXCL) or
    (flags & os.O_TRUNC) or (flags & os.O_APPEND)):
    raise IOError(errno.EROFS, 'Read only filesystem')
    self.path = path
    if path == '/.pid' and self.fs.obnam.app.settings['viewmode'] == 'multiple':
    self.read = self.read_pid
    self.release = self.release_pid
    self.metadata = self.fs.get_metadata(path)
    # if not a regular file return EINVAL
    if not stat.S_ISREG(self.metadata.st_mode):
    raise IOError(errno.EINVAL, 'Invalid argument')
    self.chunkids = None
    self.chunksize = None
    self.lastdata = None
    self.lastblock = None
    logging.error('Unexpected exception', exc_info=True)
    def read_pid(self, length, offset):
    logging.debug('FUSE read_pid %d %d', length, offset)
    pid = str(os.getpid())
    if length < len(pid) or offset != 0:
    return ''
    return pid
    def release_pid(self, flags):
    return 0
    def fgetattr(self):
    logging.debug('FUSE file fgetattr')
    return self.fs.getattr(self.path)
    def read(self, length, offset):
    logging.debug('FUSE file read(%s, %d, %d)', self.path, length, offset)
    if length == 0 or offset >= self.metadata.st_size:
    return ''
    repo = self.fs.obnam.repo
    gen, repopath = self.fs.get_gen_path(self.path)
    # if stored inside B-tree
    contents = repo.get_file_data(gen, repopath)
    if contents is not None:
    return contents[offset:offset+length]
    # stored in chunks
    if not self.chunkids:
    self.chunkids = repo.get_file_chunks(gen, repopath)
    if len(self.chunkids) == 1:
    if not self.lastdata:
    self.lastdata = repo.get_chunk(self.chunkids[0])
    return self.lastdata[offset:offset+length]
    chunkdata = None
    if not self.chunksize:
    # take the cached value as the first guess for chunksize
    self.chunksize = self.fs.sizecache.get(gen, self.fs.chunksize)
    blocknum = offset/self.chunksize
    blockoffs = offset - blocknum*self.chunksize
    # read a chunk if guessed blocknum and chunksize make sense
    if blocknum < len(self.chunkids):
    chunkdata = repo.get_chunk(self.chunkids[blocknum])
    chunkdata = ''
    # check if chunkdata is of expected length
    validate = min(self.chunksize, self.metadata.st_size - blocknum*self.chunksize)
    if validate != len(chunkdata):
    if blocknum < len(self.chunkids)-1:
    # the length of all but last chunks is chunksize
    self.chunksize = len(chunkdata)
    # guessing failed, get the length of the first chunk
    self.chunksize = len(repo.get_chunk(self.chunkids[0]))
    chunkdata = None
    # save correct chunksize
    self.fs.sizecache[gen] = self.chunksize
    if not chunkdata:
    blocknum = offset/self.chunksize
    blockoffs = offset - blocknum*self.chunksize
    if self.lastblock == blocknum:
    chunkdata = self.lastdata
    chunkdata = repo.get_chunk(self.chunkids[blocknum])
    output = []
    while True:
    readlength = len(chunkdata) - blockoffs
    if length > readlength and blocknum < len(self.chunkids)-1:
    length -= readlength
    blocknum += 1
    blockoffs = 0
    chunkdata = repo.get_chunk(self.chunkids[blocknum])
    self.lastblock = blocknum
    self.lastdata = chunkdata
    return ''.join(output)
    except (OSError, IOError), e:
    logging.debug('FUSE Expected exception')
    logging.exception('Unexpected exception')
    def release(self, flags):
    logging.debug('FUSE file release %d', flags)
    self.lastdata = None
    return 0
    def fsync(self, isfsyncfile):
    logging.debug('FUSE file fsync')
    return 0
    def flush(self):
    logging.debug('FUSE file flush')
    return 0
    def ftruncate(self, size):
    logging.debug('FUSE file ftruncate %d', size)
    return 0
    def lock(self, cmd, owner, **kw):
    logging.debug('FUSE file lock %s %s %s', repr(cmd), repr(owner), repr(kw))
    raise IOError(errno.EOPNOTSUPP, 'Operation not supported')
    class ObnamFuse(fuse.Fuse):
    '''FUSE main class
    def root_refresh(self):
    logging.debug('FUSE root_refresh is called')
    if self.obnam.app.settings['viewmode'] == 'multiple':
    repo = self.obnam.repo
    generations = [gen for gen in repo.list_generations()
    if not repo.get_is_checkpoint(gen)]
    logging.debug('FUSE root_refresh found %d generations' % len(generations))
    self.rootstat, self.rootlist = self.multiple_root_list(generations)
    logging.exception('Unexpected exception')
    def get_metadata(self, path):
    #logging.debug('FUSE get_metadata(%s)', path)
    return self.metadatacache[path]
    except KeyError:
    if len(self.metadatacache) > self.MAX_METADATA_CACHE:
    metadata = self.obnam.repo.get_metadata(*self.get_gen_path(path))
    self.metadatacache[path] = metadata
    # FUSE does not allow negative timestamps, truncate to zero
    if metadata.st_atime_sec < 0:
    metadata.st_atime_sec = 0
    if metadata.st_mtime_sec < 0:
    metadata.st_mtime_sec = 0
    return metadata
    def get_stat(self, path):
    logging.debug('FUSE get_stat(%s)', path)
    metadata = self.get_metadata(path)
    st = fuse.Stat()
    st.st_mode = metadata.st_mode
    st.st_dev = metadata.st_dev
    st.st_nlink = metadata.st_nlink
    st.st_uid = metadata.st_uid
    st.st_gid = metadata.st_gid
    st.st_size = metadata.st_size
    st.st_atime = metadata.st_atime_sec
    st.st_mtime = metadata.st_mtime_sec
    st.st_ctime = st.st_mtime
    return st
    def single_root_list(self, gen):
    repo = self.obnam.repo
    mountroot = self.obnam.mountroot
    rootlist = {}
    for entry in repo.listdir(gen, mountroot):
    path = '/' + entry
    rootlist[path] = self.get_stat(path)
    rootstat = self.get_stat('/')
    return (rootstat, rootlist)
    def multiple_root_list(self, generations):
    repo = self.obnam.repo
    mountroot = self.obnam.mountroot
    rootlist = {}
    used_generations = []
    for gen in generations:
    path = '/' + str(gen)
    genstat = self.get_stat(path)
    start, end = repo.get_generation_times(gen)
    genstat.st_ctime = genstat.st_mtime = end
    rootlist[path] = genstat
    except obnamlib.Error:
    if not used_generations:
    raise obnamlib.Error('No generations found for %s' % mountroot)
    latest = used_generations[-1]
    laststat = rootlist['/' + str(latest)]
    rootstat = fuse.Stat(**laststat.__dict__)
    laststat = fuse.Stat(target=str(latest), **laststat.__dict__)
    laststat.st_mode &= ~(stat.S_IFDIR | stat.S_IFREG)
    laststat.st_mode |= stat.S_IFLNK
    rootlist['/latest'] = laststat
    pidstat = fuse.Stat(**rootstat.__dict__)
    pidstat.st_mode = stat.S_IFREG | stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
    rootlist['/.pid'] = pidstat
    return (rootstat, rootlist)
    def init_root(self):
    repo = self.obnam.repo
    mountroot = self.obnam.mountroot
    generations = self.obnam.app.settings['generation']
    if self.obnam.app.settings['viewmode'] == 'single':
    if len(generations) != 1:
    raise obnamlib.Error(
    'The single mode wants exactly one generation option')
    gen = repo.genspec(generations[0])
    if mountroot == '/':
    self.get_gen_path = lambda path: (gen, path)
    self.get_gen_path = (lambda path : path == '/'
    and (gen, mountroot)
    or (gen, mountroot + path))
    self.rootstat, self.rootlist = self.single_root_list(gen)
    logging.debug('FUSE single rootlist %s', repr(self.rootlist))
    elif self.obnam.app.settings['viewmode'] == 'multiple':
    # we need the list of all real (non-checkpoint) generations
    if len(generations) == 1:
    generations = [gen for gen in repo.list_generations()
    if not repo.get_is_checkpoint(gen)]
    if mountroot == '/':
    def gen_path_0(path):
    if path.count('/') == 1:
    gen = path[1:]
    return (int(gen), mountroot)
    gen, repopath = path[1:].split('/', 1)
    return (int(gen), mountroot + repopath)
    self.get_gen_path = gen_path_0
    def gen_path_n(path):
    if path.count('/') == 1:
    gen = path[1:]
    return (int(gen), mountroot)
    gen, repopath = path[1:].split('/', 1)
    return (int(gen), mountroot + '/' + repopath)
    self.get_gen_path = gen_path_n
    self.rootstat, self.rootlist = self.multiple_root_list(generations)
    logging.debug('FUSE multiple rootlist %s', repr(self.rootlist))
    raise obnamlib.Error('Unknown value for viewmode')
    def __init__(self, *args, **kw):
    self.obnam = kw['obnam']
    ObnamFuseFile.fs = self
    self.file_class = ObnamFuseFile
    self.metadatacache = {}
    self.chunksize = self.obnam.app.settings['chunk-size']
    self.sizecache = {}
    self.rootlist = None
    self.rootstat = None
    fuse.Fuse.__init__(self, *args, **kw)
    def getattr(self, path):
    if path.count('/') == 1:
    if path == '/':
    return self.rootstat
    elif path in self.rootlist:
    return self.rootlist[path]
    raise obnamlib.Error('ENOENT')
    return self.get_stat(path)
    except obnamlib.Error:
    raise IOError(errno.ENOENT, 'No such file or directory')
    logging.error('Unexpected exception', exc_info=True)
    def readdir(self, path, fh):
    logging.debug('FUSE readdir(%s, %s)', path, repr(fh))
    if path == '/':
    listdir = [x[1:] for x in self.rootlist.keys()]
    listdir = self.obnam.repo.listdir(*self.get_gen_path(path))
    return [fuse.Direntry(name) for name in ['.', '..'] + listdir]
    except obnamlib.Error:
    raise IOError(errno.EINVAL, 'Invalid argument')
    logging.error('Unexpected exception', exc_info=True)
    def readlink(self, path):
    statdata = self.rootlist.get(path)
    if statdata and hasattr(statdata, 'target'):
    return statdata.target
    metadata = self.get_metadata(path)
    if metadata.islink():
    return metadata.target
    raise IOError(errno.EINVAL, 'Invalid argument')
    except obnamlib.Error:
    raise IOError(errno.ENOENT, 'No such file or directory')
    logging.error('Unexpected exception', exc_info=True)
    def statfs(self):
    logging.debug('FUSE statfs')
    repo = self.obnam.repo
    if self.obnam.app.settings['viewmode'] == 'multiple':
    blocks = sum(repo.client.get_generation_data(gen)
    for gen in repo.list_generations())
    files = sum(repo.client.get_generation_file_count(gen)
    for gen in repo.list_generations())
    gen = self.get_gen_path('/')[0]
    blocks = repo.client.get_generation_data(gen)
    files = repo.client.get_generation_file_count(gen)
    stv = fuse.StatVfs()
    stv.f_bsize = 65536
    stv.f_frsize = 0
    stv.f_blocks = blocks/65536
    stv.f_bfree = 0
    stv.f_bavail = 0
    stv.f_files = files
    stv.f_ffree = 0
    stv.f_favail = 0
    stv.f_flag = 0
    stv.f_namemax = 255
    #raise OSError(errno.ENOSYS, 'Unimplemented')
    return stv
    logging.error('Unexpected exception', exc_info=True)
    def getxattr(self, path, name, size):
    logging.debug('FUSE getxattr %s %s %d', path, name, size)
    metadata = self.get_metadata(path)
    except ValueError:
    return 0
    if not metadata.xattr:
    return 0
    blob = metadata.xattr
    sizesize = struct.calcsize('!Q')
    name_blob_size = struct.unpack('!Q', blob[:sizesize])[0]
    name_blob = blob[sizesize : sizesize + name_blob_size]
    name_list = name_blob.split('\0')[:-1]
    if name in name_list:
    value_blob = blob[sizesize + name_blob_size : ]
    idx = name_list.index(name)
    fmt = '!' + 'Q' * len(name_list)
    lengths_size = sizesize * len(name_list)
    lengths_list = struct.unpack(fmt, value_blob[:lengths_size])
    if size == 0:
    return lengths_list[idx]
    pos = lengths_size + sum(lengths_list[:idx])
    value = value_blob[pos:pos + lengths_list[idx]]
    return value
    except obnamlib.Error:
    raise IOError(errno.ENOENT, 'No such file or directory')
    logging.error('Unexpected exception', exc_info=True)
    def listxattr(self, path, size):
    logging.debug('FUSE listxattr %s %d', path, size)
    metadata = self.get_metadata(path)
    if not metadata.xattr:
    return 0
    blob = metadata.xattr
    sizesize = struct.calcsize('!Q')
    name_blob_size = struct.unpack('!Q', blob[:sizesize])[0]
    if size == 0:
    return name_blob_size
    name_blob = blob[sizesize : sizesize + name_blob_size]
    return name_blob.split('\0')[:-1]
    except obnamlib.Error:
    raise IOError(errno.ENOENT, 'No such file or directory')
    logging.error('Unexpected exception', exc_info=True)
    def fsync(self, path, isFsyncFile):
    return 0
    def chmod(self, path, mode):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def chown(self, path, uid, gid):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def link(self, targetPath, linkPath):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def mkdir(self, path, mode):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def mknod(self, path, mode, dev):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def rename(self, oldPath, newPath):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def rmdir(self, path):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def symlink(self, targetPath, linkPath):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def truncate(self, path, size):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def unlink(self, path):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def utime(self, path, times):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def write(self, path, buf, offset):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def setxattr(self, path, name, val, flags):
    raise IOError(errno.EROFS, 'Read only filesystem')
    def removexattr(self, path, name):
    raise IOError(errno.EROFS, 'Read only filesystem')
    class MountPlugin(obnamlib.ObnamPlugin):
    '''Mount backup repository as a user-space filesystem.
    At the momemnt only a specific generation can be mounted
    def enable(self):
    mount_group = obnamlib.option_group['mount'] = 'Mounting with FUSE'
    self.app.add_subcommand('mount', self.mount,
    ['single', 'multiple'],
    '"single" directly mount specified generation, '
    '"multiple" mount all generations as separate directories',
    'options to pass directly to Fuse',
    metavar='FUSE', group=mount_group)
    def mount(self, args):
    '''Mount a backup repository as a FUSE filesystem.
    This subcommand allows you to access backups in an Obnam
    backup repository as normal files and directories. Each
    backed up file or directory can be viewed directly, using
    a graphical file manager or command line tools.
    Example: To mount your backup repository:
    mkdir my-fuse
    obnam mount --viewmode multiple --to my-fuse
    You can then access the backup using commands such as these:
    ls -l my-fuse
    ls -l my-fuse/latest
    diff -u my-fuse/latest/home/liw/README ~/README
    You can also restore files by copying them from the
    my-fuse directory:
    cp -a my-fuse/12765/Maildir ~/Maildir.restored
    To un-mount:
    fusermount -u my-fuse
    if not hasattr(fuse, 'fuse_python_api'):
    raise obnamlib.Error('Failed to load module "fuse", '
    'try installing python-fuse')
    self.cwd = os.getcwd()
    self.repo = self.app.open_repository()
    self.mountroot = (['/'] + self.app.settings['root'] + args)[-1]
    if self.mountroot != '/':
    self.mountroot = self.mountroot.rstrip('/')
    logging.debug('FUSE Mounting %s@%s:%s to %s', self.app.settings['client-name'],
    self.mountroot, self.app.settings['to'])
    ObnamFuseOptParse.obnam = self
    fs = ObnamFuse(obnam=self, parser_class=ObnamFuseOptParse)
    fs.flags = 0
    fs.multithreaded = 0
    except fuse.FuseError, e:
    raise obnamlib.Error(repr(e))
    def reopen(self):
    except OSError:
    self.repo = self.app.open_repository()
    - very nice to have such comfort !

    • Perso je stocke toutes mes photos (40 000 à ce jour) sur #flickr, avec un script que j’ai développé pour l’occasion. Elles sont presque toutes en mode “privé”. Un second script me permet de retélécharger soit tout (pour avoir une copie locale), soit par album, par tag, etc.

      Mais il faut qu’il identifie un peu plus précisément son besoin : archivage, classement, partage ?

      Aussi, Flickr ne permet d’héberger que des JPEG, pas de RAW, enfin je crois.

    • J’avais répondu à l’ami (on parle de 4To de données, sous forme de fichiers #RAW) :

      4TB probably forces you to have multiple hard drives, any of which will fail at any moment.

      Maybe your best software bet is to use #git-annex assistant.

      see the video, it shows how it works: it manages your files and puts them in several places of your choice, according to certain rules (for example, have at least 2 copies of each file in different locations). It’s all free software and there are no tricks, so if for some reason it’s broken, your files will still be available by other means.

      Then you’ll have to do some maths to know how much it’s going to cost.

      If you have your own hard drives, you must set them up with #RAID, and an off-site #backup, which means you need at least 500€ of hard drives. You’ll have to replace them once in a while, because they fail. Also, add in electricity :^)

      In the cloud:

      – If you use #Gandi_Simple_Hosting, you’ll need 4 instances that each costs 186,52 €/month

      – With #OVH_Cloud_storage ( http://www.ovh.com/fr/cloud/stockage ), you’ll pay something like 360€/month.

      – If you choose #Amazon_S3 to store the data, it will cost about the same: 440$/month.

      – A much better option financially is to store your data on #Amazon_Glacier ; it will cost you “only” 44$/month ( http://aws.amazon.com/fr/s3/pricing ); it’s probably an excellent option for backup, but you have to know that it is very slow when you need to retrieve a file.

      Also, if you want to share files with people, or look at your photos through a website, you’ll probably need another system, just for that. This system would only need to have the database of your files, and thumbnail images, and know how to communicate with the large storages. This, in terms of hosting, is cheap. But it will need a little programming.

    • Finally I decided for this options:
      – 2 hard disks 2,5 inches of 2TB each usb3 always with me when I travel. I found a really cheap offer less of 200€ for the couple.
      – When my wallet will be ready around 300/500 € I’ll install my NAS 4HD with a proper Raid system inside a friend’s web-farm.

      After many investigations I decided for these indepenpent solutions. My archive is made up 95% by RAW files.

    • Coucou, alors de mon coté :

      j’ai entre 5TO et 6TO de donnés photos à gerer ( des raws en particulier )

      – sur ma machine de retouche, j’ai 6TO en raid que je sauvegarde sur un Drobo en FW 800 toute les semaines .

      – je sauvegarde les 6TO sur amazon glacier avec le logiciel Arq sur macosx . Amazon glacier ne coute quasi rien par rapport aux autres solutions mais c’est du stockage lent . Il faut faire fondre la sauvegarde pour recuperer les fichiers.

      – je sauvegarde certaines series sur photoshelter et sur flickr .

  • btsync : preserve file ownership on sync

    This feature is not implemented yet. Honestly speaking, I don’t know how to implement it. Keep in mind that Sync is cross platform, so if you have file that is created by jon and synchronize it with Windows, how we should preserve the user name.

    Pourtant la seule chose qui me manque dans cet outil... Il faudrait un genre de rsync over bittorrent...

    #forum.bittorrent.com #btsync #rsync #bittorrent #backup #windows #unix

  • AutoMySQLBackup

    AutoMySQLBackup with a basic configuration will create Daily, Weekly and Monthly #backups of one or more of your #MySQL databases from one or more of your MySQL servers.

    Other Features include:
    – Email notification of backups
    – Backup Compression and Encryption
    – Configurable backup rotation
    – Incremental database backups

    quelqu’un connaît ? ça semble aussi simple que apt-get install automysqlbackup et ça remplacerait mes scripts persos


  • Chacun sa backup, et les fichiers seront bien sauvegardés (1ere partie) - MARIE & JULIEN

    Si je vous parle de backup, vous allez me dire que oui, vous en avez une, mais que vous l’avez rencontrée pendant les vacances, qu’elle est dans un autre collège dans une autre ville, que de toutes façons je ne la connais pas, mais vous jurerez sur la tête de vos mère que vous l’avez faite récemment. Dans ce billet, je vais faire un tour d’horizon des solutions de sauvegarde existante, puis dans la 2e partie, vous parler de ce que j’ai mis en place pour mon usage personnel.

    Chacun sa backup, et les fichiers seront bien sauvegardés (2e partie, ma stratégie perso) - MARIE & JULIEN

    Dans la 1ère partie de ce dossier backup, nous avons fait un tour d’horizon de quelques solutions de sauvegardes existante. Ici, nous allons voir laquelle nous avons mis en place, et pour quel type de données.

  • Introducing BRIC (Bunch of Redundant Independent Clouds) « Bitcartel Blog

    Online storage providers are handing out free storage like candy.  Add them all up and soon you’re talking about a serious amount of space.  So let’s have some fun by turning ten different online storage providers into a single data grid which is secure, robust, and distributed.  We call this grid a BRIC (Bunch of Redundant Independent Clouds).

    The BRIC solution presented here will use the open source project Tahoe-LAFS to perform the RAID like function of striping data across different storage providers. Here’s how Tahoe-LAFS describes itself:
    Tahoe-LAFS is a Free and Open cloud storage system. It distributes your data across multiple servers. Even if some of the servers fail or are taken over by an attacker, the entire filesystem continues to function correctly, including preservation of your privacy and security.

    Solution de backup utilisant https://tahoe-lafs.org/trac/tahoe-lafs

    #backups #storage #infonuagique #tahoe-lafs

  • Il est rare qu’un acteur majeur de l’Internet documente avec autant de détail un gros plantage opérationnel. Ici, le #RIPE-NCC décrit en détail leur dernier problème #DNS. Murphy était en forme et plusieurs problèmes successifs sont survenus.


    Les ingénieurs système expérimentés ne seront pas surpris d’apprendre que, lorsque le RIPE-NCC a décidé d’utiliser les sauvegardes, ils sont sont aperçus qu’il n’y en avait pas...

    #backup #sécurité #résilience

  • #seen-local : un outil pour créer une copie locale statique de mes seens

    # ma config
    # stockage de mon backup xml
    # mes fichiers exportes
    # mac os x

    Voici un script qui découpe un backup #seenthis en petits fichiers sur le disque dur ; avec l’idée du coup de pouvoir retrouver mes seens en local, directement dans l’indexation de mon disque dur ; couplé avec git ça devrait être pas mal ?

  • La sauvegarde sur un disque dur réseau via Time Machine sur un disque dur non « officiel » (Time Capsule, OS X Serveur) fonctionne. Toutefois, une fois arrivé à saturation, un problème subsiste. En effet, plutôt que d’effacer la ou les plus anciennes sauvegardes pour faire de la place pour la nouvelle, Time Machine décide d’effacer toutes les sauvegardes sont la dernière en date.

    En fait, en entreprise, Time Machine est un Cheval de Troie visant à vendre des Time Capsules et des Mac supplémentaires sous OS X Serveur. ;-)

    #os_x #mac #time_capsule #time_machine #sauvegarde #backup #réseau #business #cheval_de_troie #entreprise

  • Box Backup

    Box Backup is an open source, completely automatic, on-line backup system. It has the following key features:

    – All backed up data is stored on the server in files on a filesystem - no tape, archive or other special devices are required.

    – The server is trusted only to make files available when they are required - all data is encrypted and can be decoded only by the original client. This makes it ideal for backing up over an untrusted network (such as the Internet), or where the server is in an uncontrolled environment.

    – A backup daemon runs on systems to be backed up, and copies encrypted data to the server when it notices changes - so backups are continuous and up-to-date (although traditional snapshot backups are possible too).

    – Only changes within files are sent to the server, just like rsync, minimising the bandwidth used between clients and server. This makes it particularly suitable for backing up between distant locations, or over the Internet.

    – It behaves like tape - old file versions and deleted files are available.

    – Old versions of files on the server are stored as changes from the current version, minimising the storage space required on the server. Files are the server are also compressed to minimise their size.

    Je l’utilise depuis des mois et j’en suis très content : backups sur un serveur externe en continu et chiffrés. Seule la création des comptes est un peu laborieuse.

    #backup #boxbackup

  • slight paranoia: How Dropbox sacrifices user privacy for cost savings

    What this means, is that from the comfort of their desks, law enforcement agencies or copyright trolls can upload contraband files to #Dropbox, watch the amount of bandwidth consumed, and then obtain a court order if the amount of data transferred is smaller than the size of the file.

    Last year, the New York Attorney General announced that #Facebook, MySpace and IsoHunt had agreed to start comparing every image uploaded by a user to an AG supplied database of more than 8000 hashes of child pornography. It is easy to imagine a similar database of hashes for pirated movies and songs, ebooks stripped of DRM, or leaked US government diplomatic cables.

    #cloud #sécurité #privacy