Posted by Andre Kuehnemund, Last modified by Andre Kuehnemund on 06 May 2014 13:37
On file systems without hardware snapshot capability, such as HFS on Mac, NTFS on Windows, or ext4 on Linux, P5 creates and maintains snapshots using soft and hard links. A lot of effort has been put into that implementation, but it is limited by the total number of files the underlying file system can handle.
Such a link-based data repository requires additional internal maintenance work. There is no exact number of workstations or files at which the process reaches its limit, because these parameters depend on the hardware.
That system works well for smaller installations with up to 20-25 workstations, depending on how many files are saved per workstation and how often backups are done.
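The link mechanism can be sketched with standard tools. This is not P5's actual implementation, and the repo/snap.* paths are hypothetical; it only illustrates why each retained snapshot adds directory entries for every file:

```shell
# A minimal sketch (not P5's actual code) of a hard-link based snapshot,
# similar in spirit to a link-based repository. All paths are hypothetical.
set -e
mkdir -p repo/snap.1
echo "data" > repo/snap.1/file.txt

# A new snapshot hard-links unchanged files instead of copying their data,
# so it costs directory entries and inode references, not data blocks.
cp -al repo/snap.1 repo/snap.2

# Both directory entries now reference the same inode.
[ repo/snap.1/file.txt -ef repo/snap.2/file.txt ] && echo "same inode"
```

This also shows why the snapshot count multiplies the number of file entries the file system must manage: every retained snapshot adds one link per file, which is what eventually limits link-based repositories.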
On bigger installations, we highly recommend a repository on a file system that supports snapshots. Currently, Solaris with a ZFS file system and Linux with a BTRFS file system are supported. On these machines, the file system creates snapshots natively using copy-on-write (COW). Such systems can handle more data and more files, as the folder structure does not require explicit maintenance.
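For illustration, native snapshot creation on these file systems looks like the following. The pool, subvolume, and snapshot names are hypothetical, and P5 issues such snapshots itself; the commands are shown only to make the mechanism concrete:

```shell
# ZFS (Solaris): create a copy-on-write snapshot of a dataset.
# "tank/b2go" and "mon" are hypothetical names.
zfs snapshot tank/b2go@mon
zfs list -t snapshot                       # list existing snapshots

# BTRFS (Linux): snapshot a subvolume read-only; only blocks that
# change afterwards consume additional space.
btrfs subvolume snapshot -r /b2go /b2go/.snap-mon
btrfs subvolume list /b2go                 # list subvolumes/snapshots
```

Because the snapshot is taken at the block level, no per-file links need to be created or maintained, which is what removes the file-count bottleneck of link-based repositories.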
On such systems, many more files can be maintained, so installations with up to 100-150 workstations may be possible.
Still, the I/O load and the network bandwidth to the Backup2Go server limit the total size and total number of files; as above, the exact values are hardware dependent.
If the server turns out to be too slow even with hardware snapshots, we recommend splitting the installation across multiple Backup2Go servers.
In addition to the above, there are some hardware recommendations for Backup2Go servers.