Client had 7 offices, each connected via a T1. Each office had a Windows 2003 server with nightly backups to tape, and each server had roughly 100 gigabytes of data to back up. The existing system had several problems: complete automation was impossible because individual office managers had to swap tapes, recovery was slow and required searching through individual tapes for the correct data, and the per-gigabyte cost of tape was high. The client was looking for an automated solution that would allow remote access, off-site storage, easy rollback to any day's work, and a system that anyone could use easily.
I proposed a low-cost, high-reliability solution to migrate the company to a tapeless backup system. The proposal included a timeline, software requirements, hardware requirements, and ROI. Met with the principals and convinced the IT team and reluctant office managers that the system was secure, robust, and more convenient than the existing one. Scripted an open-source, enterprise-level incremental backup system spanning the 7 offices with both local and off-site storage. Took advantage of hard links and inodes in Linux to set up a system that gives each office daily, time-stamped directories, each containing a complete filesystem backup for that day, without filling up the hard drive. Took advantage of open-source tools providing data compression, binary differentials, and file hashing to back up the complete 100 GB system nightly over the T1 WAN in minutes. Took advantage of the Linux OS to send daily notifications of backup success or failure to the relevant parties.
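The core of the nightly job can be sketched in a few lines of shell. This is a simplified illustration rather than the production script: the hostnames, paths, and notification address are placeholders, and it assumes rsync and a local mail command are available.

    #!/bin/bash
    # Sketch of one office's nightly snapshot (placeholder paths and addresses).
    SRC="officeserver:/data/"        # office file share (example)
    DEST="/backups/office1"          # local snapshot root (example)
    TODAY=$(date +%Y-%m-%d)
    YESTERDAY=$(date -d yesterday +%Y-%m-%d)

    # rsync transfers only binary differences; -z compresses them over the T1.
    # --link-dest hard-links unchanged files against yesterday's snapshot, so
    # each dated directory looks like a full backup but only changed files
    # consume new disk space.
    if rsync -az --delete --link-dest="$DEST/$YESTERDAY" "$SRC" "$DEST/$TODAY"; then
        echo "Backup $TODAY succeeded" | mail -s "backup OK: office1" admin@example.com
    else
        echo "Backup $TODAY FAILED"   | mail -s "backup FAILED: office1" admin@example.com
    fi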
Wrote the backup system in bash and later migrated it to ksh to allow for binary deliverables. Later released it as the open-source "backupinator".
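For reference, ksh93 ships with shcomp, which compiles a script into a binary bytecode file that ksh can execute; this is presumably the mechanism behind the binary deliverables. A minimal, illustrative example (filenames are placeholders):

    shcomp backup.sh backup.kshbin    # compile the script to ksh bytecode
    ksh backup.kshbin                 # run the compiled deliverable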
Gave advice and support for the company's trade-show presentation of the system, and gave recommendations on a process for commercializing the backup system.
Scalability Results

Local office statistics, at just one of the 7 offices:
- Number of files to back up nightly: 395,411
- kBytes to back up nightly: 170,360,690 (≈ 170 GB)
- Number of months to store data: 4
- Number of files/directories stored after 4 months: 88,805,169 (roughly 89 million)
- Total Size Used after 4 months: 551GB
If 170 GB is backed up every night and each night's snapshot is a complete record of the entire system, why does the system take up only 551 GB after roughly 120 days instead of (120)(170 GB) = 20,400 GB? Because hard links provide instant deduplication: only files that changed since the previous night consume additional space.
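One quick way to see this on disk is to compare inode numbers for an unchanged file across two snapshot directories (paths and dates below are illustrative):

    # An unchanged file appears in every dated snapshot, but each appearance
    # points at the same inode, so it occupies disk space only once.
    stat -c '%i %n' /backups/office1/2009-01-01/report.doc \
                    /backups/office1/2009-01-02/report.doc
    # Identical inode numbers => a single physical copy on disk.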
Each office syncs with a central office server, which keeps a directory for each office (see the layout sketch after the statistics below). Statistics for the central office server:
- Number of Offices: 7
- Retention period:
  - 12 months for general files
  - indefinite for projects (currently at 2 years)
- Total size used after 2 years: 2.7 TB
- Total number of files and directories stored (as counted by ls -R): 354,271,589
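The layout on the central server looks roughly like the following; office names and dates are placeholders:

    /backups/office1/2009-01-01/   # complete snapshot of office 1 for that day
    /backups/office1/2009-01-02/   # unchanged files hard-link back to the previous day
    ...
    /backups/office7/2009-01-02/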
Again, the reason the file system doesn't take up 14,000 TB is that even though a complete file-system snapshot appears in the backup archive for each day, only the files with binary differences add to the disk space used.
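This is also easy to verify with du, since GNU du counts each hard-linked inode only once per run (the path is illustrative):

    du -sh /backups/office1     # real disk usage: hard-linked files counted once
    du -shl /backups/office1    # -l (--count-links) counts every snapshot in full,
                                # approximating the naive (days x nightly size) figure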
Later, the company asked for a more complete deduplication system at each office to reduce storage further, since employees were often uploading identical files. Set up a file-to-hash database for automated deduplication.
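The production system recorded file-to-hash mappings in a database; the sketch below illustrates the same idea using a hash-named store directory and hard links instead of a database. Paths are hypothetical, and it assumes the store lives on the same filesystem as the data:

    #!/bin/bash
    # Hash-based deduplication sketch: files with identical content hash to the
    # same key, and duplicates are collapsed into hard links to a single copy.
    STORE="/backups/office1/dedup-store"   # one copy of each unique file (example path)
    mkdir -p "$STORE"

    find /data -type f | while IFS= read -r f; do
        hash=$(sha256sum "$f" | awk '{print $1}')
        if [ -e "$STORE/$hash" ]; then
            # Content already seen: replace the duplicate with a hard link.
            ln -f "$STORE/$hash" "$f"
        else
            # First time this content appears: record it in the store.
            ln "$f" "$STORE/$hash"
        fi
    done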