
Immediate Transfer of Remote Backup Archives

PbG shared this idea 8 years ago
Completed

The new backup system stages all archives destined for remote backup locally and transfers them only after every archive has been created, which can significantly increase disk-space usage. The legacy backup system transferred each archive to the remote server as it was created and logged the details; the new backup system in 11.38.0 (build 19) does neither. Nor does it calculate the disk space needed to store all these archives locally or remotely. If inadequate disk space is found, the backup fails, and all the resources used to create and transfer archives are wasted. That could be hours of work on a production server, and it is disastrous if the archives are temporarily stored on /home and disk space is exhausted.
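The missing pre-flight check described above could look something like the following sketch. This is purely illustrative Python, not cPanel code: `estimated_archive_bytes`, `enough_staging_space`, and the 0.5 compression ratio are all assumptions for the sake of the example.

```python
import os
import shutil

def estimated_archive_bytes(account_homes, compression_ratio=0.5):
    """Rough pre-flight estimate: total size of the account home directories
    multiplied by an assumed compression ratio (0.5 is a guess, not a cPanel
    figure)."""
    total = 0
    for home in account_homes:
        for root, _dirs, files in os.walk(home):
            for name in files:
                try:
                    total += os.path.getsize(os.path.join(root, name))
                except OSError:
                    pass  # file vanished mid-walk; skip it
    return int(total * compression_ratio)

def enough_staging_space(staging_dir, account_homes):
    """Return True if the staging volume has room for the estimated archives."""
    return shutil.disk_usage(staging_dir).free >= estimated_archive_bytes(account_homes)
```

A backup job could run this check before creating any archives and abort early, rather than failing hours in after the disk fills.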

Best Answer

Hey everyone, I need to offer a bit of clarification on this one. In the current (the "new") system, backups start transferring to the remote location as soon as each account's backup is complete. There are a couple of situations, however, in which I might expect you to see what appears to be different behavior:

a) If your remote backup destination is unreachable

b) If your connection to the remote destination is slow or unstable, thereby causing the completion of the transfer of the backups to timeout or fail

If any of you are seeing something different from that, it would be considered a bug. I'm going to mark this as completed, but if you have any questions or are seeing any behavior that differs from the above, please do open a ticket with your web host or with our support team to get it addressed. If there are bugs, we want to know about them!

Replies (17)


This is really imperative.


Dear cPanel staff,

This is a critical option. On shared web servers that hold tons of data, we can't leave large amounts of free space unused just for cPanel backups, since each run wants to put all of the backups on the HDD before transferring them to the remote destination.

We believe this is a drawback in your new backup system. Why should it not work like the previous backup system did?


Please, please add this very basic 'feature'. We've had to roll back to the legacy system on many machines due to completely hosing the server after filling the disk 100% on each backup run.


We OBVIOUSLY need this!!! My server continuously runs out of space during the new backup process!! PLEASE fix this!


This is an urgent feature that we need! Our servers have been running out of disk space ever since the new backup system.


This is a pretty critical failure of the new backup system. I've been having this exact problem for months. Every night at the specified time the new backup system kicks in; an hour later the disk is at 100% usage, but the backup system just keeps going. Only once all remote archives have been copied off-site do the local backup files get removed and disk usage return to normal. I end up with an hour of unusable websites and failed mail deliveries every few days (whenever the backup runs) due to zero free disk space.


I suspect the cPanel developer team just doesn't realise how bad the problem is, or hasn't really been made aware of it.


In my opinion this shouldn't be a feature request; it should be a bug/issue.


To guarantee all backups succeed, I'd have to keep disk usage below 50% at all times, which is completely impractical.


Completely agree. Currently I can only use 1/2 of my available cPanel disk space for actual website files since the backup takes the other half before it's uploaded/deleted. Crazy!


Extremely critical feature as it is too easy to run out of disk space.

This needs to be fixed ASAP!


Please fix this


I'm not sure how this isn't considered a bug, but it should be addressed soon. I'm experiencing crashed servers because the network can't keep up with (SSD) backups, and it seems absurd to have to keep 50% of the disks free just for potential backups. Please!


I totally agree! I can't believe the devs don't see this as a MAJOR problem, and a real cost from a disk-space perspective! The new system is great but still leaves a lot to be desired!


+1 from me too. Our servers with SSD drives are so fast that creating backup archives fills up all free space faster than the finished archives can be transferred to remote storage. This causes server crashes or empty backup archives.


My suggestion is to add an option where we can define how much space (as a percentage or in GB) must be available on the drive before the backup script can continue creating new archives.

Currently, archive creation is paused if the average CPU load goes over a defined threshold; in the same way, the backup script could be paused until enough free space is available (i.e., until existing archives have been transferred and deleted).
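The pause-on-low-space idea this commenter proposes could be sketched like this. It's a hypothetical helper in Python, not anything cPanel ships; the function name and parameters are made up for illustration.

```python
import shutil
import time

def wait_for_free_space(path, min_free_bytes, poll_seconds=30, timeout_seconds=3600):
    """Block until the filesystem holding `path` has at least `min_free_bytes`
    free, polling periodically. Returns True once enough space is available,
    False if the timeout expires first."""
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        if shutil.disk_usage(path).free >= min_free_bytes:
            return True
        time.sleep(poll_seconds)  # give in-flight transfers time to free space
    return False
```

A backup loop would call this before creating each account's archive, exactly the way the existing CPU-load throttle gates archive creation today.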


Definitely a plus one from me; this is a major issue and needs to be addressed quickly. It needs to transfer each account's backup as soon as it's complete, then remove it before moving on to the next account, and so on.
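The archive-transfer-remove loop described above can be sketched in a few lines. This is an illustrative Python sketch, not cPanel's implementation; the function names are hypothetical, and `transfer` stands in for whatever transport (rsync, SFTP, etc.) the remote destination uses.

```python
import os
import tarfile

def backup_account_then_transfer(account_home, staging_dir, transfer):
    """Archive one account, hand the archive to `transfer`, then delete the
    local copy -- so the staging area never holds more than one archive."""
    name = os.path.basename(account_home.rstrip("/"))
    archive = os.path.join(staging_dir, name + ".tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(account_home, arcname=name)
    transfer(archive)   # push to the remote destination
    os.remove(archive)  # cleanup per account, not at end of the whole job

def run_backups(account_homes, staging_dir, transfer):
    for home in account_homes:
        backup_account_then_transfer(home, staging_dir, transfer)
```

With this flow, peak staging usage is one compressed account rather than every account at once, at the cost of serializing archiving behind each transfer.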


I disagree with this thread a bit. We design our servers with a backup drive that's double the space of the home drive, to provide an appropriately sized local target and allow limited local retention. I much prefer the way the new system currently works because it allows the heavy I/O work to occur during the night and the light off-site transmission workload to occur during the day. With Legacy we would see performance hits throughout the day as backups sporadically started, then a return to normal as each file was pushed.


The new backup system does asynchronously start remote transmission as backups are generated, but cleanup doesn't occur until job completion. Moving cleanup to a post-transmission step at the account level would alleviate this and allow job pausing as space gets thin. If there's an option to execute the older one-account-at-a-time method, so be it, but going back to that old way as the only option doesn't work for me. A properly designed physical environment negates the need for this request, IMO. Real-world realities are different, though, and so I understand. Account-level cleanup coupled with smart space detection and job pausing would solve this.


I have seen at least 5 other feature requests on this very topic. The dilution factor isn't helping the cause...


If we start with the premise that backups can fill the available space too quickly because off-site transmission usually takes longer, then I believe several different solutions present themselves. I LOVE the new backup system. We used to run a heavily hacked-up version of cpbackup that ran all of the flow and off-siting the way we wanted. The new backup system gave us 90% of that (compared to 10% from Legacy), so we use it now.

Backups don't just take time; they take a lot of resources. I very much like running all backups to disk first, which easily fits overnight, and letting the off-site transfer take its leisurely time throughout the day. Our hacked version did this, so the new system was a welcome change. But we design our physical environment with a dedicated backup disk twice the size of the home disk. That gives us some flex for local retention and ensures we don't fill up a drive. I wish everyone did the same, but this is clearly not the reality.

The problem with the new system is not the change in process because off-siting happens asynchronously as backups finish. The problem is two-fold:


  • Cleanup procedure
  • Space detection


Since remote transmission is occurring asynchronously, it seems foolish to leave cleanup until job completion. Clean up retention and local copies as each account's transmission completes. Doing so allows even rough space detection to pause the backup job until enough transmission and cleanup has occurred for it to continue.

These fixes would cause no change to how backups execute for us, but in more space-constrained setups they would restore more of the process/procedural flow of Legacy. A checkbox to allow the old procedural workflow would be fine too, but let's not go backwards entirely on this one.
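The two-fold fix proposed in this reply, asynchronous transfers with per-archive cleanup plus a space check that pauses archiving, could be sketched roughly as follows. This is illustrative Python only; `run_backups`, `make_archive`, and `transfer` are hypothetical names, not cPanel internals.

```python
import os
import queue
import shutil
import threading

def _transfer_worker(q, transfer):
    """Drain the queue: push each finished archive off-site, then delete it
    locally -- cleanup happens per archive, not at the end of the whole job."""
    while True:
        archive = q.get()
        if archive is None:
            break  # sentinel: no more archives coming
        transfer(archive)
        os.remove(archive)
        q.task_done()

def run_backups(accounts, make_archive, transfer, staging_dir, min_free_bytes):
    q = queue.Queue()
    worker = threading.Thread(target=_transfer_worker, args=(q, transfer))
    worker.start()
    try:
        for account in accounts:
            if shutil.disk_usage(staging_dir).free < min_free_bytes:
                q.join()  # pause archiving until pending transfers free space
                if shutil.disk_usage(staging_dir).free < min_free_bytes:
                    raise RuntimeError("staging area still full after flushing transfers")
            q.put(make_archive(account, staging_dir))
    finally:
        q.put(None)    # tell the worker to exit once the queue drains
        worker.join()
```

Archiving and transmission overlap (preserving the night-time I/O batching this commenter values), while the space check keeps the staging disk from ever filling outright.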


This is one of those 'must have' features. Can't believe that this still isn't implemented.


People use remote backups a lot of the time precisely to reduce local space consumption.


Replies have been locked on this page!