backup question

Does anyone out there know of a backup client for Windows that will pause the backup whenever the workstation is in use and resume it when the workstation goes idle again?
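
For what it’s worth, the pause/resume piece is easy enough to prototype: Windows reports the tick count of the last keyboard or mouse input via GetLastInputInfo, so a backup loop can check idle time between chunks of work and back off whenever someone is at the console. A minimal sketch in C (backup_next_chunk is a hypothetical stand-in for the real copying, not any actual client’s API):

/* Minimal sketch of the idle-gating idea, not a complete backup client.
 * Assumes the backup can be broken into small chunks (e.g. one file per
 * iteration); backup_next_chunk() below is a hypothetical stand-in. */
#include <windows.h>
#include <stdio.h>

/* Milliseconds with no keyboard/mouse input before the box counts as idle. */
#define IDLE_THRESHOLD_MS (5 * 60 * 1000)

static DWORD idle_ms(void)
{
    LASTINPUTINFO lii;
    lii.cbSize = sizeof(lii);
    if (!GetLastInputInfo(&lii))
        return 0;                       /* on failure, assume the user is busy */
    return GetTickCount() - lii.dwTime; /* ticks elapsed since last input */
}

/* Hypothetical: copy the next file on the worklist; FALSE when done. */
static int files_left = 3;
static BOOL backup_next_chunk(void)
{
    if (files_left <= 0)
        return FALSE;
    printf("copying file %d...\n", files_left--); /* stand-in for a real copy */
    return TRUE;
}

int main(void)
{
    for (;;) {
        if (idle_ms() < IDLE_THRESHOLD_MS) {
            Sleep(30 * 1000);           /* user is active: pause, then re-check */
            continue;
        }
        if (!backup_next_chunk())
            break;                      /* backup complete */
    }
    return 0;
}

The five-minute threshold and thirty-second re-check are arbitrary knobs; a real client would also want to notice input arriving during a long chunk, which argues for keeping the chunks small.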

8 Responses to “backup question”

  1. ClintJCL Says:

    Nope, but can’t you just set the process to the lowest priority?

  2. Cygnostik Says:

    Buckle down, get one good full backup off, and schedule incrementals frequently enough that they start and complete while you’re not going to be using it. 😉

  3. sheer_panic Says:

    That doesn’t actually work for my situation. I have a bunch of users who are going to use the workstations at unpredictable hours, and I’m being paid to make sure that the workstations are always available and perky. If nothing suitable exists, I’ll have to write something.

  4. sheer_panic Says:

    In answer to Clint’s suggestion: the problem isn’t CPU (which task priority handles), but disk bandwidth and disk seeks (which task priority does not). A disk seek is a pretty non-interruptible operation, so if the backup has started a disk seek and the user’s application needs the heads somewhere else, it has to wait for the backup’s seek to complete. Hence a workstation that is doing backups is going to have very poor disk performance even if the backup thread is set to minimum priority. (See the background-I/O-priority sketch after the responses.)

  5. Cygnostik Says:

    No offense, but that’s just goofy. There’s no reason you should have to write something to compensate for people not being able to deal with reality. You just can’t have your cake AND eat it too. If people need protection against data loss, you mirror drives. If they need protection against data loss due to user error, they have to deal with maintenance windows for backups (or take the hit of adjusting their workflow to make backups before changes).

    It’s rare that incremental backups, done frequently enough, would cause much performance loss – unless they’re too cheap to throw the right hardware at the task. (And if they can’t handle one big initial backup, then the data or productivity overall isn’t important enough to them.)

    I hate to be the one shoving users and companies into their place, but the simple fact is that no one wants to pay what it’s worth to do what’s difficult, and people sure as fuck don’t want to pay for the “impossible” (impractical to the point that no one has done it… for a reason). If they did, things would work differently and we’d all be living in floating cities in the clouds over the ocean.

    Call it customer control, call it user control, but there’s more to life than letting people get whatever they want (especially without paying for it), and nobody wants to pay what even simple tasks are really worth. Many extremely needy, media-heavy, high-traffic application sites that depend on being seen as the peak of performance and technological revolution deal with these issues in a reasonable way all the time.

    Also reminds me of a bigass client who called one day wondering why shit was kind of slow. They were happy to find that, after a few years, a disk had failed, but they hadn’t lost data; the slowness was just the replacement drive being repopulated.

    Write what you will, but you’ll always be happier just making people understand reality from the beginning. 😉

    Also, you’ll resent them less. 😉

    Often I like to relate it to Scotty: he multiplied the difficulty and ETA of everything by 4… In these cases (sure, not always, but often), if you just help them understand how things are done and why (what I like to call “lowering expectations”), then when you do the best that is reasonable they *love* you for it.

    Alternatives that could be suggested might involve Xen, or even UML, which you could use to creatively control even I/O – but STILL someone is going to take a performance hit. I usually like to argue both sides, but in this kind of situation it just doesn’t make sense to me.

  6. ClintJCL Says:

    I don’t really notice my computer slowing down too much when I move 4.5G files ONTO it, so I would think reading them for backups wouldn’t be THAT bad. But then again, my machine is pretty badass and only has 1 user 😀

  7. Sheer Says:

    Well, the first thing I would observe is that moving files onto it permits write caching. The second is that you probably aren’t doing read-intensive things at the same time. The third is that it’s possible the 4.5G files you are moving are rate-limited at the source or by the link. And the fourth is that moving a single 4.5G file isn’t nearly as long a process as backing up a 200–300G filesystem.

    Anyway, these are single-user workstations, but they are being used by extreme power users, and they were spec’d out by people who think that two channels a RAID array makes. I am trying to enlighten said people about the performance advantages of having 6- or 8-channel RAIDs in individual workstations (my workstation at home has a 6-channel RAID), but upgrading everyone’s individual workstations is going to be a slow process.

  8. Stonehog Says:

    Have you looked at Iron Mountain Connected Backup for PC? If I recall correctly, it’s a compressed trickle-backup subscription that works for mobile users as well as connected PCs.
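
A footnote on the disk-priority point in response 4: newer versions of Windows (Vista and later) added a per-process “background mode” that lowers I/O and memory priority along with CPU priority, which addresses exactly the gap thread priority leaves. It still can’t interrupt a seek already in flight; it just queues the backup’s requests behind the user’s. A minimal sketch, assuming a Vista-or-later box and a backup running as its own process:

/* Sketch: drop the current process into background mode so the kernel
 * de-prioritizes its disk I/O and memory use, not just its CPU time.
 * Requires Windows Vista or later. */
#define _WIN32_WINNT 0x0600
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Enter background mode; only valid for the current process. */
    if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN)) {
        fprintf(stderr, "background mode unavailable (error %lu)\n",
                (unsigned long)GetLastError());
        return 1;
    }

    /* ... do the disk-heavy backup work here ... */

    /* Leave background mode when the backup finishes. */
    SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
    return 0;
}

Combining this with the idle-detection loop sketched under the original question would let a backup run at full speed while the box is idle and fall back to background priority (or pause outright) the moment a user shows up.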
