Posted January 9th 18, 07:15 AM
Why is a smaller folder taking longer to be backed up than a larger one?

Yousuf Khan wrote:
Okay, so I do daily Macrium file backups of my entire User folder, with
monthly fulls at the beginning of each month. All except one specific
subfolder which takes longer than all of the rest of the User folder
combined to backup. So I've excluded it, and I back it up in a separate
backup job which only runs twice a month, because I can't afford the
time to run that daily.

The User folder stats (minus excluded folder) are this:

Total Number of Files: 342862
Total Size: 913.78 GB
Backup Completed Successfully in 05:02:40

Now the excluded folder stats are this:
Total Number of Files: 651658
Total Size: 1.57 GB
Backup Completed Successfully in 11:47:09

As you can see, an approximately 1 TB folder is fully backed up in a
mere 5 hours, whereas a puny 1.5 GB folder (almost 600x smaller!) takes
about 12 hours to back up fully. The only difference is that there are
about twice as many files in the smaller folder as in the larger one.
Both folders are on the exact same physical drive (HDD, not SSD). The
file system is NTFS. What could be causing such a drastic difference?

Yousuf Khan

For someone playing along at home, this is what you're looking at.
Dialog box and so on.
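
Before going further, it's worth turning the quoted figures into
per-file rates; they suggest the cost is per file, not per byte:

```python
# Files-per-second for each backup job, from the stats quoted above.
big_files, big_secs = 342862, 5 * 3600 + 2 * 60 + 40       # 913.78 GB in 05:02:40
small_files, small_secs = 651658, 11 * 3600 + 47 * 60 + 9  # 1.57 GB in 11:47:09

print(round(big_files / big_secs, 1))      # ≈ 18.9 files/s
print(round(small_files / small_secs, 1))  # ≈ 15.4 files/s
```

Both jobs process a similar number of files per second, so per-file
overhead (open, scan, record metadata) dominates, and the folder with
~650,000 files loses on file count alone.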

Now, does it use VSS for that? Probably. VSS freezes the volume before
the backup starts, so that files still changing at that moment don't
have their in-flight changes captured. The backup sees a copy of the
file system frozen in time, at the instant the VSS "freeze" completes.

Now, to examine files and their locations, should File Explorer
be involved? No.

OK, how about:

    WIN32_FIND_DATA ffd;
    HANDLE hFind;

    // Find the first file in the directory.
    hFind = FindFirstFile(szDir, &ffd);

    if (hFind != INVALID_HANDLE_VALUE) {
        // List all the files in the directory with some info about them.
        do {
            // ... use ffd.cFileName, ffd.nFileSizeLow, etc. ...
        } while (FindNextFile(hFind, &ffd) != 0);
        FindClose(hFind);
    }

I don't think that goes near File Explorer. It shouldn't
be using a shell to do things like that. It should be
making a file system call, at a guess. The file system call
will be against the shadow volume identifier.
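
As a portable sketch (not Macrium's actual code): enumerating a
directory is a plain file-system call, and `os.scandir` is roughly the
Python analogue of the FindFirstFile/FindNextFile loop.

```python
import os
import tempfile

def list_dir(path):
    """Return (name, size) for every regular file in 'path' — a plain
    file-system call, no shell or File Explorer involved."""
    with os.scandir(path) as it:
        return sorted((e.name, e.stat().st_size) for e in it if e.is_file())

# Tiny demo on a throwaway directory.
demo = tempfile.mkdtemp()
for i in range(3):
    with open(os.path.join(demo, f"f{i}.txt"), "w") as f:
        f.write("x" * (i + 1))
print(list_dir(demo))  # [('f0.txt', 1), ('f1.txt', 2), ('f2.txt', 3)]
```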

I've seen bugs in File Explorer, where at around 60,000 files
in a folder (individual frames from a movie), if you delete
a few files, Explorer rails the CPU on one core, for each
Explorer window where the bug is triggered. This prevents
Explorer from re-painting the window showing a few files
that you've deleted. One way to "escape" is to "drain" the
folder to zero files, delete the folder, and magically
Explorer recovers in the File Explorer window in question.

But that doesn't happen at the file system level.


Small files can be small enough to have their data payload
stored resident in the $MFT. This should not cause a problem.
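
A rough check from the quoted stats (note: the ~700-byte residency
threshold mentioned below is an approximation for 1 KB MFT file
records, not a measured value):

```python
# Average file size in the slow folder, from the stats quoted above.
total_bytes = 1.57 * 1024**3  # 1.57 GB
n_files = 651658
avg = total_bytes / n_files
print(round(avg))  # ≈ 2587 bytes
```

At roughly 2.6 KB average, many of these files are too large to be
$MFT-resident anyway (resident payloads top out around 700 bytes or
so), so residency is unlikely to be the deciding factor either way.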


Windows Defender can be "scanning the ****" out of any file
your programs happen to touch. Doesn't matter that the files
are "just going into a backup". There is an Admin Powershell command
to turn off Windows Defender real time.

Set-MpPreference -DisableRealtimeMonitoring 1

It will probably keep that setting, until your next reboot,
or until you set it to zero again.


To establish baseline performance, you can use Robocopy to copy the
files to a RAMdisk. That should expose any source HDD performance
problems.
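
Robocopy is the Windows tool for that; as a portable sketch of the same
baseline measurement (timing a raw tree copy with no backup software
involved — paths here are placeholders):

```python
import os
import shutil
import tempfile
import time

def timed_copy(src, dst):
    """Copy a file tree and return (file_count, seconds_elapsed)."""
    t0 = time.perf_counter()
    shutil.copytree(src, dst)
    count = sum(len(files) for _, _, files in os.walk(dst))
    return count, time.perf_counter() - t0

# Demo: many tiny files, the pathological case in question.
base = tempfile.mkdtemp()
src = os.path.join(base, "src")
os.makedirs(src)
for i in range(200):
    with open(os.path.join(src, f"f{i}.txt"), "w") as f:
        f.write("x")
count, secs = timed_copy(src, os.path.join(base, "dst"))
print(count, "files copied")
```

Comparing that raw rate against the backup's files-per-second figure
shows how much time the backup software itself is adding.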

And in the "not a fair fight" category, you can also try 7-Zip
in "Store" mode, which archives a file tree to another storage
device without compressing it. That should finish in much less
time than the Macrium run.
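
In 7-Zip itself that's compression level "Store" (-mx0 on the command
line). The same idea, sketched with Python's zipfile module, where
ZIP_STORED means "archive without compressing":

```python
import os
import tempfile
import zipfile

tmp = tempfile.mkdtemp()
# A few sample files to archive.
for i in range(3):
    with open(os.path.join(tmp, f"doc{i}.txt"), "w") as f:
        f.write("hello " * 10)

# ZIP_STORED = no compression, like 7-Zip's Store mode.
arc = os.path.join(tmp, "tree.zip")
with zipfile.ZipFile(arc, "w", compression=zipfile.ZIP_STORED) as z:
    for name in sorted(os.listdir(tmp)):
        if name.endswith(".txt"):
            z.write(os.path.join(tmp, name), arcname=name)

with zipfile.ZipFile(arc) as z:
    types = [info.compress_type for info in z.infolist()]
print(types)  # [0, 0, 0] — every entry stored, none deflated
```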

It does take Macrium about a minute to generate an index at
the end of the run, during which the disk light stops
flashing. It then writes out the index before completing
the backup.


For the HDD, run HDTune 2.55 and benchmark the drive, to make
sure there aren't any "bad patches" evident. If the drive
has a lot of reallocated sectors, that may account for a bit
of the trouble.