24/7 working programs, like syncservers?

Ghost62

New Member
Hi, I have a question: if I schedule a cleanup of the unused space (or run one by hand) on a machine that is running a permanent syncserver program, does this affect the files handled by that program?

Basically, the syncserver is simply a program that runs permanently in the background (like, for example, some P2P programs that work 24/7). It continuously checks for the connection of a group of users and keeps their workgroup files synchronized. To do this, the program constantly reads and writes the contents of the archive disk, saving files on the fly; and because the disk is NTFS, this may also involve "sparse" file handling and "reserved space" for files (when the connection is slow and a file is too big to transfer in a single step, or when the connection fails, the program reserves the space for the file in advance, so that it can be transferred in multiple parts at different times).

I did not make that server; I only host one of its data points, so I do not know exactly how it manages the files. I need to know whether, for any reason, a "clean unused space" operation can delete or corrupt those partial or incomplete files, whether that operation can run at the same time as the syncserver, and whether the fact that the syncserver is continuously and randomly writing to the disk can cause problems.

Thanks for any information, and best regards.
 
In considering the answer to this question, you need to understand how erasing free space works. I think you will find that there is good news and bad news.

A free space erase is, in fact, three separate operations carried out sequentially. First (and optionally), cluster tips (the unused space between the end of the file data and the end of the final, partially used, block of space allocated to each file) are overwritten, for those files which the program has permission to access. Then, randomly named files containing (in the default case) random data are written to the drive until it is full, and these files are deleted. Finally, MFT entries marked as unused or belonging to deleted files are overwritten.
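Purely as an illustration of the second (and dominant) phase — this is not Eraser's actual code, and the directory and sizes here are arbitrary stand-ins for "write until the OS reports the drive is full":

```python
import os
import tempfile

def fill_phase_sketch(target_dir, chunk=1024 * 1024, limit=8 * 1024 * 1024):
    """Sketch of phase 2: write randomly named files full of random data
    until the quota ('limit', standing in for a full drive) is reached,
    then delete them, leaving the previously free clusters overwritten."""
    paths, total = [], 0
    while total < limit:
        fd, path = tempfile.mkstemp(dir=target_dir)  # randomly named file
        with os.fdopen(fd, "wb") as f:
            f.write(os.urandom(chunk))               # random data (the default case)
        paths.append(path)
        total += chunk
    for path in paths:                               # delete the filler files
        os.remove(path)
    return total

with tempfile.TemporaryDirectory() as d:
    overwritten = fill_phase_sketch(d)
print(overwritten)  # 8388608 bytes of former free space overwritten
```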

These operations, particularly the second, need, as you would expect, as much hard drive bandwidth as the machine can provide; free space erasing is a slow process and speed is limited primarily by hard drive performance, so the standard advice is to have as few programs as possible running in the background. In particular, for reasons which I hope are obvious, it is usually a good idea to pause antivirus programs while a free space erase is running.

The good news about Eraser from your point of view is that, apart from cluster tip erasing which can be disabled, free space erasing does not touch any space not marked as free. Also, Eraser knows about sparse files and leaves them well alone. If the process of reserving space for files is, in effect, itself a file creation process, Eraser would not touch any space allocated to the file.

The bad news is (1) erasing free space can (obviously) compromise disk performance for applications using the target drive and (2) results may be unpredictable if files are written to the drive at the point when the free space erase has filled or almost filled the target drive. This assumes that the target drive (including such things as RAID arrays that the OS sees as single drives) is where the sync server is storing its archive. Erasing free space on other drives on the same machine which are not used by the sync server would not, I guess, give the server any problems.

You will gather that Eraser was not designed for the use you describe. While I cannot be sure, I think that running a free space erase on a drive on which a sync server is actively running is likely to be problematic and, in the worst case, could crash the machine with a full drive. If the sync server is offline, doing a free space erase while it is not active should not create problems.

I hope this helps.

David
 
Thanks, I think I've understood it all (I hope :))

Anyway, the syncserver is not a true "full server" unit; it is a program running on a PC permanently connected to the internet, which uses a separate disk to store the workgroup's files and keeps them synchronized each time one of the users connects, or modifies and saves part of the work (so everyone connected has the latest versions of all the files). It is not writing full-time, but the writes are fairly frequent, more so when it is sending or receiving files, of course. So, from what you say, it is probably always better to stop the syncserver program before doing a cleanup on its disk.

I was hoping it could be done on the fly, as with the defrag program, because our workgroup currently has only 2 machines with the syncserver installed that can stay up 24/7; every time one of the 2 stops, the other gets a bit overloaded and everything slows down. But if it cannot be done, well, patience.

And yes, I have already used it on the other disks where the syncserver does not save its files (system, storage, and backup disks), and none of them had problems performing the cleanup, nor gave any problem to the syncserver, except a little slowdown in saving new data (but I think this is normal: working on another disk slows the controller's response a bit). The cleanup worked well too.
 
I understand. Unless Joel thinks otherwise, my advice would be to use only one of the syncserver machines while you do a free space erase on the other. Expect the erase to run for a long time (typically many hours, but it will depend on the speed of the drive and the amount of free space to be erased). If the drive is more or less defragmented, that should help, in my view.

David
 
The way you describe your server program suggests that it preemptively allocates extents on the disk ("preallocation"). How does it accomplish that? If it uses the Windows APIs, then it should be fine. But if it is just keeping tabs within the program and nowhere else, you'll run into problems (not just with Eraser).
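To illustrate the difference in miniature: going through the OS means the reserved space is visible to every other program, because the filesystem itself records the file's full size. A minimal, platform-neutral sketch (the size is arbitrary; real Windows code would typically use SetEndOfFile, optionally after marking the file sparse with FSCTL_SET_SPARSE):

```python
import os
import tempfile

def preallocate(path, size):
    """Reserve space through the filesystem by extending the file to its
    final size before any data arrives.  On sparse-aware filesystems the
    unwritten tail need not consume real disk blocks yet."""
    with open(path, "wb") as f:
        f.truncate(size)

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
preallocate(tmp.name, 10 * 1024 * 1024)
reserved_size = os.path.getsize(tmp.name)   # the OS now reports the full size
print(reserved_size)  # 10485760
os.remove(tmp.name)
```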

My main concern here is data integrity -- slowdowns should not be that big of a problem, since you are sending the data over the internet, which is inherently slow.
 
Hi, and thanks again for the informations.

I don't know exactly how the program manages the reserved space; I did not build it personally (in the workgroup I am not a programmer: my main roles are hardware tech, prototype planning/building/testing, and sometimes 3D modeling, and my knowledge of programming languages is very limited). I just host one of the 2 syncservers on my machine, because I can keep it working and connected 24/7 and had room to add a 4th hard disk inside it.

Anyway, I asked the person who built it, and he said that, to save time, he used part of the "core" source of a P2P program, EmuleMorph, and modified it to convert it from a P2P application into a private synchronizer tool, adding encryption, safety, and auto-synchronization features. But, he said, he has not changed the way it stores files and reserves space, other than adding an automatic backup copy and CRC checks (as he told me, the program now makes 2 copies of each file in 2 different folders, and cross-checks their CRCs each time it needs to send a file to a user, for extra safety, but he has not touched any of the code that writes the files and reserves "sparse" space for incomplete files, so writing and deletion of files are performed in the usual Windows ways, whatever that exactly means :P ). He added that these routines are probably already well known to programmers, because the source is public. Can this help in knowing how it works?
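The CRC cross-check he describes could look something like this (an illustrative sketch only — the real program's code is unknown, and the use of CRC-32 via zlib is an assumption):

```python
import os
import tempfile
import zlib

def crc32_of(path, chunk=1 << 16):
    """Stream the file through zlib.crc32 so large files need not fit in memory."""
    crc = 0
    with open(path, "rb") as f:
        while block := f.read(chunk):
            crc = zlib.crc32(block, crc)
    return crc & 0xFFFFFFFF

def copies_match(primary, backup):
    """Compare the two stored copies before sending either one to a user."""
    return crc32_of(primary) == crc32_of(backup)

with tempfile.TemporaryDirectory() as d:
    a, b = os.path.join(d, "copy1.bin"), os.path.join(d, "copy2.bin")
    data = os.urandom(1 << 20)
    for p in (a, b):
        with open(p, "wb") as f:
            f.write(data)
    same = copies_match(a, b)
print(same)  # True: identical copies, safe to send
```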
 
Then in all likelihood it should be fine. Test first, though.
 
Thanks.

I checked with the other users and can do it this way: next Sunday I can ask the main programmer to disconnect his own server (for safety, because they interact), make a backup copy of the files on another disk, and then start the syncserver, asking some of the users to leave their machines connected (even if they are not working, their syncserver clients check for updated files every 30 minutes, so my server will still receive requests). Then I will launch the wipe free space cycle and see what happens.

If this gives any problem, or if it works well, I will let you know after the try.
 
Hi, sorry for double posting.

Today, when I tried Eraser launched manually on some large files (single pass by default; old cartoons, usually all over 300 MB, some around 700 MB), it always gave me the same error for each file.

The file name disappears from the folder immediately, the same as a normal Windows deletion, but the disk does not work, and the message in the lower balloon always says "completed with errors". Then, when I open the Eraser interface, I find all the operations in the "erase schedule" window, marked as not done, and when I right-click on them and choose to run them immediately, they just disappear (no writing operations on the disk or anything similar; I tried big files intentionally so I could see whether the disk was working, because overwriting a few hundred MB takes time).

I tried right-clicking on them to show the logs too, and all of them are marked with the same error; it says that the file(s) "could not be erased because the file was either compressed, encrypted or a sparse file". By the way, this has happened so far with all the files (I tried 20 of them, and still have more for other tests), regardless of whether I chose them one by one or in groups.

Now, I know that I have no compression enabled on my disks (only indexing from the OS), and that the files were not encrypted; I was just testing the latest version I have installed (6.0.9.2343) on some old plain files that had to be deleted anyway to free space. Also, I am using XP Pro with all updates installed, and am logged in as administrator. Could it be that the files were saved as "sparse" by the OS itself? But in that case, does this mean that any file the OS writes as "sparse" cannot be deleted using Eraser, even when launching the operation manually?

Let me know if there are other tests I can do to help with this.
 
Quote from the Eraser manual:
Because encrypted files, compressed files and sparse files behave differently when applying the standard erasure procedure, Eraser will not erase such files when they are encountered and will instead log an error.

David
 
Thanks.

Yes, I saw the exceptions in the user guide, but it does not refer to "sparse" files, only to encrypted and compressed ones. So this happens for sparse files too, and also for direct erasure operations (I mean, when I select a file and tell the program to erase it directly).

Anyway, it is strange that it sees the files as "sparse" (I am sure they are neither encrypted nor compressed), because they are just old files archived some time ago in a folder, which I am eliminating to make space. Is it normal for the OS to create files as "sparse" without any specific instruction from the user? If so, Windows works in a very strange way (ah, well, any Windows user thinks it works in strange ways, after all :P :D).

The only other possibility that comes to mind is the defrag program (a third-party one, not the Windows one); maybe it stored the files in "sparse" mode? I really don't know.

By the way, could this cause problems for Eraser users? I mean, the impossibility of erasing, even intentionally, files that the OS creates as "sparse"? Or maybe there is some setting to tell the program to ignore the fact that the files are "sparse" and overwrite them manually anyway? Just an idea.
 
'Sparse' is a file attribute that programs can set. When it is set, the NTFS file system only allocates space for non-zero data, thereby (in effect) stripping out the zeroes from the stored data and restoring them only when the file is read. As many files have large blocks of zeroes, this obviously saves significant amounts of space. However, there is then a degree of disconnect between what the MFT describes and what is present on the drive, so Eraser cannot safely handle sparse files and leaves them alone.
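A small demonstration of the idea (a sketch, not NTFS-specific: on most Unix filesystems holes are implicit, whereas on NTFS a program must first set the sparse attribute, via FSCTL_SET_SPARSE, before the zero run is stored sparsely):

```python
import os
import tempfile

# Create a file whose first 8 MiB are never written: seek past the end
# and write a single byte.  The "hole" is a run of zeroes that a
# sparse-aware filesystem does not have to allocate on disk.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.seek(8 * 1024 * 1024)
    f.write(b"\x01")

st_res = os.stat(path)
logical_size = st_res.st_size                      # what the MFT/inode reports
allocated = getattr(st_res, "st_blocks", 0) * 512  # Unix: bytes really allocated
print(logical_size)  # 8388609
os.remove(path)
```

The gap between `logical_size` and `allocated` is exactly the disconnect described above: the directory entry claims more data than the drive actually holds.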

My guess is that whatever software created the large files marked them as sparse; with large files, it might very well make sense for it to do so. The only way to erase them is to delete them and then erase free space.

David
 
From my observation, the most common programs to create sparse files are BitTorrent clients - with good reason, since the data they are downloading off the network may not be immediately available, and it can be prudent to allocate space only when it is needed.

Unfortunately moving and copying the file around does not seem to get rid of the Sparse attribute.
 
Thanks.

No, I'm not using BitTorrent or similar software. Also, the files are not on the syncserver's dedicated disk (which uses sparse files), but only on the other 3 disks. I suppose the defrag program may be the culprit, but I'm not sure; also, I suppose there is no way to "manually" remove the "sparse" attribute, as there is for "hidden" and similar ones (my XP Pro is set to show all files and folders, but I have never seen "sparse" as an attribute in any properties window, so I suppose only the OS can manage this; and searching the net, at least so far, I have found no tools to manage it).
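For what it's worth, even though Explorer's properties window does not show it, the attribute can be inspected programmatically. A best-effort sketch (on Windows it reads the FILE_ATTRIBUTE_SPARSE_FILE flag; elsewhere it falls back to comparing the logical size with the allocated blocks):

```python
import os
import stat
import tempfile

def is_sparse(path):
    """Best-effort sparse check: on Windows consult the attribute flags;
    on Unix, call a file sparse when it occupies fewer blocks than its
    logical size would require."""
    st_res = os.stat(path)
    attrs = getattr(st_res, "st_file_attributes", None)
    if attrs is not None:  # Windows
        return bool(attrs & stat.FILE_ATTRIBUTE_SPARSE_FILE)
    blocks = getattr(st_res, "st_blocks", 0)  # 512-byte units on Unix
    return blocks * 512 < st_res.st_size

# An ordinary, fully written file should not be reported as sparse.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"x" * 4096)
    f.flush()
    os.fsync(f.fileno())  # ensure blocks are actually allocated before stat
result = is_sparse(path)
print(result)  # False
os.remove(path)
```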

Anyway, if normally deleting them and then performing an unused space cleanup is enough, I think this is still OK. It is probably also better to do it after a defrag (I mean, delete files > defrag disk > clean unused space; this should also be more efficient, right?)

Well, anyway, if you need some deletion tests on "sparse" files, let me know. I just discovered that both my data and backup disks are full of "sparse" files that I can eliminate (around 800 files, approximately 350 GB in total, so there is no lack of material for "evil" and "destructive" tests :D :P )
 
No, defragmenters should be incapable of producing sparse files. It is an attribute that is set when the file is first created.
 
Then I have no ideas, other than, maybe, that the original files were downloaded from the net using some torrent or P2P program. But is "sparse" still set on a different machine?

I mean: I received all those files from a friend who has a big collection of them; he probably got them all from the net using P2P programs. But those files were first copied from his PC to a USB disk, and then from the USB disk to my PC. Do you think that, even after passing through 2 copy operations (which, in theory, should "compact" the files), they are still marked as "sparse" by the OS?

If not this, then I really have no other ideas about why they are marked this way :shock:
 
I think Joel's point is precisely that the sparse attribute will survive many copying operations. Also, if any of the data streams within the file are sparse, the system will regard the whole file as sparse. And, as far as I know, compression/decompression algorithms will not change this.

David
 
Yup, David got my point right.
 
Hi, thanks, I understand now (sorry, my English is self-taught, so it's not too good). So the residual "sparse" files survived multiple copy operations and, I suppose, there is no alternative way to eliminate the attribute (does Microsoft say nothing about this?)

In the meantime, I made a test on a different disk (archive disk E), with 32 GB of free space, launching the "Erase unused space" task manually, after a manual "standard" deletion of some files that had not been erased on the fly. Most of them were apparently overwritten and eliminated successfully (at least, FileScavenger does not find them; I have not yet tried more powerful data recovery software, but so far FileScavenger has always worked well, so I think this is a good result). Still, the task was marked as "completed with errors".

It took approximately 3 hours, and the errors in the log were the following:

178 files had the following message as information (in black): "did not have its cluster tips erased because it is compressed, encrypted or a sparse file". These were not deleted files and are still on the disk, so this is probably normal, as they are part of the "sparse" files.

One file had this as an error (in red): "did not have its cluster tips erased because of the following error: The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters". I checked; it was an old file with a genuinely overlong name, so no problem.

One file had this as an error (in red): "did not have its cluster tips erased because of the following error: Accesso negato. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))" ("Accesso negato" is Italian for "access denied"). I discovered that this file had been deleted by the antivirus as containing a TrojanDropper script, and it is no longer present in the indicated folder. Is this normal?

12 files had this as an error (in red): "did not have its cluster tips erased. The error returned was: Risorse di sistema insufficienti per completare il servizio richiesto" (Italian for "insufficient system resources to complete the requested service"). These were not previously deleted files, and I don't know why the error occurred; I was working normally with the PC the whole time, and it gave me no errors about system resources, no hangups, and no crashes.

23 files had this as an error (in red): "did not have its cluster tips erased. The error returned was: Impossibile accedere al file. Il file è utilizzato da un altro processo. (Exception from HRESULT: 0x80070020)" (Italian for "cannot access the file; the file is in use by another process"). I checked; they were all video files that I had viewed in the previous days using Media Player. I suspect Media Player keeps viewed files marked as "in use" for a long time after you stop using it, because I have already had similar problems trying to delete video files after viewing them: the message was always "impossible to delete, used by another program", and Unlocker always had to release them from Media Player before I could manually delete them, but I'm not sure why or how it acts this way.

One error (in red) was: "Files in E:\System Volume Information did not have its cluster tips erased because of the following error: Access to the path 'E:\System Volume Information' is denied". I suppose this is due to the OS.
 
Ghost62 said:
One file had this as an error (in red): "did not have its cluster tips erased because of the following error: Accesso negato. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))" ("Accesso negato" is Italian for "access denied"). I discovered that this file had been deleted by the antivirus as containing a TrojanDropper script, and it is no longer present in the indicated folder. Is this normal?
Your antivirus could just be "quarantining" the file. Check your AV settings/quarantine.

Ghost62 said:
12 files had this as an error (in red): "did not have its cluster tips erased. The error returned was: Risorse di sistema insufficienti per completare il servizio richiesto" (Italian for "insufficient system resources to complete the requested service"). These were not previously deleted files, and I don't know why the error occurred.
This is weird. What file system is this drive?

Ghost62 said:
Unlocker always had to release them from Media Player before I could manually delete them, but I'm not sure why or how it acts this way
Eraser has its own unlocker. It should be able to release the files. Did you enable it?

The rest of the messages are normal.
 