File-rewriting feature needed?

lauritz99

New Member
Eraser, as far as I can see, only overwrites cluster tips and free space. (If it does more than that, please ignore this posting. [:p])

Also, as stated in the related papers, a one-pass overwrite may not be enough to stop recovery by physical examination of the drive.

If I delete a file the old-fashioned way and my system then overwrites the sector with a new file, that is exactly a one-pass overwrite of the old file. However, since the sector is now in use, it will not be touched by Eraser, and so the old file remains vulnerable to physical recovery.

To overcome this vulnerability, you might take a backup of your HD, use the Nuke-disk in DOS to erase everything, put the files back, and from that day on always use "Erase Recycle Bin" instead of emptying it.
But alas, something still slips through. Files may be deleted by a program (the Internet cache, for instance) without passing through the Recycle Bin, or files may simply shrink in size, leaving behind unused sectors that get occupied by a new file before a scheduled Eraser run has been performed.

Given that scheduling a Nuke-disk wipeout and reinstallation on a daily basis is unappealing from a practical viewpoint, maybe there's a need for a file-rewrite feature? This weakness might be a small one, but since most other erasing programs don't seem to do this either, Eraser has a chance to get ahead of the competition here [8D]

Possible method:
When cleaning file F, Eraser creates a swap file into which F is copied. The contents of F are then overwritten with a given algorithm before F is written back in its old place. Note that simply writing F over the same sectors several times doesn't give the same security (no randomness). The files to perform this on could be selected by their "last modified" property, making this a relatively quick operation for scheduling. A rough sketch follows below.
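Something like this, as a Python sketch purely to illustrate the idea (the function name, the pass count, the random data, and the temporary-file name are placeholders of mine, not anything Eraser actually does):

[code]
import os
import shutil

def rewrite_in_place(path, passes=3, chunk=64 * 1024):
    # Copy F out to a "swap file" first, so its contents survive the overwrite.
    backup = path + ".erasertmp"
    shutil.copy2(path, backup)

    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))   # random data, not a copy of F itself
                remaining -= n
            f.flush()
            os.fsync(f.fileno())         # push each pass down to the disk

    shutil.copy2(backup, path)           # write F back in its old place
    os.remove(backup)                    # the backup itself would need erasing too, not just deleting
[/code]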

You'll probably note weaknesses in this method, particularly how a file can get "lost in limbo" if the machine crashes during the operation, but I'm just tossing this in as a proof of concept.
Maybe you could make the system move the file to an empty, previously cleaned sector? Such moving around happens all the time, sector by sector, during defragmentation.
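In the same spirit, here is a user-level approximation of that idea, again just a sketch with made-up names. It is not true cluster-level relocation the way a defragmenter does it (that would need file-system support, e.g. the Windows defragmentation interface); it simply lets the allocator place a copy in free clusters, ideally ones a free-space wipe has just cleaned, and then scrubs the clusters the original occupied:

[code]
import os
import shutil

def move_to_clean_space(path, chunk=64 * 1024):
    # Copy lands wherever the file system finds free clusters.
    fresh = path + ".moved"
    shutil.copy2(path, fresh)

    size = os.path.getsize(path)
    with open(path, "r+b") as f:         # scrub the clusters the original occupied
        f.seek(0)
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(os.urandom(n))
            remaining -= n
        f.flush()
        os.fsync(f.fileno())

    os.remove(path)                      # drop the scrubbed original...
    os.replace(fresh, path)              # ...and give the copy its old name back
[/code]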


Your devoted Eraser-user
Lauritz
 
quote:If I delete a file the old-fashioned way ...

Why wouldn't you use Eraser to delete the file, with an erasing algorithm that overwrites it in more than one pass? Or did I completely misunderstand your post?
 
jsb1: My point came further down in the text [:)]

As I said, Eraser doesn't take care of erasing the files my browser deletes from its Internet cache, nor the unused sectors left behind when a file shrinks in size.
Not, that is, until Eraser wipes the free space of your HD. But in the time between the altering of file A and the running of Eraser, file B may already have occupied the sectors file A left behind, thus shielding them from Eraser.
 
To clarify my point further, here's a practical, if somewhat long, example of data evading Eraser:

John starts up his computer at work Monday morning. After running Eraser and making sure that all firewalls, virus scanners and other privacy protection measures are in working order, he opens Outlook Express to check his mail. One of the e-mail messages attracts his attention. Our surveillance camera sees John going wide-eyed, doing some cut and paste, and inserting a diskette.
After John leaves work, our hard-working anti-terrorist team apprehends him and confiscates a diskette containing an encrypted file. The HD of John's machine is also confiscated.
John had deleted the e-mail from his Outlook inbox and run Eraser during the lunch break. Can our team of security experts still retrieve the e-mail?

The answer is yes, and this is how it's done: when the e-mail was received, the inbox file grew, occupying the first available sector on the hard disk. After deleting the e-mail from his inbox, John went on to surf the web, reading an article by his favourite Wired columnist. While he surfs, Outlook compacts the inbox, physically removing the already-deleted e-mail (most mail readers work this way). That sector once again becomes the first available one on the hard disk and is quickly occupied by an Internet Explorer cache file as John surfs on to Slashdot.
The sector that once contained the e-mail has now had a one-pass overwrite and will not be written to again until the cache file is removed. Due to increased funding, our anti-terrorism laboratory manages to retrieve the e-mail using physical examination of the disk, as outlined in the papers on this site.

John is released on bail two weeks later, after cryptography experts verify that the pictures his girlfriend sent him did not in fact contain steganographically concealed messages. He is, however, fined for software piracy uncovered during the process and loses his job for spending his working hours on private pursuits.

You might claim that John could still have protected himself from the retrieval by compacting the mailbox immediately after deleting the e-mail and then running Eraser without starting any other program.
This, however, assumes that John has complete control over when and where his operating system writes to the disk from the moment he deletes the e-mail until Eraser is done, 2-3 hours later. In fact, just starting Eraser could trigger a write to the Windows pagefile.
John won't get much work done that day.
 
quote:Originally posted by lauritz99
Given that scheduling a Nuke-disk wipeout and reinstallation on a daily basis is unappealing from a practical viewpoint
Yes, it is, and that's why the correct answer to the problem is to use an encrypted disk partition. Sorry if I repeat myself.

quote:Maybe you could make the system move the file to an empty, previously cleaned sector? Such moving around happens all the time, sector by sector, during defragmentation.
Assuming that the program updates the file's cluster chain accordingly when moving clusters, this would definitely be a better solution than your first suggestion. I actually thought of a similar algorithm a couple of years ago when I first considered adding this feature.

However, considering the amount of work required to make this work on various file systems, and the actual usefulness of the feature, I decided against implementing it.
 