Eraser vs. Encase

Thread starter: Anonymous (Guest)
I was just wondering whether Eraser reliably defeats EnCase, because I have heard that EnCase often still finds "scrubbed" files even after they have been erased with a scrubber.

So basically, what I'm asking is: if I use Eraser with, say, 20 pseudorandom passes on unused disk space or private files, could EnCase trace them?

I'm not talking about the "trail" of the overwriting pattern that EnCase can see (I don't mind that), but will the file be recoverable or visible at all?
 
Encase doesn't have any magic methods for finding erased files. If a file was merely deleted, then it can generally recover it if it wasn't overwritten. It cannot put together a fragmented file that was deleted, though it may recover a fragment of it. If a file is erased using Eraser, then Encase faces the same difficulty that any automatic or manual method would face - the data isn't there anymore. I have placed a number of files on a drive, erased the files, imaged the drive forensically, and examined the image using a variety of tools. They all found nothing.
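The overwrite-then-delete idea the poster describes can be sketched in a few lines. This is a hypothetical, minimal illustration, not Eraser's actual implementation; real erasers also rename the file, handle MFT-resident data, and scrub slack space, none of which this toy version does.

```python
import os
import secrets

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with pseudorandom data, then delete it.

    A toy sketch of overwrite-then-delete; it ignores the filesystem-level
    caveats (journaling, MFT-resident files, slack space) discussed
    elsewhere in this thread.
    """
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # one pseudorandom pass
            f.flush()
            os.fsync(f.fileno())  # force this pass out to the device
    os.remove(path)
```

After this runs, a recovery tool scanning those sectors sees only the final pseudorandom pass, which is why the forensic tools in the experiment above found nothing.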

However, there are often other trails that are left behind as evidence, even if the end product is erased.
 
One pass, even of all zeros, defeats EnCase. EnCase was primarily designed to find the 'ghosts' of potentially deleted files, and to recover them.

Windows stores copies of files you view or use, and EnCase was designed with that in mind, as well as being able to recover 'deleted' files.
 
consider this

Anybody using EnCase regularly is probably an LEA (i.e. a Law Enforcement Agency). If "they" want to find some sensitive info on your HD, then they will...no matter how many times you run Eraser. Obviously, LEAs have methods of retrieving info from HDs that are not available to the general public (that whole 9/11 stuff and all) that are beyond the scope of EnCase.

Using Eraser, even at 20 passes with a PRNG, will not defeat the most sophisticated forensic techniques. Sorry.

My suggestion is that you use Darik's Boot and Nuke (DBAN) in conjunction with Eraser. Wipe your HD with Eraser first, then use DBAN with at least 12 passes of PRNG.

Beyond that, only physical destruction of your HD will suffice (e.g. a sledgehammer and a gallon of gasoline).
 
"Obviously"...
To merely state that because someone is in gov't they have techno-magical powers to recover data that has been overwritten is hogwash. Offer some evidence. Demonstrate it or show proof of someone else demonstrating it.

Here's how it works: say a fresh sector has text data written to it. Then different text data is written to it. A third time, yet another piece of text data is written to it. Finally, it is overwritten with garbage a few times. When you try to recover the data, what will be recovered? Which file's info will it find? Even if random bits were possibly located, there are no complete bytes left to reassemble what was there into anything recognizable. If simple text can't be recovered, then a binary is out of the question. That is reality.
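The layered-writes argument can be simulated at the logical level. This is a sketch with a hypothetical in-memory sector model; it deliberately ignores the analog track-edge residue discussed later in the thread.

```python
import secrets

SECTOR_SIZE = 512

def write_sector(sector: bytearray, data: bytes) -> None:
    """Overwrite the whole sector; at the logical level a sector
    holds exactly one generation of data."""
    sector[:] = data.ljust(SECTOR_SIZE, b"\x00")[:SECTOR_SIZE]

sector = bytearray(SECTOR_SIZE)
for generation in (b"first text", b"second text", b"third text"):
    write_sector(sector, generation)
write_sector(sector, secrets.token_bytes(SECTOR_SIZE))  # garbage pass

# Only the final garbage pass remains; no earlier generation survives.
survivors = [g for g in (b"first text", b"second text", b"third text")
             if g in bytes(sector)]
print(survivors)  # → []
```

The whole recoverability debate is about whether the *physical* medium retains more than this logical model admits, which is exactly the track-bleed question raised further down.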

That said, there are plenty of other places (log files, registry, etc) that evidence resides that can all point to a particular computer or IP address. So going toe to toe with the feds is asking for serious trouble.
 
I stand corrected

To avoid Flame Wars: you are absolutely correct Gralfus. There is no solid evidence supporting my (and many others') belief that there exists a superior forensic tool that can recover data no matter how many times it has been overwritten and by whatever sort of wiping scheme/method.

However, the roundabout point I was trying to make is that not all secure-deletion programs are equally effective, and some don't work at all. As well, the total effectiveness of the various wiping patterns/schemes is really a guesstimate, considering there are so many variables to factor in. Still, there is a fervent belief that PRNG data repeated for "x" number of passes is the most effective way to overwrite data on a modern HD.

In the end, my whole point was this: it doesn't hurt anything to "rinse and repeat." All you lose is time.
 
I find this an interesting line of discourse... A little background on me (not that you care, or should): I hate mystery. I don't want to just watch an illusionist; I want to know how he does it. I don't like claims of ghost sightings; I want to know what was actually observed, and/or what motivated the people involved to concoct the hoax. You get the idea.

What am I talking about here? I'm getting at the fact that it bothers me not knowing what government agencies are capable of with regard to forensic data recovery. Nothing is "obvious" here... Anyone who knows what the NSA is capable of surely isn't going to go around explaining it. All anyone can do is guess. It's frustrating, and it pisses me off.

I work for a guy who writes software that performs secure deletion. He knows a hell of a lot more than I do about hardware and the whole nine yards, but even he doesn't know what the government is capable of.

Basically, the head of the hard drive is never perfectly aligned over the track; it's always a little to one side or the other--the writing actually "bleeds" over along the edge of the track. This is why overwriting isn't so simple... You are always overwriting the track itself, but you may not touch the edges of the track well enough to cover all traces of what was there.

Modern, high-density hard drives have much-tighter tolerances than they did in the past, and so track bleed isn't as much of an issue as it once was--but it still happens.

What does this mean to you and me? I'm not sure. I am pretty sure that overwriting with a few passes of pseudorandom data is about the best you can do. (And by "a few passes", I don't mean 30 or 40; more like 3 or 6.) It is hard for me to imagine that anyone could possibly recover much data from a drive that was treated in this manner. They can probably get some of it back--maybe enough for their purposes--but there's no way they're getting all of it back.

I'm not here to claim that I have the answers... I'm in the dark like almost everyone else.
 
I think that with modern drives, 3 overwrites is the minimum needed to defeat MFM (magnetic force microscopy), MFSTM (magnetic force scanning tunneling microscopy) and SPM (scanning probe microscopy).

Perhaps this could be calculated somehow?

What about USB Flash Drives? Is one overwrite enough?
 
Anonymous said:
I think that with modern drives, 3 overwrites is the minimum needed to defeat
But of course, you're just guessing, like the rest of us.
 
Another take

I thought I'd comment on this, even though the thread obviously is a tad old.

First of all, let me be clear that I'm not a computer forensics expert in any way. I do have a background in electrical engineering specializing in semiconductor fabrication, which includes the use of fine/microscopic inspection tools. I'm also a federal employee.

I've been doing a fair amount of online research regarding computer/disk drive forensics. And I'm of the firm belief that the vast majority of people, even those who get into trouble that results in confiscation and examination of a computer, have essentially zero chance of facing anything more than the likes of a Winhex/Encase probe.

I always find it remarkable how much mythology there is in the general public regarding what goes on in the government. Yeah, when it comes right down to it, the government can bring huge resources to bear on a problem. But in practice, making something like that happen can be extremely difficult. Case in point: right now, I'm working hard to get a couple of old (Pentium III 700 MHz) computers upgraded to ~$4300 Dell workstations so that modern Autocad can be run in a reasonable fashion. The request has been turned down twice, even though the upgrades are truly needed.

I've had computer administrators who used to work for the U.S. Marshals tell me they had to scrounge for hand-me-down Pentium II computers to replace Pentiums back when Pentium III's were the norm. Where was the money going? The basics: armament, etc.

The bottom line is that while departments CAN get ultramodern, top-of-the-line equipment costing tens of thousands of dollars (or more), it's not easy. You have to be able to justify it up one side and down the other, and the budget has to be there for it.

What's more, these days government agencies are going with commercial off-the-shelf parts (COTS) as much as possible, not just to encourage private industry, but because that's where the technology is.

Now, from the little bit of research I've done on the subject, magnetic-force microscopy seems to be about the cream of the crop when it comes to fine imaging of disk drive information:

http://www.runtime.org/recoverability.htm

That sounds about right, from what I know of testing techniques. Given the fact that MFM is the equipment of choice by manufacturers to test hard drives commercially AND has been shown to be able to detect old, overwritten data, I think it's a safe bet that MFM is just about the state-of-the-art, inside government or outside. It's obviously a very sensitive technique.

http://www.swissprobe.com/hr_mfm.html

OK, so what are the chances that anyone will ever face an MFM-powered examination of his or her hard drive? Well, look at what it would involve:

  • the MFM itself (a decent one could run tens of thousands of dollars)
  • the computer and storage space for the detailed imaging data (assuming it were digitized) from a hard drive, which would require tens to hundreds of terabytes of storage, probably in a RAID array for redundancy ($50,000 on up)
  • technicians to run the imaging process (maybe $40,000 per year each)
  • months of work to fully image the entire drive.
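The scale of the problem can be put into rough numbers. Every figure below is an illustrative assumption (drive size, samples kept per bit, effective scan rate), not a measured value; the point is only that plausible inputs land in the "tens of terabytes, months of work" range claimed above.

```python
# Back-of-envelope estimate of an MFM examination of a whole drive.
# All constants are assumptions chosen for illustration.
DRIVE_BYTES = 80e9             # an 80 GB drive of the era
IMAGE_BYTES_PER_BIT = 32       # assumed raw analog samples kept per bit
SCAN_BITS_PER_SECOND = 1e5     # assumed effective MFM scan throughput

drive_bits = DRIVE_BYTES * 8
storage_needed = drive_bits * IMAGE_BYTES_PER_BIT      # bytes of imagery
months = drive_bits / SCAN_BITS_PER_SECOND / (30 * 24 * 3600)

print(f"storage: {storage_needed / 1e12:.0f} TB, time: {months:.1f} months")
# → storage: 20 TB, time: 2.5 months
```

Change any assumption by an order of magnitude and you still get a result that ties up serious equipment and staff for weeks to months per drive.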

In other words, for someone to want to take a microscope to your hard drive, they'd have to justify tying up $50-$100k of equipment (or more), plus one or more technicians, for several months, all for potentially zero payoff. And from what I know of how government works, labs with this sort of specialized capability are very likely not plentiful--not when there are usually other options for obtaining evidence and convictions.

Do you really think there's anything you've done that would warrant that kind of attention and resources?

As I said at the outset, I'm not a forensics expert, so take all of this for what it is: an educated guess. Nonetheless, I am extremely skeptical that anyone who is not involved in espionage, terrorism or running international crime syndicates/child porn rings, etc. will ever face a realistic chance of having techniques brought to bear that can't be defeated by a pass or two of overwriting, plus judicious erasing of system tracks. You are far more likely to have your ISP's records subpoenaed, your phone tapped, etc.

Just my $.02.
 
I agree with the above. I've always wondered why I see so many topics like this when it comes to privacy software like Eraser. 90% of what I've seen are questions that are all pretty much the same: can product X defeat EnCase or some type of forensic hardware? I think people take this kind of thing too far, because if you were in serious trouble with the police, you wouldn't take a chance on shredding information electronically; you'd play it safe and dispose of your dirty little secrets physically.

It's the same when it comes to encryption. People ask, can agency X break this encryption? And I very much doubt they would have to, because if your system was taken by someone like the HTCU and you were taken in for questioning, you would have no choice but to hand over your password; if you don't, you're going to take a holiday in one of Her Majesty's hotels.
 
Help!

I think your program is great, and I'd like to start using it.
Before putting it to work, I gave it to my computer science professor to test out.

He used encase.

Started with a clean hard drive.
Created two text files: one small (a few KB), one a bit larger (a few hundred KB).

Wiped both.

Encase found nothing useful from the larger file.
EnCase was, however, able to recreate the entire contents of the small file.

What gives?

Thanks much,
Matthew
 
Re: Help!

leder said:
I think your program is great, and I'd like to start using it.
Before putting it to work, I gave it to my computer science professor to test out.

He used encase.

Started with a clean hard drive.
Created two text files: one small (a few KB), one a bit larger (a few hundred KB).

Wiped both.

Encase found nothing useful from the larger file.
EnCase was, however, able to recreate the entire contents of the small file.

What gives?

Thanks much,
Matthew

Matthew,
Did your professor make sure that there were no other copies of the file anywhere (e.g. in a swap file, etc.)? Did you attempt to replicate the results, or try eraser with other files?
 
Before and after test

I just ran a test with two text files, one large and one small. I used Winhex to view the actual disk contents, before and after an erase.

Before the erase, the sector hex map clearly displayed the file text. After an Eraser wipe, the contents at that location were gone.

Not sure what your professor found, Matthew, but either data is on the disk or it's not. If it's overwritten, neither Winhex, EnCase nor any other file recovery program can recover it. And according to a direct disk view, Eraser does what it claims to do.
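This kind of before/after check can be reproduced without Winhex by scanning a raw image for a known marker string. Below is a sketch that uses an ordinary file as a stand-in "disk image" (scanning a real device would need administrator rights and the raw device path, e.g. \\.\PhysicalDrive0 on Windows; the filenames and marker here are made up for the demo).

```python
import secrets

MARKER = b"ERASER-TEST-MARKER"

def contains_marker(image_path: str, marker: bytes) -> bool:
    """Scan a raw image chunk by chunk for a byte pattern,
    keeping a small tail so matches spanning chunks aren't missed."""
    tail = b""
    with open(image_path, "rb") as f:
        while True:
            chunk = f.read(1 << 20)
            if not chunk:
                return False
            if marker in tail + chunk:
                return True
            tail = chunk[-(len(marker) - 1):]

def overwrite_region(image_path: str, offset: int, length: int) -> None:
    """Simulate one wiping pass over a region of the image."""
    with open(image_path, "r+b") as f:
        f.seek(offset)
        f.write(secrets.token_bytes(length))

# Build a toy 'disk image' with the marker buried in it.
with open("disk.img", "wb") as f:
    f.write(secrets.token_bytes(4096))
    offset = f.tell()
    f.write(MARKER)
    f.write(secrets.token_bytes(4096))

print(contains_marker("disk.img", MARKER))   # True: marker is on 'disk'
overwrite_region("disk.img", offset, len(MARKER))
print(contains_marker("disk.img", MARKER))   # False: overwritten, gone
```

If a second copy of the file exists anywhere in the image (swap, temp files, the MFT), the scan will still find the marker there, which is the most likely explanation for the professor's result.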

I would suspect that your professor found a second copy of the file, or erased the file incorrectly. Either that, or Eraser has some obscure bug that causes it to sometimes not erase data. But if that's the case, I haven't seen it crop up yet.
 
obscure bug

I believe that the good professor has indeed found an obscure bug.
Here are the details in a nutshell:

With the NTFS file system, Microsoft saves small files directly inside the Master File Table (MFT)...instead of saving a pointer to the file.

It appears that Eraser does not completely wipe out the file if it's stored in the MFT. In the case of our test, it wiped out the file name and the beginning of the file, but most of the contents remained completely intact!

I have a couple of screenshots from Encase that prove this is what's happening. Please email me if you'd like me to send them your way.

Any chance you guys can fix this?
 
Re: obscure bug

leder said:
I believe that the good professor has indeed found an obscure bug.
Here are the details in a nutshell:

With the NTFS file system, Microsoft saves small files directly inside the Master File Table (MFT)...instead of saving a pointer to the file.

It appears that Eraser does not completely wipe out the file if it's stored in the MFT. In the case of our test, it wiped out the file name and the beginning of the file, but most of the contents remained completely intact!

I have a couple of screenshots from Encase that prove this is what's happening. Please email me if you'd like me to send them your way.

Any chance you guys can fix this?

Interesting observation. The Eraser FAQ says that Eraser doesn't clear the MFT, which clearly can contain the entire file:

http://www.pcguide.com/ref/hdd/file/ntfs/archMFT-c.html

Of course, in order for this to happen, the entire file has to fit in the MFT entry, which is limited to 1024-2048 bytes or so, so a 5kB file wouldn't normally fit in its entirety.

However, there seems to be some conflicting information regarding Eraser and the MFT, as Eraser's options for the "clearing unused space" function include MFT clearing.

The MFT, as I understand it, is just a file, and it clearly CAN have entries erased: Encase and Winhex, among others, presumably clear these entries.

So the question becomes: does Eraser clear MFT entries and data for a file when erasing that file, and if not, what needs to be done to implement the feature?
 
Well, I just ran two tests. In the first, I created a text file with a short, nonsense name in the root (C:\) directory. In the file, I put another nonsense word. I proceeded to run Disk Investigator to search for the nonsense word that had been placed within the text file. It found the word twice. I erased the file using Eraser 5.7, and it got rid of one copy of the nonsense word. Damning, right? Obviously, Eraser only got rid of one copy.

Well, not quite. I then proceeded to create another file with a different nonsense name, containing a different nonsense word, on another computer. Both the filename and the file contents of the second file were the same length as those of the first.

I then copied this file to my test computer.

I did a search on the drive, and it found the nonsense word contained in the file only once, in the MFT (Disk Investigator indicated the entire file was resident in the MFT).

I noted the location on the disk of the MFT entry.

I then erased this file using Eraser 5.7, as before. I went back to the file entry in the MFT, and it was overwritten.

(continued)
 
(continued from previous post)

Now, a subsequent all-disk search for the nonsense word in the erased file did find it again, in the NTFS transaction log file $LogFile. Not sure how long that will last--the log file is 4MB, which means it probably gets overwritten. I also don't know whether or not the logfile can be turned off, or dealt with in some other way. Work is ongoing to determine exactly when the file data gets written to $LogFile, and thus missed by file-erasing programs.

The moral of the story is twofold: first, Eraser does erase entries and data in the MFT on NTFS systems. Second, if you create a file on your test system (as opposed to copying it from a network drive or some other source), it's quite possible that more than one copy will be written to the disk, possibly as part of a temp file. If your professor created the test files in this manner, it's quite possible that this is what Encase found.

So, it would appear, at least from every test I've been able to run, that Eraser 5.7 does what it's supposed to do. It gets rid of the file you ask it to erase. Of course, it can't do anything about temporary or leftover, old file fragments, which is why clearing unused space on the drive periodically is a good idea.

Those wishing to run tests like these can easily do so with the freeware Disk Investigator program:

http://www.theabsolute.net/sware/dskinv.html
 
OK, sorry about all the posts, but I thought I'd share what I've learned on this topic this evening.

First, it would appear, from what I've seen, that Eraser does what it's supposed to do: namely, wipe files, including files whose data is stored entirely in the MFT.

However, Eraser (and probably every other file-wiping utility available) can't wipe the stray "images" left behind when a file has been relocated on disk without clearing all the unused space on the drive. Moreover, there appears to be no way to remove remnant data from a file that's been written to the NTFS $LogFile; that system file is part of the NTFS file system, it's used for filesystem journaling and recovery, and if you mess with it, you risk your system.

The good news is that the $LogFile seems to get overwritten on its own periodically, so data in the $LogFile file has a limited shelf life, regardless. And it would seem that one can change the size of the $LogFile, making the turnover process more rapid by shrinking the file.
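On Windows, the NTFS log size can be inspected and changed with the built-in chkdsk utility's /L switch (NTFS volumes only). Treat this as a sketch: the exact behavior varies by Windows version, resizing requires an elevated prompt, and you should check `chkdsk /?` on your own system first.

```shell
rem Display the current NTFS log file size on C: (makes no changes)
chkdsk C: /L

rem Resize the log to 4096 KB (4 MB); smaller log = faster turnover
rem of any remnant data cached in $LogFile
chkdsk C: /L:4096
```

Shrinking the log doesn't erase what's currently in it; it just shortens how long remnants can linger before being overwritten by new journal activity.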

It would appear that these are fundamental limitations of NTFS; I'm not sure even a free-space wipe will clear any data kept in the $LogFile (probably not). You will have some data leakage, at least for a while. Perhaps running an encryption program such as TrueCrypt, or using Windows' Encrypting File System, would keep the information truly private by ensuring that no data gets stored in the $LogFile as plaintext. $LogFile seems to be used to temporarily cache data written to the disk, so if that data is encrypted, then it doesn't matter whether a bit of it is recovered.
 
It has been known for ages that overwriting is not entirely effective on journaled filesystems, such as NTFS. These filesystems are not guaranteed to overwrite data in place, and in fact, they usually create temporary copies of the data in a journal or a log file. I think this should be emphasized in Eraser's documentation.

There is nothing any userspace program can do about this. The only way to achieve better results would be to build the overwriting mechanism into the kernel. Since Windows' source code is closed, only Microsoft could implement this for NTFS.

However, as Kythe already wrote, a much better solution would be to use encryption. I haven't used Windows in a while, so I cannot recommend any particular software, but knowing Microsoft's track record in security, I would be wary of trusting EFS too much.
 