User Testing of Eraser Builds
What we mean by user testing
For Eraser, this essentially means using a target build in a user environment (i.e. separately from the development environment) to confirm that:
- the Eraser UI works as described in relevant documentation (or, in the absence of such documentation, as the user might reasonably expect);
- 'under the hood' the Eraser engine is working as described, and in particular that material which is erased is not recoverable by any means available to a normal user.
Inevitably, this raises some questions, but I think it is good enough for working purposes.
How we test
For anything to do with the UI, it is simply a matter of running the program and observing its behaviour. The only external program we will need is Windows Explorer, for context menu erasures and to check that erased files disappear from the file list. For testing the engine, we need tools to attempt file recovery and also to check that identifiable parts of erased files do not remain on the disk surface. For these tasks, I intend to use Recuva and HxD respectively.
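One way to make the HxD search step reliable is to seed each test file with a distinctive byte signature before erasing it, so there is a known, unusual pattern to look for on the disk afterwards. A minimal sketch in Python (the marker value and file name are my own illustrative assumptions, not part of any agreed procedure):

```python
import os

# Hypothetical marker: an unusual byte sequence unlikely to occur by chance,
# so a sector editor such as HxD can search for it after an erasure.
MARKER = b"ERASER-TEST-7f3a9c"

def write_test_file(path, size=4096):
    """Create a test file of `size` bytes with the marker at a known offset."""
    payload = MARKER + os.urandom(size - len(MARKER))
    with open(path, "wb") as f:
        f.write(payload)

def contains_marker(data, marker=MARKER):
    """Check whether a byte buffer (e.g. a raw sector dump) still holds the marker."""
    return marker in data

write_test_file("eraser_test.bin")
with open("eraser_test.bin", "rb") as f:
    print(contains_marker(f.read()))  # True before erasure
```

After erasure, the same `contains_marker` check would be run against raw sectors dumped from the disk rather than against the (now deleted) file.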
The hierarchy of user testing
We cannot and should not test everything on every build. So we need to agree what kind of testing is needed for each particular build. In order of increasing complexity and effort required, the levels of testing are:
- No testing: Given the number of individual builds and the way the development process works, this is the default case. The user should only be asked to test things which need user verification; it may also make sense to test a number of different changes on a single build which incorporates them all.
- Bug fix testing: This involves implementing an agreed schedule of activity, either to isolate and describe the user experience of possible bugs or to verify that fixes are working for the user. It is concerned only with the specifics of a bug.
- Regression testing: This is like bug fix testing, but is used where a bug has already been fixed and there is a possibility that it has been re-introduced as the result of a subsequent code change.
- Feature testing: This works in a similar way to bug fix testing, to identify and describe the user experience of, and enable user comment on, newly implemented features. Again, such tests are typically conducted in isolation, on the assumption that the remainder of the application continues to work as before.
- Erasure effectiveness testing: This most complex and time-consuming level of user testing should only be required when significant changes are made to the erasing engine or when a build is proposed for issue as a stable release. Following the erasure of a defined data set, an attempt is made to recover the data using a trusted file recovery program (Recuva in my case); this procedure may be supplemented by using a sector editor (HxD in my case) to search the disk surface for known and unusual contents of erased files.
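The core of an effectiveness check along these lines can be sketched in Python. The file names, the single-pass zero overwrite, and the pass/fail criterion below are my own illustrative assumptions: in a real test, Eraser itself performs the erasure and Recuva/HxD perform the recovery attempt; the overwrite here merely stands in for one erasing pass to make the check concrete.

```python
import os

MARKER = b"ERASER-TEST-7f3a9c"  # assumed distinctive signature planted in test files

def create_data_set(directory, count=3, size=4096):
    """Create a small defined data set, each file seeded with the marker."""
    os.makedirs(directory, exist_ok=True)
    paths = []
    for i in range(count):
        path = os.path.join(directory, f"victim_{i}.bin")
        with open(path, "wb") as f:
            f.write(MARKER + os.urandom(size - len(MARKER)))
        paths.append(path)
    return paths

def simulate_single_pass_overwrite(path, pattern=b"\x00"):
    """Stand-in for one erasing pass: overwrite the file's bytes in place.
    (Eraser itself would do this; shown only to make the check concrete.)"""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        f.write(pattern * size)
        f.flush()
        os.fsync(f.fileno())

def marker_survives(path):
    """The pass/fail criterion: does the known signature remain in the file's bytes?"""
    with open(path, "rb") as f:
        return MARKER in f.read()

paths = create_data_set("erase_test")
for p in paths:
    simulate_single_pass_overwrite(p)
print(any(marker_survives(p) for p in paths))  # False: no marker survives the overwrite
```

Note that this only inspects the file's current clusters; the HxD step in the real procedure exists precisely because remnants may survive elsewhere on the disk surface.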
Bug fix, regression and feature testing can be done quickly; the main concern will be to ensure that we design and implement these tests appropriately (particularly in the case of regression testing), as each test will be more or less bespoke. Each erasure effectiveness test will require hours, and in some cases days, of running, so we need working assumptions about the minimum testing needed to provide assurance of effectiveness. The modular design of Eraser helps us here, and I think it is reasonable to say that:
- because there has never been a recorded case (to my knowledge) where erasing has worked with one method but not with another, we can test effectiveness with only one method and assume that the results apply to other methods also;
- similarly, if erasing initiated in one procedure (context menu, drag and drop or schedule) produces certain results in a test, those results can be assumed to apply also to the other procedures.
These assumptions would need to be reconsidered if changes are made to the erasing methods or to the engine. For example, once the Eraser engine is implemented as a service, we shall need to devise a specific test procedure to assure ourselves that the change has not compromised the reliability achieved by the old version.
This is a first version of the working assumptions about user testing, and is subject to amendment.