One of the first things security-conscious computer users learn is that multiple overwrites with ones and zeros are required to wipe data so thoroughly that no forensic analysis tool can recover it. According to this article on Softpedia, which I stumbled upon while searching for something else, that statement is a myth; a busted myth, actually. Experts now claim that a single complete overwrite is enough to render the data unrecoverable.
Referring to the myth, the author of the article writes:
One of the reasons behind this idea is that the positioning of a hard disk drive’s head is not precise enough to ensure that the data is overwritten with new information from the exact same byte.
A study published in December 2008 claims that tests performed on both latest-generation and older hard drives have shown that recovering even a single byte of data after a complete overwrite is practically impossible.
Security researchers from Heise Security, who have reviewed the paper presented at last year’s edition of the International Conference on Information Systems Security (ICISS), explain that a single bit of data can be recovered with a 56 percent probability; recovering a whole byte requires positioning the head precisely eight times in a row, which has a probability of only 0.97% (0.56^8 ≈ 0.0097).
Since I was one of those who believed the statement about multiple overwrites, I found the article very interesting. I haven’t read the study itself, though.
Oh my! Now it’s better… I feel good now.
Thanks a lot… great article…
I was wasting my time dealing with multiple wipes for security reasons, since I, too, was a believer in multiple overwrites.
Well, it seems to me that showing that one’s methodology or tools cannot accomplish something doesn’t prove that it can’t be done in general. Every day we handle sensitive information, but very few of us are responsible for datasets large enough for shred(1) to be overkill.
Michael: That is correct. There might be other techniques or tool sets that are a bit more efficient than the ones used in this study’s tests, but I really doubt that any methodology can do magic. Of course, using shred to wipe data is not overkill. But I also doubt that shred can delete all instances of documents that have been temporarily saved on the filesystem as backup copies, autosave copies, or older versions kept by various applications. Basically, what I care about most is that a simple dd command:
dd if=/dev/zero of=/dev/sdX
is more than enough to effectively destroy the data on the hard drive. Until now, there was always uncertainty about the capabilities of data recovery tools. I had never heard of such a study before.
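For the sake of completeness, a slightly more verbose variant of the same single-pass wipe could look like the following; the bs and conv options are just GNU dd conveniences for larger block writes and for flushing the data at the end, and /dev/sdX is, as above, a placeholder for the actual target drive:

dd if=/dev/zero of=/dev/sdX bs=1M conv=fsync

Nothing about the wipe itself changes; the drive simply ends up filled with zeros a bit faster.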
I’ve just changed the default number of passes to 3 in shred:
http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=commit;h=83ae1bdd44432055e2cb6cf1502d1cc0cd651746
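The number of passes can still be set explicitly on the command line, for example:

shred -n 1 -z /dev/sdX

which does a single overwrite pass followed by a final pass of zeros.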
Pádraig: thanks for taking the time to comment. I have posted the information in a separate post.