I was recently asked to review specific findings of other forensic experts related to the “recovery of deleted files.” Another expert found specific behavior highly suspicious, but after a careful review, I concluded that it was more an artifact of how file systems recycle media. In this post, I will walk through my logic in the hope it will be helpful to others working to understand how FAT works and the limitations of data recovery.
First, an important point to remember is that file systems are traditionally lazy about cleaning deleted space. The naïve way to think of this is that when we delete a file, we zero out the disk space. However, that turns out to be extremely expensive. Many years ago, we had a project that combined meta-data management, done by a single computer, with data management, done by multiple computers. When one computer wanted to write data to a file, it would first adjust the size of the file by asking the meta-data managing computer to extend the file’s allocated space. We found that when we did this over the network (using the CIFS/SMB protocol), the meta-data managing computer would zero-fill the entire file, and the request would always fail at a specific size (around 1.8GB, as I recall). A timeout caused this failure – the SMB protocol would only wait for a response for so long, after which it would declare the operation had failed, and the client computer would “give up.” The time it took to zero out that much space was longer than the timeout value! Our solution to this was intriguing: we added a “filter” on the drive where the meta-data managing computer’s file system was located and threw away any requests to zero out data blocks on the drive. We knew this was “safe enough” because our client, in turn, would write data to those regions as soon as the allocation was completed.
Our system ran on top of a shared storage mechanism (Fibre Channel) that permitted multiple computers to access the same storage. We used the meta-data management computer to “partition” the space. Once partitioned, it was safe for the client to write directly to the disk. This provided several benefits, not the least of which was better parallel I/O.
This experience explains why we do not routinely scrub disk storage: it is expensive and, in most cases, unnecessary. In recent years, storage devices have changed so that file systems can note that a block has been freed, which permits the media to handle the zeroing itself (e.g., via the ATA TRIM command; see the sketch below). Similarly, FAT file system implementations generally do not zero out a file’s data when it is deleted. To better understand this, we will look at the two key FAT data elements involved: the directory entry and the file allocation table.
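Before diving into FAT, a quick aside to make that “tell the device what is free” mechanism concrete. The following is a minimal sketch in C, on Linux, that asks a mounted file system to report its free ranges to the underlying device via the FITRIM ioctl; the device can then erase or lazily zero those blocks on its own schedule. It is only an illustration of the idea, not anything specific to FAT or Windows.

```c
/* Minimal sketch (Linux): ask a mounted file system to pass its free
 * ranges down to the underlying device via the FITRIM ioctl.  The device
 * can then erase or "lazily zero" those blocks on its own schedule; the
 * file system itself never scrubs them.  Usually requires root. */
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>                 /* FITRIM, struct fstrim_range */

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s <mount-point>\n", argv[0]);
        return 1;
    }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    struct fstrim_range range;
    memset(&range, 0, sizeof(range));
    range.len = (unsigned long long)-1;   /* trim the whole file system */

    if (ioctl(fd, FITRIM, &range) < 0)    /* not every file system supports this */
        perror("FITRIM");
    else
        printf("trimmed %llu bytes\n", (unsigned long long)range.len);

    close(fd);
    return 0;
}
```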

The FAT file system connects a file to its allocated storage by recording the index of the file’s first block of storage in the file’s directory entry. Each block is some uniform size (usually referred to as the cluster size), and there is a table (the file allocation table) that tracks how each block chains to the next.
Thus, the directory table entry for a file contains a first cluster number, which identifies both a block of storage on the media and a corresponding entry in the file allocation table. That entry in the file allocation table points to the next block of storage allocated to the file; when there are no more blocks, a distinguished end-of-chain value marks the end of the chain. In addition, the file size is part of the directory entry, so we know how much of the data is valid (anything beyond the size is “slack space” and may contain garbage, though Windows seeks to fill it with zero data to ensure that a user won’t see old data within the block). Since FAT does not offer multi-user security, zeroing the slack (unused) space in the final block is not required – and is typically omitted by simple FAT implementations.
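To make these two structures concrete, here is a small sketch in C. The 32-byte directory-entry layout follows the published FAT short (8.3) entry format; the walk_chain() helper and the tiny in-memory table in main() are my own illustrations, using FAT16-style values (0x0000 for a free cluster, 0xFFF8 and above for end-of-chain).

```c
#include <stdint.h>
#include <stdio.h>

#pragma pack(push, 1)
typedef struct {
    uint8_t  name[11];         /* 8.3 name; name[0] == 0xE5 marks a deleted entry */
    uint8_t  attr;             /* attribute flags (read-only, hidden, directory, ...) */
    uint8_t  nt_reserved;
    uint8_t  create_time_tenth;
    uint16_t create_time;
    uint16_t create_date;
    uint16_t last_access_date;
    uint16_t first_cluster_hi; /* high 16 bits of the first cluster (FAT32 only) */
    uint16_t write_time;
    uint16_t write_date;
    uint16_t first_cluster_lo; /* low 16 bits of the first cluster */
    uint32_t file_size;        /* valid bytes; the rest of the last cluster is slack */
} fat_dirent_t;                /* 32 bytes on disk */
#pragma pack(pop)

#define FAT16_FREE     0x0000  /* cluster is free */
#define FAT16_EOC_MIN  0xFFF8  /* values >= this mark end-of-chain */

/* Follow a FAT16 cluster chain starting at the directory entry's first
 * cluster, printing each cluster number.  'fat' is an in-memory copy of
 * the file allocation table and 'fat_entries' its length. */
static void walk_chain(const uint16_t *fat, uint32_t fat_entries,
                       const fat_dirent_t *de)
{
    uint32_t cluster = de->first_cluster_lo;     /* FAT12/16: high word unused */

    while (cluster >= 2 && cluster < fat_entries) {
        printf("cluster %u\n", (unsigned)cluster);
        uint16_t next = fat[cluster];
        if (next >= FAT16_EOC_MIN || next == FAT16_FREE)
            break;                               /* end of chain (or orphaned entry) */
        cluster = next;
    }
}

int main(void)
{
    /* Tiny in-memory FAT: one file occupying clusters 2 -> 3 -> 4. */
    uint16_t fat[8] = {0};
    fat[2] = 3;
    fat[3] = 4;
    fat[4] = 0xFFFF;                             /* end-of-chain marker */

    fat_dirent_t de = {0};
    de.first_cluster_lo = 2;
    de.file_size = 5000;                         /* last cluster is partly slack */

    walk_chain(fat, 8, &de);
    return 0;
}
```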
Of course, when new media is formatted, most of the space is free. Free blocks are tracked using the same mechanism: a free cluster is simply marked as free in the file allocation table. Here is where things get interesting: the Windows implementation of FAT attempts to allow some level of parallel allocation. The Windows kernel has a concept of map control blocks (see FsRtlInitializeLargeMcb for a breadcrumb to the various functions for managing MCBs), which is used in the FAT file system to manage space allocation. This is because, for Windows, parallel access to the allocation data was important. In a simple device, such as a camera, there is no parallel access – the camera is the only thing accessing the FAT media – which keeps the implementation simple. This often means that for Windows, FAT storage space is recycled in a far less predictable order. A simple FAT implementation, however, often frees space by inserting it at the front of the free list and then allocates new space from the front of said list. There might be reasons not to do this (e.g., wear leveling), but it turns out that modern flash media typically implements wear leveling in the hardware, and thus there is no reason for the software to care.
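To see why that simple policy matters, here is a toy model of it in C. This is not the Windows MCB-based allocator, and it is not the on-disk format either (on disk, a free cluster is simply marked free in the table); it just models the “free to the front, allocate from the front” behavior described above.

```c
/* Toy model of the simple recycling policy described above: clusters are
 * freed to the front of a free list and allocated from the front, so the
 * clusters of the most recently deleted file are the first ones handed to
 * the next file.  An illustration of the policy only, not code from any
 * real FAT implementation. */
#include <stdint.h>
#include <stdio.h>

#define NCLUSTERS 16

static uint16_t next_free[NCLUSTERS];  /* singly linked free list over cluster numbers */
static uint16_t free_head = 0;         /* 0 means "no free cluster"; clusters 2.. are usable */

static void free_cluster(uint16_t c)
{
    next_free[c] = free_head;          /* push onto the front of the list */
    free_head = c;
}

static uint16_t alloc_cluster(void)
{
    uint16_t c = free_head;            /* pop from the front of the list */
    if (c != 0)
        free_head = next_free[c];
    return c;
}

int main(void)
{
    /* Start with clusters 2..5 free, lowest numbers at the front. */
    for (uint16_t c = 5; c >= 2; c--)
        free_cluster(c);

    uint16_t file_a = alloc_cluster();     /* "take a picture": gets cluster 2 */
    printf("file A got cluster %u\n", (unsigned)file_a);

    free_cluster(file_a);                  /* "delete the picture" */

    uint16_t file_b = alloc_cluster();     /* the next picture reuses cluster 2 */
    printf("file B got cluster %u\n", (unsigned)file_b);
    return 0;
}
```

Run it and file B receives exactly the cluster that file A just gave up, which is the whole point.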
Why does this matter? Because it means that, with high likelihood, if you delete a picture, the space allocated on the media for that file will be pushed to the front of the free list. The next time you take a picture, the software on the camera will reuse those blocks because it just pulls them from the free list. Ordinarily, this is not an issue for anyone. Still, for someone performing forensics, it means that if you find a deleted image, the directory table entry will still point at the media blocks used when that file existed. Thus, we end up with a situation where an old file appears to point to the contents of a new file. If the files are of very different types, say a Word document and a cat video, this situation is relatively easy to identify because Word won’t understand the cat video, and the video software won’t understand the Word file. For a camera, however, the pictures it captures will often be quite similar. Thus, this can lead to a situation where a deleted file, found in the directory table with a filename that starts with the deletion marker (0xE5, meaning “this entry is not currently in use”), has a first cluster value that still points to something.
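Here is what such a scan looks like, in sketch form. The 0xE5 deletion marker and the 0x00 “end of directory” byte come from the FAT format itself, while list_deleted_entries() and the sample directory in main() are my own illustrative constructs (the 32-byte entry layout is the same one shown earlier).

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#pragma pack(push, 1)
typedef struct {                /* same 32-byte layout as the earlier sketch */
    uint8_t  name[11];
    uint8_t  attr;
    uint8_t  nt_reserved;
    uint8_t  create_time_tenth;
    uint16_t create_time;
    uint16_t create_date;
    uint16_t last_access_date;
    uint16_t first_cluster_hi;
    uint16_t write_time;
    uint16_t write_date;
    uint16_t first_cluster_lo;
    uint32_t file_size;
} fat_dirent_t;
#pragma pack(pop)

#define DIRENT_DELETED 0xE5     /* first name byte: "this entry is not in use" */
#define DIRENT_END     0x00     /* first name byte: no further entries follow */
#define ATTR_LONG_NAME 0x0F     /* long-file-name fragments; skip them here */

/* Walk a raw directory region and report deleted entries.  Each one still
 * carries its old first-cluster number and file size - exactly the values
 * a recovery tool latches on to, whether or not the clusters were reused. */
static void list_deleted_entries(const fat_dirent_t *entries, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        const fat_dirent_t *de = &entries[i];

        if (de->name[0] == DIRENT_END)
            break;                                /* end of this directory */
        if (de->name[0] != DIRENT_DELETED)
            continue;                             /* entry is still live */
        if ((de->attr & 0x3F) == ATTR_LONG_NAME)
            continue;                             /* LFN fragment, not a short entry */

        printf("deleted entry ?%.10s  first cluster %u  size %u\n",
               (const char *)de->name + 1,
               (unsigned)de->first_cluster_lo,
               (unsigned)de->file_size);
    }
}

int main(void)
{
    fat_dirent_t dir[3];
    memset(dir, 0, sizeof(dir));

    memcpy(dir[0].name, "PHOTO02 JPG", 11);      /* a live entry */
    dir[0].first_cluster_lo = 9;

    memcpy(dir[1].name, "\xE5HOTO01 JPG", 11);   /* a deleted entry ("?HOTO01.JPG") */
    dir[1].first_cluster_lo = 2;                 /* old first cluster - possibly reused */
    dir[1].file_size = 123456;

    /* dir[2].name[0] == 0x00 terminates the directory scan. */
    list_deleted_entries(dir, 3);
    return 0;
}
```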
Sometimes that something will be another image. Thus, a forensic examiner needs to remember that what they are looking at might be the old contents of that now-deleted file, but then again, it might not be. If we recover the data contents, we may find they are the same as some other active file’s (the sketch below shows one way to check whether the cluster has since been reallocated). This does not indicate anything wrong: it is inherent in how the FAT file system works. While we can sometimes infer information from “recovered” data on a FAT-formatted media device, we must be cautious about drawing firm conclusions. This is why I ended up disagreeing with the other forensic examiner’s claim that someone had tampered with the evidence: no tampering was required to produce precisely this behavior.
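One concrete cross-check an examiner can make, again using FAT16-style values (the helper name and the toy table here are mine): look up the deleted entry’s first cluster in the file allocation table and see whether it is currently allocated. If it is, the bytes stored there most likely belong to a live file; even if it is free, the cluster may have been reused and freed again since, so neither answer is conclusive.

```c
#include <stdint.h>
#include <stdio.h>

#define FAT16_FREE 0x0000       /* a free cluster is marked 0 in the table */

/* Given the first-cluster value recovered from a deleted directory entry,
 * check whether that cluster is currently allocated in the FAT.  A nonzero
 * entry means the cluster now belongs to some live chain, so the bytes
 * stored there likely belong to a newer file, not the deleted one. */
static void assess_deleted_first_cluster(const uint16_t *fat,
                                         uint32_t fat_entries,
                                         uint32_t first_cluster)
{
    if (first_cluster < 2 || first_cluster >= fat_entries) {
        printf("cluster %u: out of range, nothing to recover\n",
               (unsigned)first_cluster);
        return;
    }

    if (fat[first_cluster] == FAT16_FREE)
        printf("cluster %u is free: contents *may* still be the deleted file's data\n",
               (unsigned)first_cluster);
    else
        printf("cluster %u has been reallocated: contents likely belong to a live file\n",
               (unsigned)first_cluster);
}

int main(void)
{
    uint16_t fat[8] = {0};
    fat[2] = 3;                 /* clusters 2 -> 3 now belong to a live file */
    fat[3] = 0xFFFF;

    assess_deleted_first_cluster(fat, 8, 2);    /* reused since deletion */
    assess_deleted_first_cluster(fat, 8, 5);    /* still free */
    return 0;
}
```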
Ultimately, it is vitally important that anyone performing such forensic investigation of media understand the on-disk format. Having additional insight into how different systems implement that on-disk format can be important in understanding the device’s state. This is particularly important when considering portable media, since it may be modified by different devices, each of which might implement its functionality differently. This is acceptable: the on-disk format only defines what constitutes valid state for information stored on the media (e.g., a “consistent state” for the given file system); it does not dictate a specific implementation of that behavior. As an investigator, it is also crucial to always keep an open mind. In this case, I had to ask myself: “Could this have happened in some way other than what the other expert said must have happened?” We all have biases, but as experts, part of our job is to control for those biases to minimize the likelihood that we will “see what we want to see.”
In this case, I had to notify the client that I disagreed with the other expert’s analysis. They were gracious about it but did not use my services further on the case. I do not take this personally: I did my job. I did not tell the client what they wanted to hear, but I gave them an opinion based on my analysis and understanding of the underlying system. My integrity is more important than delivering comfortable but incorrect information to a client, since I must always be willing to defend my findings. None of us likes to be wrong, but sometimes we miss something. The nature of expert work in forensics is to look at these details and see whether we can exclude other possibilities. Then, as Sherlock Holmes says: “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”