HDDSuperClone

maximus

Member
I would like to correct my earlier statement, where I thought that R-Studio was using OS buffering instead of direct access. There must have been too much going on in my head at the time, or I would have realized that, since none of the reads were bigger than 4K, there was no OS padding going on. So R-Studio is using direct access mode, which does not use OS buffering. Not that anyone cares about that as long as it works ;)
 

maximus

Member
While tweaking the driver and modes, I did a test on a 160GB WD drive that has the slow issue (basically about a 2 second delay for each read attempt). The drive will show up, but the OS can't deal with it. I already know the drive has a small bad spot in the middle (I don't remember how many sectors), but the rest of the drive is fine. I have created a driver mode that will increase the read size up to the cluster size if sequential reading is detected, similar to OS read-ahead buffering. I used a USB adapter to make the drive USB to simulate the real world, which limits the cluster size to 240 sectors. I calculate that it is capable of reading about 4.8GB a day like that on this drive, although it can get speed bursts where it goes faster.
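The throughput estimate above can be sanity-checked with a little arithmetic. The 2-second delay and the 240-sector read size come from the post; the 512-byte sector size is an assumption:

```python
SECTOR_BYTES = 512        # assumed standard sector size
READ_SECTORS = 240        # max read size over the USB adapter (from the post)
DELAY_S = 2.0             # approximate delay per read attempt on this drive

bytes_per_read = READ_SECTORS * SECTOR_BYTES        # 122,880 bytes per read
reads_per_day = 86400 / DELAY_S                     # 43,200 reads per day
gb_per_day = bytes_per_read * reads_per_day / 1e9   # ~5.3 GB/day

# A bit above the quoted 4.8GB/day, consistent with real delays
# running slightly over 2 seconds per attempt.
print(round(gb_per_day, 1))
```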

So I fired up hddsuperclone with that driver setting and let R-Studio work on the MFT data. That took between 40 and 50 minutes (I was out of the room, so I did not get the exact time) and was about 270MB. This drive does not have any files to recover except operating system and program files, so I left it at that. But let's say the scenario was that only some documents and pictures needed to be recovered. If that was only an additional 500MB or so, it could be done in a couple of hours. And it can be stopped and resumed, since data already read will come from the destination. And with the image file being sparse, you can work on a large source drive while having only a much smaller system drive. Proof of concept :)
 

lcoughey

Moderator
It looks like you are on the right track with allowing other apps to do the file system recovery portion of the work.

One thing that would be useful is to tell hddsuperclone to image used sectors by bitmap.
 

Jared

Administrator
Staff member
lcoughey said:
One thing that would be useful is to tell hddsuperclone to image used sectors by bitmap.

+1 to that. It saves a ton of time if you can just image the used sectors instead of fighting to image everything on a drive with a lot of bad sectors.
 

maximus

Member
Yes, imaging only the used space can be very useful. I have some experience with extracting the bitmap file from NTFS and creating a ddrescue domain file from it with my ddrutility creation. But make no mistake, I am going to see whether this can be done with yet another 3rd party program, one which can handle more than just NTFS partitions. There is a free utility called Partclone, which is capable of creating a ddrescue domain file of the used space of a partition. I have never used it, and there would need to be a bit of explanation of how to use it properly to get an accurate domain file. But if it works, I will again save myself the pain of dealing directly with the filesystem. Some testing will be in order to see how feasible it is. I have no idea how it would handle a bad sector in a bitmap file, but I could make an option so that all bad sectors return 0xFF, which would produce a bitmap file on the "safe" side (ddrutility does that). The driver opens up all sorts of possibilities :)
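As a rough illustration of the bitmap-to-domain idea (a sketch of the concept, not ddrutility's or Partclone's actual code), here is a minimal function that turns an NTFS-style allocation bitmap (one bit per cluster, least-significant bit first) into ddrescue domain-file lines, marking used clusters '+' and free clusters '?'. The function name and the exact header lines are my own assumptions:

```python
def bitmap_to_domain(bitmap, cluster_size, part_offset=0):
    """Convert an allocation bitmap (1 bit per cluster, LSB first)
    into ddrescue domain-file lines: '+' = used, '?' = not in domain."""
    total = len(bitmap) * 8
    lines = ["# ddrescue domain file generated from an NTFS $Bitmap",
             "0x00000000     +"]  # status line: current position / status
    c = 0
    while c < total:
        used = (bitmap[c >> 3] >> (c & 7)) & 1
        start = c
        # Extend the run while the allocation bit stays the same.
        while c < total and ((bitmap[c >> 3] >> (c & 7)) & 1) == used:
            c += 1
        lines.append("0x%08X  0x%08X  %s" % (
            part_offset + start * cluster_size,
            (c - start) * cluster_size,
            "+" if used else "?"))
    return lines
```

A bad sector in the bitmap itself could be handled exactly as described in the post: substitute 0xFF for the unreadable bytes before conversion, so the affected clusters all land on the "safe" (used) side.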
 

maximus

Member
To add to the last post: since Partclone is a command line tool, I could potentially add a function in hddsuperclone to perform the needed actions with it with minimal user intervention. But again, I still have to test to see if it will even work in a worthwhile way.
 

maximus

Member
Scratch that last idea about adding a function to hddsuperclone to perform actions with partclone; that is too specific. I am already working on having the driver mode create a domain based on what data has been requested. And I already have a driver mode that only returns data that has already been read, and otherwise returns an IO error (I plan on adding an option to choose between returning an IO error or something else, like all zeros or marked data).

With this combination, you could run partclone with a destination of /dev/null (assuming it will accept that), and when it got the bitmap and started the cloning, you would switch the driver mode in hddsuperclone (which can be done on the fly for reasons like this). Something like this may take several minutes or longer to create a domain file, but it most definitely can be done, and it does not care what 3rd party tool is being used. The biggest limitation of this process is whether the 3rd party tool will accept /dev/null as the destination. But if that were to become an issue, I could potentially create a second driver device that reacts as needed while still being an empty hole like null.
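The "build a domain from whatever the tool requests" mode could look something like this in spirit (a hypothetical sketch, not hddsuperclone's actual driver code): record the offset and size of every read the virtual device receives, merge overlapping or adjacent ranges, and dump the result as domain-file extents:

```python
class RequestLog:
    """Sketch: track every read request a virtual device receives and
    merge the byte ranges into ddrescue-domain-style extents."""

    def __init__(self):
        self.extents = []  # sorted, non-overlapping (start, end) pairs

    def record(self, offset, size):
        """Merge a new [offset, offset+size) request into the extent list."""
        start, end = offset, offset + size
        kept = []
        for s, e in self.extents:
            if e < start or s > end:      # disjoint: keep as-is
                kept.append((s, e))
            else:                          # overlapping/adjacent: absorb
                start, end = min(start, s), max(end, e)
        kept.append((start, end))
        kept.sort()
        self.extents = kept

    def domain_lines(self):
        """Emit the requested ranges as ddrescue domain-file lines."""
        lines = ["0x00000000     +"]  # status line (assumed header format)
        for s, e in self.extents:
            lines.append("0x%X  0x%X  +" % (s, e - s))
        return lines
```

Usage: let the 3rd party tool scan the virtual device, then write out `domain_lines()` as the domain file for the real clone.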
 

maximus

Member
Okay, I have now had a few inquiries about using HDDSuperClone as a forensic cloning tool. I don't know if it would even be possible to produce a hash on the fly when it does any skipping, and it has not been high on my priority list to look into. But other than creating a hash, what would make it a forensic cloning tool? It does make a sector by sector copy, and that seems to be the main thing. It does not write to or alter the source, but the disk must be attached to a computer, so I could never speak for what the OS does to the drive. I never thought I would get many questions about using it for forensics... :?
 

Jared

Administrator
Staff member
maximus said:
I could never speak for what the OS does to the drive

I think most forensics pros will be using a SATA or USB write blocker anyway, so the OS attempting to write won't matter. Besides, there are builds of Linux such as Parrot Linux which are specifically built for forensics and don't attempt any writes to the disks just from connecting them.

maximus said:
But other than creating a hash, what would make it a forensic cloning tool?

I think logging is the next biggest thing. It'd need to create a log file documenting everything, such as:
  • The serial number, model number and other details about the source and destination drives.
  • Start and stop times, date, etc.
  • Identifiable info about the system it was performed on
  • List of read and especially unread sectors (since those will need to be excluded when rechecking the checksum)
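On the hash question, one workable approach (my own sketch, not an existing hddsuperclone feature) is to hash only the sectors that were actually read, in offset order, and store the digest alongside the logged extent list. Anyone rechecking the clone replays the same extents against it and must arrive at the same digest, so unread sectors are excluded automatically:

```python
import hashlib

def hash_read_regions(image_path, read_extents):
    """Hash only the regions that were actually read, in offset order.

    read_extents is a list of (offset, size) pairs, e.g. taken from the
    clone log; the digest is reproducible against the clone by replaying
    the same extent list.
    """
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for offset, size in sorted(read_extents):
            f.seek(offset)
            h.update(f.read(size))
    return h.hexdigest()
```

The extent list itself would then need to be part of the logged evidence, since the digest is only meaningful together with it.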
 

maximus

Member
Jared said:
I think logging is the next biggest thing. It'd need to create a log file documenting everything, such as:
  • The serial number, model number and other details about the source and destination drives.
  • Start and stop times, date, etc.
  • Identifiable info about the system it was performed on
  • List of read and especially unread sectors (since those will need to be excluded when rechecking the checksum)
Well, it does some of that now. The serial and model of the source are in the progress log, but not those of the destination, because it could be anything. The current time is also inside the log, but not the start time. No info about the system. As for sectors read and not read, that information is there, as long as someone understands the log format.

I really have no intention of doing anything specifically for forensic purposes. There are plenty of programs for that already, which is why I am somewhat confused as to why it is being asked of hddsuperclone. I think all the inquiries about forensic application have been from outside the US, so I guess there could be different standards. All I think I can do is provide my software as I intend it to be.
 