HDDSuperClone

maximus

Member
This is not related to the installation issue, but I would like to point out something that changed in the Linux kernel sometime after Ubuntu 16.04 (kernel 4.4). In kernels newer than 4.4, there is no longer any ATA register return data in ATA Passthrough mode, so the program must rely on the SCSI sense return data instead. It will still detect read errors and such, but reading the ATA registers is not possible. So for all of you who like having the latest and greatest version of Linux: sometimes the newest is not the best for everything.

This does not affect the direct modes, as those bypass the Linux drivers and give me direct control over the drive.
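
For anyone curious, here is a rough standalone example (just an illustration, not the actual program code) of the kind of passthrough I am talking about: it sends an ATA command through the Linux SG_IO interface with CK_COND set and then looks for the ATA register data in the returned sense buffer. It uses the harmless CHECK POWER MODE command; on the newer kernels I mentioned, the register descriptor simply never shows up.
Code:
#include <fcntl.h>
#include <scsi/sg.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/sdX\n", argv[0]);
        return 1;
    }
    int fd = open(argv[1], O_RDWR | O_NONBLOCK);
    if (fd < 0) {
        perror("open");
        return 1;
    }

    unsigned char cdb[16] = {0};
    unsigned char sense[32] = {0};

    cdb[0]  = 0x85;        /* ATA PASS-THROUGH (16) */
    cdb[1]  = 3 << 1;      /* protocol 3 = non-data */
    cdb[2]  = 0x20;        /* CK_COND=1: ask for the ATA registers back */
    cdb[14] = 0xE5;        /* ATA CHECK POWER MODE (harmless, non-data) */

    sg_io_hdr_t io;
    memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.cmd_len         = sizeof(cdb);
    io.cmdp            = cdb;
    io.mx_sb_len       = sizeof(sense);
    io.sbp             = sense;
    io.dxfer_direction = SG_DXFER_NONE;
    io.timeout         = 5000;   /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0) {
        perror("SG_IO");
        close(fd);
        return 1;
    }

    /* With CK_COND set, the registers should come back in descriptor-format
       sense data as an ATA Return descriptor (code 0x09). On kernels where
       this no longer works, the descriptor is simply missing. */
    if ((sense[0] & 0x7f) == 0x72 && sense[8] == 0x09)
        printf("status=0x%02x error=0x%02x count=0x%02x\n",
               sense[21], sense[11], sense[13]);
    else
        printf("no ATA register data in the sense buffer\n");

    close(fd);
    return 0;
}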
 

pclab

Moderator
maximus":256huhgj said:
One last thing: try the following command to install the deb package and see what happens. You should be in the same folder as the deb package when running it, or specify the full path.
Code:
sudo dpkg -i hddsuperclone.free_1.12-1_amd64.deb
EDIT:
Some things I have been reading suggest that you should also run the following command after the one above. I am not sure why, since the first command seems to be enough to install it, but presumably it pulls in any dependencies that dpkg cannot resolve on its own.
Code:
sudo apt-get install -f
I was reading that there was a bug in Ubuntu 16.04 that would prevent installation of third-party software, but I am not sure if that is what is causing your issue. If so, then these commands should work.

I get an error when installing:

Code:
sudo dpkg -i hddsuperclone.free_1.12-1_amd64.deb
dpkg-split: error: error reading hddsuperclone.free_1.12-1_amd64.deb: Is a directory
dpkg: ../../src/unpack.c:123:deb_reassemble: internal error: unexpected exit status 2 from dpkg-split
Aborted (core dumped)

But don't think about it too much.
I will get another PC.
 

maximus

Member
There is not much out there about this error, but what there is points to an issue with dpkg in 16.04. You should still be able to perform the manual install with the following instructions.
Code:
To extract hddsuperclone, open a terminal and use the following
commands (replacing the -x.x-x.x.-xxx with proper version number and
architecture):
     gunzip hddsuperclone-x.x-x.x.-xxx-free.tar.gz
     tar xf hddsuperclone-x.x-x.x.-xxx-free.tar

   Then navigate to the proper directory:
     cd hddsuperclone-x.x-x.x.-xxx-free

   To install hddsuperclone, use the following command:
     sudo make install
 

maximus

Member
pclab":p15cupse said:
Don't bother, Maximus.
When needed, I will try on another PC.
Thanks
I am not bothering; I am just pointing out that you should still be able to install HDDSuperClone on that computer using the conventional method. I consider the issue closed, as it appears to be the fault of the OS having trouble installing the deb package.
 

Jared

Administrator
Staff member
I think it's actually an update issue. I had it happen only after I tried to install the newer (GUI) version over the top of the older version using the package installer. I was running Sparky Linux, which is a Debian variant. I couldn't get it to work even after fully uninstalling, deleting the folder, and reinstalling. No idea what caused it, but I discovered that I like Parrot Linux better anyway, so I just installed that to resolve it.

So for people downloading it now, it's probably a moot point.
 

maximus

Member
To get back on track, I have been experimenting with writing a Linux driver, and I have had enough success in testing so far to consider my next step possible (although not without issues that I have yet to deal with). I think I can make HDDSuperClone an interface for other programs. The idea is to create a block device that other programs, even the OS itself, can access as if they were reading the drive being recovered, but with all data transfers going through HDDSuperClone. Data that has not been read yet will be attempted from the source, failed data can be returned as an I/O error or zeroed out (by user choice), and data that has already been read will come from the clone/image. The possibilities are endless.
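
Here is a very rough sketch in C of the read dispatch idea (just an illustration with in-memory buffers standing in for real drives, nothing like the actual driver code):
Code:
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 512
#define NUM_BLOCKS 4

enum block_state { UNTRIED, RECOVERED, BAD };

/* toy stand-ins for the real source drive and the clone/image */
static unsigned char source[NUM_BLOCKS][BLOCK_SIZE];
static unsigned char clone[NUM_BLOCKS][BLOCK_SIZE];
static enum block_state state[NUM_BLOCKS];
static int zero_fill_bad = 1;   /* user choice: zeros vs. I/O error */

/* pretend block 2 is unreadable on the source */
static int read_from_source(uint64_t blk, void *buf)
{
    if (blk == 2)
        return -EIO;
    memcpy(buf, source[blk], BLOCK_SIZE);
    return 0;
}

/* one read request arriving at the virtual block device */
static int virtual_read(uint64_t blk, void *buf)
{
    switch (state[blk]) {
    case RECOVERED:
        /* already read once: serve from the clone, never hit the source again */
        memcpy(buf, clone[blk], BLOCK_SIZE);
        return 0;
    case UNTRIED:
        if (read_from_source(blk, buf) == 0) {
            memcpy(clone[blk], buf, BLOCK_SIZE);   /* save to the clone */
            state[blk] = RECOVERED;
            return 0;
        }
        state[blk] = BAD;
        /* fall through to the bad-block policy */
    case BAD:
        if (zero_fill_bad) {
            memset(buf, 0, BLOCK_SIZE);
            return 0;
        }
        return -EIO;
    }
    return -EIO;
}

int main(void)
{
    unsigned char buf[BLOCK_SIZE];
    memset(source, 0xAA, sizeof(source));
    for (uint64_t blk = 0; blk < NUM_BLOCKS; blk++)
        printf("block %llu -> %d\n", (unsigned long long)blk,
               virtual_read(blk, buf));
    return 0;
}
After the first pass, every block is either served from the clone or answered by the bad-block policy, so the source is never hit twice for the same data.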
 

maximus

Member
Given how long this thread is, and how slowly I have been getting this done, I wonder how many will actually read this post…

I finally have the driver in alpha stage, and it is working great! :D :D :D I can now produce a virtual block device that other tools can see and work with as if it were the actual drive, but all reads go through HDDSuperClone. Any data that has already been read from the source will come from the destination drive/image when requested, so data is never read more than once from the source. The virtual drive (and any partitions it may have) can even be seen and mounted by the Linux OS.

So as soon as I had it working, I jumped right into a real-world test. I have a 160GB WD 2.5” two-head drive that I recovered for a friend a few years ago. One of the heads is weak/damaged and produces many small errors, usually just single-sector errors. Originally the drive took 30 days to image with ddrescue, with something like 99.97% recovered and about 50,000 errors (I don’t remember the total error size). When I later tested with timeout settings, it took 6 days and had about 79,000 errors (165145 sectors total, 99.95% recovered). I actually wrote my own NTFS processing software that used ddrescue to recover the documents and pictures/videos from the drive before performing the original imaging, and it took less than half a day to recover those files (not counting the many months of programming :roll: ). Due to the nature of the drive failure, about 1/3 of the pictures had an error in them, which in some cases caused a noticeable flaw, and all of the videos had small errors (but still played as far as I could tell). The MFT also had errors, which made the processing more interesting.

Fast forward to now. Time to use R-Studio (version 3 standard for Linux, as that is the only license I have) through the new driver. I think it took around an hour or so to get the file system read as completely as possible, which was 390+MB. About half the time was the initial reading with no trimming or scraping, and the other half was letting HDDSuperClone finish processing those areas. There were a few bad spots a hundred or so sectors in size that took a while to process; I am not sure if they were in the MFT or in the folder data outside the MFT (I don’t remember any big MFT errors in the original recovery, plus the original MFT is only 325MB).

Something I am noticing about R-Studio: it seems to request only 4K (8 sectors) at a time. I think this is causing a slower read speed in my initial testing; larger reads would be faster. It also makes the algorithms less efficient: there is not much gained from trimming or dividing in this case, and it takes longer to process. It also does not appear to request anything smaller than that, which is an indication that it has probably opened the virtual disk with OS buffering enabled rather than direct I/O. That means even if it asks for a single sector, the OS will request 8 sectors from the driver, and the error resolution can never be finer than 8 sectors. It also means that scraping is not effective, as any odd sectors that are recovered are never transferred to R-Studio. This may be something I inquire about with R-Tools Technology in the future.
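
To show what I mean about buffered versus direct access (this is only my guess at what R-Studio is doing, nothing confirmed), here is a small example. The buffered read goes through the page cache, so the driver sees at least a full 4K request even for a 512-byte read; the O_DIRECT read can stay at a single 512-byte sector as long as the buffer, length, and offset are aligned.
Code:
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s /dev/device\n", argv[0]);
        return 1;
    }

    /* Buffered: goes through the page cache, so the kernel reads at least a
       full 4 KiB page from the underlying driver even for a 512-byte request. */
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    char small[512];
    ssize_t n = pread(fd, small, sizeof(small), 0);
    printf("buffered 512-byte request returned %zd\n", n);
    close(fd);

    /* Direct: bypasses the page cache. Buffer, length and offset must be
       aligned to the device's logical sector size, but a single 512-byte
       sector can then be requested on its own. */
    fd = open(argv[1], O_RDONLY | O_DIRECT);
    void *aligned = NULL;
    if (posix_memalign(&aligned, 512, 512) == 0) {
        n = pread(fd, aligned, 512, 0);
        printf("O_DIRECT 512-byte request returned %zd\n", n);
        free(aligned);
    }
    close(fd);
    return 0;
}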

So the first stage of data extraction of the documents and pictures/videos took about 2-1/2 hours. Because of the issues noted in the above paragraph, I am guessing that not much more data will be recovered by finishing processing the bad areas, and what more is recovered will likely never make it into the file recovery. But that does not stop me from grinding away at it. Partway through the grinding, the drive got stuck busy and quit responding. After some checking with a different drive, it turned out the SATA port was locked up busy. I am guessing the BIOS was to blame, as I don’t think the OS was doing it, but it is not possible to tell for sure. Either way, it required a computer reboot. As the grinding continued, I realized there are a few things that need to be tweaked for performance, and I made at least one change to the software. Anyway, after 2-1/2 hours of grinding it was done. It recovered 7.72GB, with a total of 7113 bad sectors within that data (including the MFT and other file system data). I did not bother logging how many files were bad. But now that the data has been cloned to an image file, it only takes a few minutes for R-Studio to do the same recovery against it. So I just recovered all the important files, in about 8 hours (including some trial and error), from a failing drive that would normally take several days to clone. I know any good data recovery professional would use their PC-3000 on a drive in this condition (and probably change the heads to get the rest of the data), and would only ever run the “easy” cases on something like HDDSuperClone. But for someone who can’t afford the big-dog tools, this is not bad performance using only software.

EDIT
I was going through the pictures and noticed that some looked more messed up than I remember from the original recovery, and many of the videos crash. I think this is because, while the errors are almost all a single sector in size, R-Studio turns each one into 8 bad sectors. I figured the recovery could be better, so I did the file recovery against the image file. I didn’t see a difference with the videos, but the bad pictures were noticeably better, although still messed up. I guess the trick to the best recovery is to run the recovery against the cloned image or disk after the process is done against the virtual drive.

So this is my answer to data extraction. Instead of trying to write my own half-assed, complicated code to attempt to process a file system (which is a big pain in the ass and would most likely be flawed), I can let other tools do the work. I believe someone said that a desired feature for HDDSuperClone would be data extraction; well, here is my way of dealing with that feature. You will be able to use whatever tool you want to process the file system. You could use commercial tools (Linux versions, of course) such as R-Studio, UFS-Explorer, and DMDE, or free tools such as TestDisk, or even the Linux OS itself (for easy cases). The other benefit is that if a drive was imaged with HDDSuperClone (or even ddrescue, since the log can be imported), the image can be presented as a real device that produces instant I/O errors for the bad sectors. That could be handy for finding which files have errors after cloning. Another possibility would be representing a drive with 512-byte logical sectors as a 4K-sector drive. This could come in handy for working directly with a hard drive that came out of one of those pesky USB enclosures that presented the drive as having 4K logical sectors. Yet another possibility is working with those pesky WD USB drives that develop the “slow” issue and would take an eternity to clone; if the data needed is small, it could potentially be recovered with this software.
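
The 512-to-4K idea is really just address arithmetic: one 4K logical sector on the virtual device maps to eight 512-byte sectors on the bare drive. A trivial sketch (my illustration only, not program code):
Code:
#include <stdint.h>
#include <stdio.h>

#define SECTORS_PER_4K 8    /* 4096 / 512 */

/* translate a request on the 4K virtual device to the 512-byte source drive */
static void map_4k_to_512(uint64_t lba_4k, uint32_t count_4k,
                          uint64_t *lba_512, uint32_t *count_512)
{
    *lba_512   = lba_4k * SECTORS_PER_4K;
    *count_512 = count_4k * SECTORS_PER_4K;
}

int main(void)
{
    uint64_t lba;
    uint32_t count;
    /* one 4K sector at virtual LBA 1000 = eight 512-byte sectors at LBA 8000 */
    map_4k_to_512(1000, 1, &lba, &count);
    printf("4K LBA 1000 -> 512-byte LBA %llu, %u sectors\n",
           (unsigned long long)lba, count);
    return 0;
}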

I still have more work to do to get this ready, but I have just taken a big step with the last major thing I wanted to get done before producing the pro version. I think I can actually see the light at the end of the tunnel now (although it is still a ways away, I can finally see it!).

(Whew! Is this a long enough post? Time for a beer :mrgreen: )
 

pclab

Moderator
As usual: good work. I can't imagine the amount of time you have put into this.

As for the data extraction feature, I think you are right: leave it to the other tools that already do that. You can then focus more on the algorithms for a better and faster imaging solution.
 