Advanced Data Recovery v2


Seminar Report 2009

Contents

1 Introduction
  1.1 Data Recovery - Definition
  1.2 Importance of Data Recovery
  1.3 Recovery from logical damage
    1.3.1 Consistency checking
    1.3.2 Data carving
  1.4 Organization of the Report

2 Recovery from physical damage
  2.1 Physical damage - cause
  2.2 The part replacement
  2.3 Refreshing the system information
  2.4 Replacing the drive electronics

3 Recovery of overwritten data
  3.1 Wise drives
  3.2 Magnetic force microscopy
    3.2.1 MFM Components
    3.2.2 Scanning procedure
  3.3 Scanning tunneling microscopy
    3.3.1 Procedure
  3.4 Extraction of data from magnetic media

4 Conclusion
  4.1 Challenges
  4.2 Future advances

References


Chapter 1
Introduction

1.2 Importance of Data Recovery
Data loss or impairment has become very common, caused by internal faults (software or hardware failures) or external ones (operator error and environmental damage). This often poses the grave problem of losing the results of long and hard work undertaken to achieve a specific task: data that cost years of effort may be lost in a flash through a single mistake. We come across such painful experiences all too often, and the increasing haste and pace of life, which leads to accidental deletion of valuable data, adds to the agony. This reveals only one side of the importance of data recovery; the other side is its forensic importance. The difference in the forensic setting is that the data may not have been deleted accidentally, and this changes the recovery mode as well: here recovery is harder, because the deletion was performed with the intention that the data should never be recovered.

These situations are the circumstances that led to the need for recovering lost data. In cases of accidental loss of stored data, we need recovery software, and sometimes more than software that can perform the usual undeletion. Hence data recovery became important, irrespective of the file system used. In each file system, the recovery process depends on the type of file system and its features. Besides this, there are also drive-independent data recovery methods.

1.3 Recovery from logical damage

Two common techniques used to recover data from logical damage are consistency checking and data carving. While most logical damage can be either repaired or worked around using these two techniques, data recovery software can never guarantee that no data loss will occur. For instance, in the FAT file system, when two files claim to share the same allocation unit ("cross-linked"), data loss for one of the files is essentially guaranteed.

1.3.1 Consistency checking

The first, consistency checking, involves scanning the logical structure of the disk and checking to make sure that it is consistent with its specification. For instance, in most file systems, a directory must have at least two entries: a dot (.) entry that points to itself, and a dot-dot (..) entry that points to its parent. A file system repair program can read each directory and make sure that these entries exist and point to the correct directories. If they do not, an error message can be printed and the problem corrected. Both chkdsk and fsck work in this fashion.

This strategy suffers from two major problems. First, if the file system is sufficiently damaged, the consistency check can fail completely. In this case, the repair program may crash trying to deal with the mangled input, or it may not recognize the drive as having a valid file system at all. The second issue is the disregard for data files. If chkdsk finds a data file to be out of place or unexplainable, it may delete the file without asking. This is done so that the operating system may run more smoothly, but the files deleted are often important user files which cannot be replaced. Similar issues arise when using system restore disks (often provided with proprietary systems like Dell and Compaq), which restore the operating system by removing the previous installation. This problem can often be avoided by installing the operating system on a separate partition from your user data.
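
To make the dot and dot-dot check concrete, here is a minimal sketch in Python. It operates on a toy in-memory directory table rather than a real on-disk format, so the table layout and names are illustrative assumptions, not the internals of chkdsk or fsck:

    # Toy model: each directory is a dict mapping entry names to inode numbers.
    # A real checker (chkdsk, fsck) parses on-disk structures instead.

    def check_directory(inode, parent_inode, dirs, errors):
        """Verify the '.' and '..' entries of one directory, then recurse."""
        entries = dirs[inode]
        if entries.get(".") != inode:
            errors.append(f"inode {inode}: '.' does not point to itself")
        if entries.get("..") != parent_inode:
            errors.append(f"inode {inode}: '..' does not point to its parent")
        for name, child in entries.items():
            if name not in (".", "..") and child in dirs:  # child is a subdirectory
                check_directory(child, inode, dirs, errors)

    # Example: inode 2 is the root; by convention its '..' points to itself.
    dirs = {
        2: {".": 2, "..": 2, "home": 5},
        5: {".": 5, "..": 3},          # corrupted: '..' should be 2
    }
    errors = []
    check_directory(2, 2, dirs, errors)
    print(errors)   # ["inode 5: '..' does not point to its parent"]

A real checker differs mainly in how it obtains the entries: it parses the on-disk directory structures directly instead of trusting the running file system.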

1.3.2 Data carving

Data carving is a data recovery technique that allows data with no file system allocation information to be extracted by identifying the sectors and clusters belonging to the file. Data carving usually searches through raw sectors looking for specific desired file signatures. Because there is no allocation information, the investigator must specify a block size of data to carve out upon finding a matching file signature, or the carving software must infer it from other information on the media. The technique requires that the beginning of the file still be present, and (depending on how common the file signature is) it carries a risk of many false hits. Data carving, also known as file carving, has traditionally required that the files recovered be located in sequential sectors (rather than fragmented), as there is no allocation information to point to fragmented file portions. Recent developments in file carving algorithms have led to tools that can recover files that are fragmented into multiple pieces.
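
As an illustration of signature-based carving, here is a minimal sketch in Python, assuming the well-known JPEG start-of-image and end-of-image markers and an unfragmented file; the file names and the size cap are invented for the example:

    # Minimal sketch of header/footer file carving for JPEG images.
    # Real carvers handle many formats and, more recently, fragmentation.
    JPEG_HEADER = b"\xff\xd8\xff"   # start-of-image marker
    JPEG_FOOTER = b"\xff\xd9"       # end-of-image marker

    def carve_jpegs(image: bytes, max_size: int = 10_000_000):
        """Yield byte ranges that look like whole, unfragmented JPEG files."""
        pos = 0
        while True:
            start = image.find(JPEG_HEADER, pos)
            if start == -1:
                return
            end = image.find(JPEG_FOOTER, start, start + max_size)
            if end != -1:
                yield image[start:end + len(JPEG_FOOTER)]
            pos = start + 1   # keep scanning; headers may be false hits

    # Usage: carve from a raw image read outside any file system
    # ("disk.img" is a hypothetical raw dump of the media).
    with open("disk.img", "rb") as f:
        for i, blob in enumerate(carve_jpegs(f.read())):
            with open(f"carved_{i}.jpg", "wb") as out:
                out.write(blob)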
A good number of software tools now exist which can perform undeletion, to a great extent, even when data seems to be permanently deleted from the drive. These tools usually rely on the nature of file systems, which never actually delete data but only mark it as deleted until it is next overwritten; accordingly, the tools can recover the data only before it is overwritten. Such recovery tools are highly dependent on the file system type. Their general flow of action is:
• Use file journals or allocation tables to find traces of the file.
• Once the file is found in different blocks of the hard disk, use data carving tools to carve out the data.
• Either re-establish the file where it previously was and then copy it to a new location, or create a new copy of the file from the carved data.
• The method works well when the data has not been overwritten and some traces remain in the file system.
Examples of such recovery tools are 'Recover My Files' and 'Ultimate Recovery' on the Windows platform, and 'Ext3grep' and 'Recoverext3' on Linux.
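
A minimal sketch of the first step in that flow, assuming the FAT convention that deletion merely overwrites the first byte of a 32-byte directory entry with the marker 0xE5 while leaving the data clusters in place; the helper below is hypothetical, not taken from any of the tools named above:

    # Minimal sketch: find traces of deleted files in FAT-style directory
    # entries. A real tool would then follow the allocation chain to the data.
    ENTRY_SIZE = 32
    DELETED_MARK = 0xE5

    def find_deleted_entries(directory_region: bytes):
        """Yield (offset, name) pairs for entries flagged as deleted."""
        for off in range(0, len(directory_region) - ENTRY_SIZE + 1, ENTRY_SIZE):
            entry = directory_region[off:off + ENTRY_SIZE]
            if entry[0] == DELETED_MARK:
                # The first character of the 8.3 name is lost; '?' stands in.
                name = b"?" + entry[1:11]
                yield off, name.decode("ascii", errors="replace").strip()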
The main disadvantage of these tools is that they can recover data only when the drive is working properly and the data has not been overwritten. Forensic work, however, often requires recovering data from physically damaged drives, and even after the data has been overwritten, because physically damaging the drive and dumping junk data onto it are not especially difficult jobs for someone determined to destroy evidence.

1.4 Organization of the Report

1. Chapter 2 describes the importance and method of recovery when the drive is physically damaged.
2. Chapter 3 describes the importance and method of recovery when the data is overwritten.
3. Chapter 4 discusses the challenges and the scope for future enhancement in advanced data recovery methods.


Chapter 2
Recovery from physical damage
Hard drives are assembled in clean rooms (cleaner than surgical rooms) and then sealed. Hard drive platters spin at a rate of 4,200 to 10,000 rotations per minute. Opening the hard disk drive to inspect its contents, by anyone but properly trained personnel in a controlled environment, can lead to damage to the magnetic media. Damage can occur because the read/write heads move at a very close distance to the spinning hard drive platters [1]. As the platters spin, it is only a matter of time before the head comes into contact with dust or debris on the platter. At that point an impact occurs: the surface of the platter containing the magnetic media becomes damaged, and the data contained within that magnetic media is lost forever.

2.1 Physical damage - cause

Physical damage can be caused by various failures. Hard disk drives may suffer any of numerous mechanical failures, such as head-stack crashes; tapes can simply break. Physical damage always causes at least some data loss, and in many cases the logical structures of the file system are damaged as well. Most physical damage cannot be mended by end users [1]. For instance, opening a hard drive in a normal environment can allow airborne dust to settle on the platter and become caught between the platter and the read-write head, leading to new head crashes that further damage the platter and thus compromise the recovery procedure. End users usually don't have the hardware or the technical proficiency required to make these repairs. Here I discuss two techniques for recovering data from physically damaged drives:
1. Replacing or "refreshing" the system area information, and
2. Replacing the drive's electronics.

These two techniques are together called 'part replacement' methods.

2.2 The part replacement

Techniques for recovering data from a physically damaged hard disk can be described as part replacement [1-2], whereby printed circuit boards (PCBs) are swapped; heads are transplanted; motors and base castings are replaced by remounting the disks onto the spindle of a donor drive [1]; and firmware or system information is replaced or refreshed by rewriting it. Placing the disks in a donor drive swaps everything except the on-disk system information. Data stored on portions of the magnetic layer of the disk that have been physically removed, such as by a slider (head) scraping away the surface, cannot be recovered.

The ultimate part replacement operations are re-mounting disks onto new drives and transplanting headstacks. In these two extreme cases there are six difficult challenges to overcome for successful data recovery [1]:
1. Re-optimize preamp read settings.
2. Recalibrate repeatable run-out (RRO) and head offsets.
3. Control spindle rotation and head positioning, typically using the magnetic servo patterns on the disk surfaces.
4. Determine the layout and format of each surface, including defects and defect-mapping strategies.
5. Detect the binary data in the analog head signal.
6. Decode the precoding, scrambling, RLL, parity-assist ECC, and any other codes to reveal the user data.
The sectors or blocks created from the detected and decoded user bits must still be assembled into useful files. It is at this latter task where logical recoveries typically start. Interestingly, data forensic examinations can only begin after the physical and then the logical recoveries have been completed.

2.3 Refreshing the system information

Current state-of-the-art research for system area refreshing focuses on developing algorithms that can quickly and adequately re-optimize all important channel, preamp, and servo system parameters without rewriting over data [1]. This capability is needed both when the system area information is corrupted and when a headstack transplant is necessary.
The ’system information’[1] or the area where it is located, is also termed as
system area, mintenance tracks, negative cylinders, reserved cylinders, caliberation
area, initialization area and diskware. The system information includes the drive
specific hper-tuned parameters along with the normal characteristic parameters of
the hdd.The system area may become corrupted due to malfu nctioning circuits,
firmware bugs, exceeding the operational shock specifications of the drive, or position
system errors. Another, more common, reason for system area corruption is a loss of
power during an update of the system area itself. This might occur when systemlogs
are being updated or when th G-list is being changed. The G-list, or grown defect
list, holds information about the location of defects that have been found in the
field during drive operation. The G-list is typically used for sector swapping, or
sector reallocation. Related o this is the P-list, or primary defect list that stores the
location of media defects that were found during manufacturing. This is typically
used for sector slipping and is not udated in the field.
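
The following minimal sketch models the G-list idea with a hypothetical in-memory remap table; real defect lists live in vendor-specific system area structures:

    # Hypothetical model of G-list style sector reallocation: defective
    # logical blocks are redirected to spare sectors. The P-list (factory
    # defects) is instead handled by "slipping", i.e. shifting the
    # logical-to-physical map once, before the drive ships.
    class GrownDefectList:
        def __init__(self, spare_sectors):
            self.spares = list(spare_sectors)   # physical spare-sector addresses
            self.remap = {}                     # defective LBA -> spare sector

        def reallocate(self, lba):
            """Record a grown defect found in the field and assign a spare."""
            if lba not in self.remap:
                self.remap[lba] = self.spares.pop(0)
            return self.remap[lba]

        def resolve(self, lba):
            """Translate a logical block to its physical location."""
            return self.remap.get(lba, lba)

    glist = GrownDefectList(spare_sectors=[1_000_000, 1_000_001])
    glist.reallocate(4711)          # sector 4711 went bad during operation
    print(glist.resolve(4711))      # -> 1000000 (reads are redirected to the spare)

Losing power while this table is being rewritten on disk is exactly the "update of the system area" corruption scenario described above.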

Figure 2.1: The beginning of the identification sector (IDNT) of the system area for a 2.5-inch Hitachi drive. The left column is the offset index from the beginning of the sector; the next two wide columns contain the hexadecimal interpretation of the data stored there. The rightmost column shows the ASCII equivalent of the hex values, where an equivalent exists.
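
The layout described in Figure 2.1 is the classic offset/hex/ASCII dump. A minimal sketch that renders arbitrary sector bytes in that general form (the sample bytes are invented):

    # Minimal sketch: render raw sector bytes in the offset / hex / ASCII
    # layout of Figure 2.1, 16 bytes per row.
    def hexdump(data: bytes, width: int = 16) -> str:
        lines = []
        for off in range(0, len(data), width):
            row = data[off:off + width]
            hex_cols = " ".join(f"{b:02x}" for b in row)
            ascii_col = "".join(chr(b) if 32 <= b < 127 else "." for b in row)
            lines.append(f"{off:08x}  {hex_cols:<{width * 3}}  {ascii_col}")
        return "\n".join(lines)

    print(hexdump(b"IDNT\x00\x01Hitachi system area\xff\xfe"))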
Corrupted system area information can be rewritten as a form of part replacement. For some drive models, the system area contains only a small amount of information, such as a unique drive serial number, the P-list and G-list, S.M.A.R.T. data, and a drive password, possibly encrypted. A small amount of drive-specific detail indicates that the drive is more amenable to part replacement. Some drive models have larger system areas, which may span tens of tracks. This typically indicates that the drive employs hyper-tuning and hence is much less amenable to traditional part replacement, especially head transplantation and system area refresh. If the system information is rewritten from archival copies of the data from other, similar drives, the hyper-tuned parameters will not match those needed for the drive's original components. These head-specific parameters must be re-optimized and rewritten to the system area.

2.4 Replacing the drive electronics

Current state-of-the-art research for drive electronics replacement focuses on developing faster and more robust methods for determining the servo sector track ID and wedge ID and the data sector encodings. Additionally, timing, equalization, and detection methods are being advanced to recover data from the drives that are being built today and in the future. These are likely to employ iterative equalization and decoding, LDPC (low-density parity-check) codes, and new timing recovery schemes.

If it were available, a system area refresh that re-optimizes key parameters as needed for headstack transplants or disk removal would be the preferred method for drive-independent data recovery [1]. For example: many consecutive servo sectors could be corrupted, causing the drive to shut down or to constantly recalibrate; precise control of headstack placement may be necessary to bypass badly damaged portions of the disk; the commands to control the read channel parameters for re-optimization may not be known; it may be necessary to capture system area information from a drive that will not spin up or will not initialize; it may be necessary to read data from normally inaccessible locations (e.g., passwords in the system area, defective sectors, spare sectors); decaying bits or poorly written transitions may require additional off-line signal processing to be read adequately; and it may be necessary to bypass the normal drive boot-up sequence, especially if that sequence is written to initiate automatic encryption or destruction of data as a security measure. Extreme cases in which the disk is badly damaged and not flyable, such as those with which the defense, intelligence, and law enforcement communities may be involved, will not yield to recovery methods that involve drive hardware with specific servo signal, timing, zone layout, or frequency requirements.
For flyable media, the most cost-effective way to spin the disk is with its original motor and base casting or with those of a donor drive. All that is required is a standard HDD motor controller and related programming capability [2]. Once a compatible headstack is in place and the disks are spinning, the signal from the preamp needs to be acquired and used: first for servo positioning and then for data detection. To acquire a good signal, the read bias currents must be approximated for each head.

Figure 2.2: The dull ring near the middle diameter of this spinning disk is the result
of a head crash. The headstack shown is a new replacement from a donor drive. The
flex circuit at the top of the picture is connected to the old damaged headstack. The
new headstack’s flex circuit (lower right) is connected to circuitry that replaces the
drive’s electronics.


Chapter 3
Recovery of overwritten data
Many computer users are still unaware of the most important and interesting feature of our most common storage media, the magnetic storage media: their capability to remember anything ever written to them until they are completely destroyed by degaussing under a strong magnetic field. Magnetic hard drives are used as the primary storage device for a wide range of applications, including desktop, mobile, and server systems. All magnetic disk drives possess this capability for data retention [5], but for the majority of computer users the hard disk drive has the longest lifespan of all magnetic media types, and therefore is most likely to hold large amounts of sensitive data.

In reality, magnetic media is simply any medium which uses a magnetic signal to store and retrieve information. Examples of magnetic media include floppy disks, hard drives, reel-to-reel tapes, eight-tracks, and many others [6-7]. The inherent similarity between all these forms of media is that they all use magnetic fields to store data. This process has been used for years, but now that security concerns are coming more into focus, we are starting to see some of the weaknesses of this technology, as well as its well-known benefits.

3.1 Wise drives

When data is written to the disc platter, it is stored in the form of ones and zeroes. This is due to the binary nature of computers: the data in question is either on (1) or off (0). This is represented on the disk by storing either a charge (1) or no charge (0). The data is written to the actual disc platter in what are called tracks: concentric rings on the platter itself, somewhat similar to the annual rings of a tree. As data is written to these rings, the head writes either a charge (1) or no charge (0). In reality, since this is an analog medium, the disc's charge will never be exactly at a 1 or 0 potential, but perhaps a 1.06 when a one is written on top of an existing 1, and perhaps a 0.96 when an existing 0 is overwritten with a 1. The main idea to grasp is that the charge will never be exactly 1 or 0 on the disc itself; it will differ, due to the properties of the magnetic coating on the disc [6]. In this way, data is written to the tracks of the disc, and each time data is written, it is not written to exactly the same location on the disc.
Exploiting the slight difference in the stored charge from one write to the next, and the nature of analog signals themselves, it is possible for the original data on a hard drive to be recovered even after it has been overwritten with other, newer data. How is this possible? The data can actually be detected by reading the charges between the tracks on the disc itself. Software can also be used to calculate what an ideal signal should be; then, by subtracting what was actually read from the disk from this ideal, the software can yield the original data. Of course, no affordable digital storage medium is completely reliable over long periods of time, since the medium may degrade. However, most magnetic disks can hold charges and residual data for several years, if not decades. This is more than enough time for the data to potentially be viewed by an unauthorized individual, organization, or both [5-7].
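
A toy numeric sketch of that subtraction idea, using invented signal levels that echo the 1.06/0.96 example above; real recovery works on raw analog head signals, not on values a drive will report through its normal interface:

    # Toy illustration of "ideal minus actual" residual analysis. The
    # measured level of each overwritten bit deviates slightly from the
    # ideal 0/1 level; the sign of the residual hints at the previously
    # stored bit. All numbers here are invented for illustration.
    new_bits = [1, 1, 0, 1, 0]
    measured = [1.06, 0.96, 0.05, 1.05, -0.04]   # analog read-back levels

    previous_bits = []
    for bit, level in zip(new_bits, measured):
        residual = level - bit          # subtract the ideal signal
        previous_bits.append(1 if residual > 0 else 0)

    print(previous_bits)   # plausible prior contents: [1, 0, 1, 1, 0]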
Some common methods used to gather data from drives that might hold information important to investigations include magnetic force microscopy (MFM) and scanning tunneling microscopy (STM) [7]. Other methods and variations exist, but they are either classified by governmental intelligence agencies or not yet widely used. We will deal with MFM and STM.

3.2 Magnetic force microscopy

MFM is a fairly recent method for imaging magnetic patterns with high resolution that requires hardly any sample preparation [7]. This method uses a sharp magnetic tip attached to a flexible cantilever placed close to the surface of the disc, where it picks up the stray field of the disc. An image of the field at the surface is formed by moving this tip across the surface of the disc and measuring the force (or force gradient) as a function of position. The strength of this interaction is measured by monitoring the position of the cantilever using an optical interferometer or tunneling sensor. In this way, data can be extracted from a drive. The fact that magnetic media retains residual charges from previous data even after being wiped or overwritten several times makes complete data destruction next to impossible [5].

The magnetic force microscope derives from the atomic force microscope (AFM). Unlike a typical AFM, a magnetized tip is used to study magnetic materials, and thus the tip-sample magnetic interactions are detected. Many kinds of magnetic interactions are measured by MFM, including the magnetic dipolar interaction. MFM scanning often uses non-contact AFM (NC-AFM).

Figure 3.1: Magnetic force microscopy, scanning.

3.2.1 MFM Components

1. Single piezo tube:
• Moves the sample in the x, y, and z directions.
• Voltage is applied to separate electrodes for the different directions; typically, a 1 V potential results in 1 to 10 nm of displacement.
• The image is put together by slowly scanning the sample surface in a raster fashion.
• Scan areas range around 200 micrometers.
• Imaging times range from a few minutes to about 30 minutes.
• Restoring force constants (k) of the cantilever range from 0.01 to 100 N/m, depending on the material used to make the cantilever.
2. Magnetized tip at one end of a flexible lever (cantilever), generally an AFM probe with a magnetic coating:
• In the past, tips were made of etched magnetic wires, such as nickel.
• Now, tip-cantilever assemblies are batch fabricated using a combination of micromachining and photolithography. As a result, smaller tips are possible, and better mechanical control of the tip-cantilever is obtained.
• The cantilever can be made of single-crystalline silicon, silicon dioxide, or silicon nitride.
• Tips are coated with a thin (at most 50 nm) magnetic film (such as Ni or Co), usually of high coercivity, so that the tip's magnetic state (magnetization M) does not change during imaging.
• The tip-cantilever is driven close to its resonance frequency by a piezo bimorph, with typical frequencies ranging from 10 kHz to 1 MHz.
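
The quantity imaged in this dynamic mode is the shift of the cantilever's resonance frequency caused by the vertical force gradient. The source does not give the relation, but the standard small-gradient approximation used to interpret MFM contrast is

\[
  \Delta f \;\approx\; -\frac{f_0}{2k}\,\frac{\partial F_z}{\partial z},
\]

where \(f_0\) is the free resonance frequency and \(k\) the cantilever's force constant. An attractive interaction (positive gradient) lowers the resonance frequency; with \(k\) between 0.01 and 100 N/m and \(f_0\) between 10 kHz and 1 MHz, as listed above, even small stray-field gradients produce a measurable shift.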

3.2.2 Scanning procedure

The scanning method used with an MFM is called the "lift height" method. When the tip scans the surface of a sample at close distances (< 100 nm), not only magnetic forces are sensed, but also atomic and electrostatic forces. The lift height method helps to enhance the magnetic contrast as follows:
• First, the topographic profile of each scan line is measured; that is, the tip is brought into close proximity of the sample to take AFM measurements.
• The magnetized tip is then lifted further away from the sample.
• On the second pass, the magnetic signal is extracted.

Figure 3.2: Two-pass method for MFM: on the first pass the topography is obtained, while on the second pass the magnetic structure is imaged.

3.3 Scanning tunneling microscopy

A scanning tunneling microscope (STM) is a powerful instrument for imaging surfaces at the atomic level. Its development in 1981 earned its inventors, Gerd Binnig and Heinrich Rohrer, the Nobel Prize in Physics in 1986. For an STM, good resolution is considered to be 0.1 nm lateral resolution and 0.01 nm depth resolution [3]. With this resolution, individual atoms within materials are routinely imaged and manipulated. The STM can be used not only in ultra-high vacuum but also in air, water, and various other liquid or gas ambients, and at temperatures ranging from near zero kelvin to a few hundred degrees Celsius.

The STM is based on the concept of quantum tunnelling. When a conducting tip is brought very near to the surface to be examined, a bias (voltage difference) applied between the two can allow electrons to tunnel through the vacuum between them. The resulting tunneling current is a function of tip position, applied voltage, and the local density of states (LDOS) of the sample. Information is acquired by monitoring the current as the tip's position scans across the surface, and is usually displayed in image form. STM can be a challenging technique, as it can require extremely clean and stable surfaces, sharp tips, excellent vibration control, and sophisticated electronics.
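
The exponential sensitivity that makes this work can be summarized by the standard one-dimensional tunneling estimate (textbook STM theory, not stated in the source): for a small bias \(V\) and tip-sample separation \(W\),

\[
  I \;\propto\; V e^{-2\kappa W},
  \qquad
  \kappa = \frac{\sqrt{2m\phi}}{\hbar},
\]

where \(m\) is the electron mass and \(\phi\) the effective barrier height (work function). Because the current decays exponentially with \(W\), a change in separation of about 0.1 nm changes the current by roughly an order of magnitude, which is the origin of the STM's extreme depth resolution.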

3.3.1 Procedure

First, a voltage bias is applied and the tip is brought close to the sample by some coarse sample-to-tip control, which is turned off when the tip and sample are sufficiently close [4]. At close range, fine control of the tip in all three dimensions is typically piezoelectric, maintaining a tip-sample separation W typically in the 4-7 Å range, which is the equilibrium position between attractive and repulsive interactions [4]. In this situation, the voltage bias will cause electrons to tunnel between the tip and sample, creating a current that can be measured. Once tunneling is established, the tip's bias and position with respect to the sample can be varied (with the details of this variation depending on the experiment) and data is obtained from the resulting changes in current.

If the tip is moved across the sample in the x-y plane, the changes in surface height and density of states cause changes in current. These changes are mapped into images. The change in current with respect to position can be measured itself, or the height z of the tip corresponding to a constant current can be measured. These two modes are called constant height mode and constant current mode, respectively. In constant current mode, feedback electronics adjust the height by applying a voltage to the piezoelectric height control mechanism [5]. The image then comes from the tip topography across the sample and represents a surface of constant charge density, meaning that contrast in the image is due to variations in charge density [7].
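
A toy simulation of the constant current mode feedback loop, with an invented exponential current model and a simple proportional controller; real instruments use carefully tuned analog or digital control loops:

    import math

    # Toy constant-current STM feedback loop. The tunneling current decays
    # exponentially with the tip-sample gap, so the controller adjusts the
    # tip height z to hold the current at a setpoint while scanning in x;
    # the recorded z(x) then traces the surface topography. All numbers
    # are invented for illustration.
    KAPPA = 1.0                   # decay constant (per angstrom, order of magnitude)
    I0, SETPOINT = 100.0, 1.0     # current prefactor and target, nA

    def surface(x):               # hypothetical sample topography (angstroms)
        return 0.5 * math.sin(x)

    def tunneling_current(gap):   # toy exponential current model
        return I0 * math.exp(-2 * KAPPA * gap)

    z, trace = 5.0, []
    for step in range(100):
        x = step * 0.1
        i = tunneling_current(z - surface(x))
        # Move the tip up when the current is too high (tip too close),
        # down when too low; the 0.5 gain gives a stable, damped response.
        z += 0.5 * math.log(i / SETPOINT) / (2 * KAPPA)
        trace.append((x, z))      # z(x) approximates surface(x) plus a constant

    print(trace[-1])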

3.4 Extraction of data from magnetic media

The remnant magnetization is detected and analysed through the above techniques, which also generate analog signals corresponding to that remnant magnetization. What remains is carving the data from the analog signals. A number of good software tools are now available that can carve data from these outputs, and by analysing this data, deleted data at a particular level of deletion can be extracted. That is, there are different levels of data extraction: consider data that was just deleted to be at the lowest level, with the level increasing the further in the past the data was deleted [5]. Recovery becomes more difficult and less accurate as the level increases, but data remains available even at much higher levels; that is, better extraction methods and sharper magnetic microscopy can recover data that was deleted a considerable time ago.


Chapter 4
Conclusion
The recovery of data from logically and/or physically damaged disk drives, and the recovery of overwritten data, is now being done with a good amount of success. As far as logical damage is concerned, data recovery has become a handy tool for end users. Although recovery from physically damaged drives and recovery of overwritten data, done by the magnetic data recovery methods, have yet to reach end users, the data recovery industry has grown through heights of technology to the point that data can be recovered from any physically damaged drive as long as its magnetic platters remain intact [5]. In the case of magnetic recovery too, the present state of the art has contributed greatly to the data recovery industry: magnetic recovery has been reported to recover data that had been overwritten up to 17 times. In short, through part replacement, the recovery of data from physically damaged drives has become easy; and with the use of magnetic force microscopy and scanning tunneling microscopy, the magnetic recovery of overwritten data has also become possible to a great extent [4-7].

4.1 Challenges

Data recovery using part replacement and magnetic recovery methods is now implemented in robust ways; hence the remaining challenges, or the areas where improvements have to be made, mostly concern the efficiency of the steps in the recovery procedure. The challenges are:
• Data can be recovered only if the magnetic platter is not damaged. Although there is research into improving the part replacement methods, there is no active research aimed at overcoming this challenge.
• Recovery is highly complicated for certain ultra hyper-tuned hard disks which have highly customized system areas. Active research is under way to overcome this challenge, and manufacturers have also now started designing drives to be amenable to recovery.
• The part replacement methods and magnetic recovery are usually high-cost.
• Both classes of recovery need a highly sophisticated laboratory setup.
• The strength of the magnetic fields has to be increased to recover data that was deleted long ago; and even when data that has been overwritten many times is recovered, the method does not guarantee that the recovered data is correct.
• Magnetic recovery with present-day technology is not capable of recovering data when the disk has been degaussed under stronger magnetic fields. Degaussing results in the permanent destruction of the drive and itself needs a strong magnetic field; here too there is no active research going on to tackle this challenge.

4.2 Future advances

The data recovery industry is looking forward to attaining a good number of goals in the near future:
• Highly drive-independent part replacement methods, which contain provision for easy tuning of hyper-tuned parameters. Research is under way in this area, and a good degree of drive independence has already been attained.
• Improvements in algorithms that can predict the data in highly unrecoverable sectors, thus overcoming failures to recover data caused by otherwise ignorable bad sectors.
• Improvement in the strength of the magnetic field used in magnetic force microscopy and scanning tunneling microscopy.
• Improvement in algorithms that can extract data which has been overwritten more times. Although the present algorithms can extract data to a great extent, improved algorithms could use the results of MFM and STM more efficiently.


References

[1] Charles H. Sobey, Laslo Orto, and Glenn Sakaguchi, "Drive-Independent Data Recovery: The Current State-of-the-Art", IEEE Transactions on Magnetics, vol. 42, February 2006.
[2] Peter F. Bennison and Philip J. Lasher, "Data security issues relating to end of life equipment", IEEE International Symposium on Electronics and the Environment, May 10-13, 2004.
[3] Lorrie Faith Cranor and Matthew Geiger, "Counter-Forensic Privacy Tools: A Forensic Evaluation", February 1, 2006.
[4] Commonwealth of Australia, "Protecting and handling magnetic media", January 31, 2006.
[5] S. L. Garfinkel and A. Shelat, "Remembrance of Data Passed: A Study of Disk Sanitization", IEEE Security & Privacy, February 2003.
[6] Joshua J. Sawyer, East Carolina University, "Magnetic Data Recovery: The Hidden Threat", infosecwriters, December 2006.
[7] L. Gao, L. P. Yue, T. Yokota, et al., "Focused Ion Beam Milled CoPt Magnetic Force Microscopy Tips for High Resolution Domain Images", IEEE Transactions on Magnetics, vol. 40, 2004.
