Types of Transformation
2.3.1 Dimensionality Transformations

The process of registration involves computation of a transformation between the coordinate systems of the images, or between an image and physical space. We are three-dimensional beings who move, so in principle registration should be four-dimensional. In practice, we usually make some approximations and assumptions so that the body can be represented with fewer dimensions.

2.3.1.1 2D-to-2D

If the geometry of image acquisition is tightly controlled, 2D images may be registered purely via a rotation and two orthogonal translations. It may also be necessary to correct for differences in scaling from the real object to each of the images. Computationally straightforward, clinically relevant examples of this are rare, however, as controlling the geometry of image acquisition is usually very difficult. One example is the registration of x-ray radiographs of the hand with 99mTc methyl-diphosphonate planar nuclear medicine images for the diagnosis of suspected scaphoid injury.1 Color Figure 2.2* shows a nuclear medicine image overlaid in color on the radiograph, confirming a scaphoid fracture. In this example, a purpose-built holding device constrained the hand to be in identical positions in the two images.
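A 2D rigid transformation of this kind is just one rotation plus two orthogonal translations. As a minimal illustrative sketch (ours, not from the text), applying such a transformation to image coordinates with NumPy:

```python
import numpy as np

def rigid_2d(points, theta, tx, ty):
    """Apply a 2D rigid-body transformation to an (N, 2) array of points:
    rotate by theta radians about the origin, then translate by (tx, ty)."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s],
                  [s,  c]])          # 2D rotation matrix
    return points @ R.T + np.array([tx, ty])

# Example: rotate a point 90 degrees, then shift it 5 units along x.
p = np.array([[1.0, 0.0]])
print(rigid_2d(p, np.pi / 2, 5.0, 0.0))  # -> [[5. 1.]]
```

Any scaling correction between the object and each image, as mentioned above, would add a single scale factor in front of the rotation.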

2.3.1.2 3D-to-3D

Of more widespread applicability is the accurate registration of multiple 3D images, such as MR and CT volumes. The assumption is usually made that the internal anatomy of the patient has not distorted or changed in the spatial relationships between organs, so that the imaged part of the body behaves as a "rigid body." In this case, three translations and three rotations will bring the images into registration. Careful calibration of each scanning device is required to determine image scaling, i.e., the size of the voxels in each modality. 3D-to-3D registration is the best developed and most widely used class of method, and is the primary emphasis of this book.

2.3.1.3 2D-to-3D

2D-to-3D registration may be required when establishing correspondence between 3D volumes and projection images such as x-ray or optical images. Another class of 2D-to-3D registration arises when the positions of one or more slices from tracked B-mode ultrasound, interventional CT, or interventional MRI are to be established relative to a 3D volume. The main application of these methods is in image-guided interventions, as described in more detail in Chapter 12.

2.3.1.4 Time

Another class of registration problem concerns registration of image sequences that follow some process changing with time. An obvious example is imaging of the heart, where images are acquired in synchrony with the heartbeat, monitored by the ECG or blood pressure waveform. Synchronized or "gated" acquisitions allow averaging of images over multiple cardiac cycles to reduce image noise in nuclear medicine and MR imaging. In a similar way, temporal registration of x-ray images of the heart before and after injection of contrast material allows synchronous subtraction of mask images. All these methods assume that the heart cycle does not change from beat to beat. The same principle can be applied to images acquired at different stages of the breathing cycle, although the breathing cycle is less reproducible and registration errors will therefore be greater. Acquisition of images over time and subsequent registration can also be used to study dynamic processes such as tissue perfusion.

2.4 Image Registration Algorithms

Registration algorithms compute image transformations that establish correspondence between points or regions within images, or between physical space and images. This section briefly introduces some of these methods. Broadly, they divide into algorithms that use corresponding points, algorithms that use corresponding surfaces, and algorithms that operate directly on the image intensities.

2.4.1 Corresponding Landmark-Based Registration

One of the most intuitively obvious registration procedures is based on identification of corresponding point landmarks or "fiducial markers" in the two images. For a rigid structure, identification and location of three landmarks is sufficient to establish the transformation between two 3D image volumes, provided the fiducial points are not all in a straight line. In practice it is usual to use more than three: the larger the number of points used, the more any errors in marking the points are averaged out. The algorithm for calculating the transformation is well known and straightforward. It involves first computing the average or "centroid" of each set of points. The difference between the centroids in 3D tells us the translation that must be applied to one set of points. This point set is then rotated about its new centroid until the sum of the squared distances between each corresponding point pair is minimized. The square root of the mean of this squared distance is often recorded by the registration algorithm; it is also referred to as the root mean square (RMS) error, residual error, or fiducial registration error (FRE). The mathematical solution for calculating this transformation has been known for many years,4 and is known as the solution to the orthogonal Procrustes problem, after the unpleasant practice of Procrustes, a robber in Greek mythology, of fitting his guests with extreme prejudice to a bed of the wrong size. The mathematics of the solution are provided in Chapter 3, together with the full story of the fate of Procrustes.

Many commercial image registration packages and image-guided surgery systems quote the FRE. Although this can be useful as a quick check for gross errors in correspondence, FRE is not a direct measure of the accuracy with which features of interest in the images are aligned. Indeed, it can be misleading, as changing the positions of the registration landmarks in order to reduce FRE can actually increase the error in correspondence between other structures in the images. A more meaningful measure of registration error is the accuracy with which a point of interest (such as a surgical target) in the two images can be aligned. This error is normally position-dependent in the image, and is called the target registration error (TRE). In practical terms, TRE, and how it varies over the field of view, is the most important parameter determining image registration quality. Fitzpatrick has derived a formula to predict TRE based on corresponding point identification. The formula computes TRE from the distribution of the fiducial points and an estimate of the error in identifying correspondence at each point, the fiducial localization error (FLE). This formula has been verified by computer simulation and predicts experimental results accurately (see Chapter 3).

The point landmarks may be pins or markers fixed to the patient and visible on each scan. These may be attached to the skin or screwed into bone. The latter can provide very accurate registration, but are more invasive and cause some discomfort and a small risk of infection or damage to underlying tissue.
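Returning to the point-based algorithm described above: the centroid-plus-rotation solution to the orthogonal Procrustes problem is commonly computed with a singular value decomposition. The following is an illustrative NumPy sketch (not the Chapter 3 derivation), together with the FRE residual the text describes:

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid transform (R, t) mapping (N, 3) points P onto Q:
    translate centroids together, then find the rotation minimizing the
    sum of squared distances (SVD solution to the Procrustes problem)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)      # centroids of each point set
    H = (P - cP).T @ (Q - cQ)                    # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def fre(P, Q, R, t):
    """Fiducial registration error: RMS residual distance after alignment."""
    return np.sqrt(np.mean(np.sum((P @ R.T + t - Q) ** 2, axis=1)))
```

For noise-free corresponding points the recovered transform is exact and FRE is zero; with real localization noise, FRE approaches the FLE level, but, as noted above, a small FRE does not by itself guarantee a small TRE.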
In contrast to bone-implanted markers, skin markers can easily move by several millimeters due to the mobility of the skin, and are difficult to attach firmly. Care must be taken to ensure that the coordinates of each marker are computed accurately and that the coordinate computed in each modality corresponds to the same point in physical space. Subvoxel precision is possible, for example, by using the intersection of two tubes containing contrast material visible in each modality, the apex of a "V,"7 or the center of gravity of spherical or cylindrical markers with a volume much larger than the voxel size.8 Markers like these can be identified automatically in the images. Each of these systems was also designed so that the corresponding point in physical space could be accurately located. They are used widely in image-guided surgery, as described in Chapter 12.

Alternatively, corresponding internal anatomical landmarks may be identified by hand on each image. These must correspond to truly point-like anatomical landmarks at the resolution of the images (such as the apical turn of the cochlea), to structures in which points can be unambiguously defined (such as bifurcations of blood vessels or the centers of the orbits of the eyes), or to surface curvature features that are well defined in 3D. Several methods have been reported that register clinical images using corresponding anatomical landmarks identified interactively by a skilled user.9–11 Assuming all markers are identified with the same accuracy, registration error as measured by TRE can be reduced by increasing the number of fiducial markers: if the error in landmark identification (the FLE) is randomly distributed about the true landmark position, TRE reduces as the square root of the number of points identified, for a given spatial distribution of points.
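This square-root-of-N behavior falls directly out of Fitzpatrick's prediction. Stated here from the standard published result (the text defers the derivation to Chapter 3), the expected TRE at a target position depends on the FLE, the number of fiducials N, and the target's position relative to the principal axes of the fiducial configuration:

```latex
\mathrm{TRE}^2(\mathbf{r}) \approx \frac{\langle \mathrm{FLE}^2 \rangle}{N}
\left( 1 + \frac{1}{3} \sum_{k=1}^{3} \frac{d_k^2}{f_k^2} \right)
```

where d_k is the distance of the target r from principal axis k of the fiducial configuration and f_k is the RMS distance of the fiducials from that axis. The 1/N factor gives the square-root improvement with more fiducials, and the bracketed term shows why TRE grows away from the fiducial centroid, toward the periphery.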
TRE values of about 2 mm at the center, rising to about 4 mm at the periphery, are to be expected when registering MR and PET images of the head using 12 anatomical landmarks well distributed over the image volume.5,6 For registering MR and CT images, including the skull base, typical misregistration errors (TRE values) will be about 1 mm at the center, rising to about 2 mm at the periphery, for 12 to 16 landmarks.11 Finding these landmarks automatically and reliably is difficult and remains a research issue. Figure 2.5 shows an example of aligned and combined CT and MR volumes of a patient with a large acoustic neuroma extending into the internal auditory meatus. These images are useful for planning skull base surgery.10 Figure 2.6 depicts aligned MR and PET images of the head, showing that a suspicious bright region seen on contrast-enhanced MR does not correspond to a region of high uptake of 18FDG (2-[18F]-fluoro-2-deoxy-D-glucose) on PET and is therefore unlikely to represent recurrent tumor.

FIGURE 2.5 Slice (bottom) through a 3D volume formed by aligning and combining CT (top left) and MR (top right) volumes. CT intensity is displayed where it corresponds to bone; otherwise the MR intensity is shown. This type of display has been useful in planning skull base surgery.

Figure 2.7 shows a sequence of CT axial slices taken through the pelvis of a patient who had received previous radiotherapy for cervical carcinoma, with the corresponding aligned 18FDG PET images overlaid in pale green. The images clearly show increased uptake in the denser mass seen on CT. This is likely to represent recurrent tumor rather than radiation-induced fibrotic change, and this was confirmed at surgery. Figure 2.5 and Color Figures 2.6 and 2.7* were aligned using manually identified landmarks, assuming that the part of the patient imaged could be represented as a rigid body. This process has now been almost completely replaced by the fully automated registration methods based on voxel similarity described in Section 2.4.3.

2.4.4 2D-3D Registration

Registration of x-ray or video images to a 3D volume image involves establishing the pose of the x-ray or video image in relation to a previously acquired CT or MR volume. This has potential applications in image-guided interventions in the spine, pelvis, or head, and in endoscopic or microscopic surgery. More details of potential applications are provided in Chapter 12. The two main classes of methods for 2D-3D registration are feature-based and direct intensity-based. In feature-based methods of x-ray image alignment, silhouettes of bony structures are delineated in the x-ray image, and the algorithm aligns the projections of these silhouettes with the surface of the same structure delineated from a CT (or MR) volume. One algorithm makes use of geometric properties of tangent lines of projected silhouettes and tangent planes of 3D surfaces.32 This type of algorithm is fast, but is highly dependent on the integrity of the segmentation in both images. An alternative method is to match the pixel and voxel intensities directly.
This method is based on digitally reconstructed radiographs (DRRs), first proposed for stereotactic neurosurgical applications.18 DRRs are computed by integrating (summing intensities) along rays through the CT volume to simulate the process of perspective projection in x-ray imaging. New DRRs can readily be calculated for trial poses and then compared with the true x-ray image using an appropriate measure of similarity.33,34 The confounding effect of soft tissue movement is minimized by removing the soft tissue from the CT image by intensity thresholding.

Video images may be used to register visible surfaces to MR or CT volumes. Again there are two classes of method for establishing registration: those based on matching a reconstructed surface and those that are intensity based. In the first, a pattern of light (lines, random dots, etc.) is projected onto the visible surface. Correspondence of the pattern in two calibrated video images is used to reconstruct the surface in 3D. This surface is then registered to the corresponding MR- or CT-derived surface using an appropriate surface registration method, such as the iterative closest point algorithm. Recently, alternative methods have been proposed that do not rely on the initial step of reconstructing a surface. In these methods the image intensities are used directly to match to the 3D surface derived from MR or CT. Viola has proposed using the mutual information between optical image intensities and the directions of the MR or CT surface normals.35 An alternative approach uses the observation that the intensities of a given point on a surface appear very similar, or "photoconsistent," when viewed under the same lighting conditions with two or more cameras.36 Trial registrations are iteratively tested until an appropriate photoconsistency criterion is satisfied.
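The DRR computation at the heart of the intensity-based 2D-3D approach above can be sketched very simply. The fragment below is illustrative only (real systems cast perspective rays through the calibrated x-ray geometry); it approximates a DRR as a parallel projection by thresholding away soft tissue and summing CT intensities along one axis:

```python
import numpy as np

def drr_parallel(ct, axis=0, bone_threshold=300.0):
    """Crude digitally reconstructed radiograph: suppress soft tissue
    below a Hounsfield-style threshold, then integrate intensities along
    one axis (a parallel-beam stand-in for true perspective projection)."""
    vol = np.where(ct >= bone_threshold, ct.astype(float), 0.0)
    return vol.sum(axis=axis)

# Toy volume: a bright "bone" block embedded in soft tissue.
ct = np.full((32, 32, 32), 40.0)    # soft tissue, ~40 HU
ct[10:20, 12:18, 12:18] = 1000.0    # bone block
drr = drr_parallel(ct, axis=0)
print(drr.shape)          # (32, 32)
print(drr[15, 15] > 0)    # True: this ray passes through bone
print(drr[0, 0] == 0.0)   # True: soft tissue has been suppressed
```

In an optimization loop, such a DRR would be regenerated for each trial pose and compared with the real radiograph via a similarity measure.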

2.6 Optimization
Only two regularly used algorithms calculate a transformation directly to establish correspondence. The first is the Procrustes method based on point correspondence, described earlier. The second applies when two images have very similar intensities and the transformation required to establish correspondence is small. In this case, the function relating one image to the other can be expanded as a series of terms, i.e., a Taylor series, and approximated by its first term, from which an approximate transformation can be calculated directly.40 These algorithms are described in more detail in Chapter 3.

In all other algorithms, a process of optimization is required. This means that the algorithm takes a series of guesses from an initial starting position. The starting position has to be sufficiently close for the algorithm to converge to the correct answer, i.e., it has to be within what is known as the algorithm's "capture range." This first guess can be set automatically or with a simple user interaction. The algorithm computes a number, known as the cost function or similarity function, relating to how well the two images are registered. Mutual information, the correlation coefficient, and the sum of squared intensity differences are all examples of cost functions. Some cost functions (e.g., the correlation coefficient) increase as the images come into alignment; others (e.g., the sum of squared intensity differences) decrease. The registration algorithm proceeds by taking another guess and recalculating the cost function. Progression towards an optimal registration is achieved by seeking transformations that increase (or decrease) the cost function until a maximum (or minimum) is found; the best registration that can be achieved is defined by this maximum (or minimum). The strategy for "optimization," i.e., for choosing subsequent alignment transformations, is an important subdiscipline within the area of computing known as numerical methods.
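The guess-evaluate-update loop just described can be sketched with a generic optimizer. This is our own toy illustration, not a method from the text: SciPy's Powell optimizer maximizes the correlation coefficient (by minimizing its negative) over a 2D translation, starting from a guess inside the capture range:

```python
import numpy as np
from scipy import ndimage, optimize

def correlation(a, b):
    """Correlation coefficient: a cost function that rises toward 1
    as the two images come into alignment."""
    a, b = a.ravel() - a.mean(), b.ravel() - b.mean()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def register_translation(fixed, moving, x0=(0.0, 0.0)):
    """Iteratively guess translations from starting position x0, re-evaluating
    the cost function each time, until the optimizer finds its optimum."""
    cost = lambda t: -correlation(fixed, ndimage.shift(moving, t, order=1))
    return optimize.minimize(cost, x0, method="Powell").x

# Toy images: the same Gaussian blob, displaced by (3, -2) pixels.
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((y - 30) ** 2 + (x - 30) ** 2) / 50.0)
moving = np.exp(-((y - 27) ** 2 + (x - 32) ** 2) / 50.0)
print(np.round(register_translation(fixed, moving), 1))  # approximately [ 3. -2.]
```

Starting the optimizer far outside the blob's overlap, i.e., outside the capture range, would let it stall on a flat or locally optimal region of the cost function, which is exactly why the initial guess matters.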
The next chapter contains a more detailed treatment of optimization of cost functions.

2.7 Transformation of Images
Registration algorithms are designed to establish correspondence. In many applications this is sufficient: all that is required is an indication of what point in one image corresponds to a particular point in the other. In some applications, however, we need to transform one image into the space of the other. This process requires resampling one image on the grid corresponding to the voxels or pixels of the other, which in turn requires interpolation. The accuracy with which this interpolation must be done depends on the motivation for registering the images in the first place. In most applications, simple nearest-neighbor or trilinear interpolation will suffice. In nearest-neighbor interpolation, as the name suggests, the location of each voxel in the transformed image is transformed back to the appropriate location in the original image and the nearest voxel value is copied into the transformed voxel. In trilinear interpolation, the linearly weighted average of the eight nearest voxels is taken. For the highest accuracy, sampling theory tells us that a sinc ((sin x)/x) weighting function applied to all voxels should be used.19 This is particularly important when studying very subtle changes in the intensities or sizes of structures in images taken over a period of time. Interpolation errors can easily exceed the original image noise and can swamp subtle changes that would otherwise be detectable in subtraction images. The mathematics of these issues are addressed in Chapter 3, with applications described in Chapter 7. Unfortunately, accurate sinc interpolation can be extremely time consuming even on very fast computers, and can therefore limit applicability. Recent innovations in this area, such as shear transformations, are making high-quality interpolation much faster.

Some consideration also needs to be given to the spatial resolution and pixel or voxel sizes of the two images. Transforming from a high-resolution modality such as CT or MR, with voxel sizes of perhaps 1 × 1 × 1 mm or finer, onto a voxel grid from, for example, PET, with a voxel size of 3 × 3 × 3 mm, will inevitably result in loss of information. On the other hand, transforming a PET image onto the grid of an MR or CT image will dramatically increase the memory required to store the PET image (by a factor of 27 in this example) unless some form of data compression is used. The choice of the final transformed image sampling grid will depend on the specific application.
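The difference between the interpolation schemes above is easiest to see in one dimension; this small self-contained sketch (ours, not the book's) contrasts nearest-neighbor copying with linear weighting, the 1D analogues of the nearest-neighbor and trilinear schemes used for volumes:

```python
import numpy as np

def resample(signal, positions, mode="linear"):
    """Resample a 1D signal at fractional sample positions using either
    nearest-neighbor copying or linear weighting of the two neighbors
    (the 1D analogues of nearest-neighbor and trilinear interpolation)."""
    idx = np.arange(len(signal))
    if mode == "nearest":
        nearest = np.clip(np.round(positions).astype(int), 0, len(signal) - 1)
        return signal[nearest]                  # copy the closest sample
    return np.interp(positions, idx, signal)    # weight the two neighbors

sig = np.array([0.0, 10.0, 20.0, 30.0])
pos = np.array([0.4, 1.25, 2.75])
print(resample(sig, pos, "nearest"))  # values 0, 10, 30
print(resample(sig, pos, "linear"))   # values 4, 12.5, 27.5
```

Nearest-neighbor resampling introduces step artifacts of up to half a sample spacing, while linear weighting smooths them; full sinc weighting, as the text notes, is more accurate still but far more expensive.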

2.8 Validation
Complex software has to be verified and validated. This is particularly important in medical applications, where erroneous results can risk a patient's health or even life. Verification is the process by which the software is shown to do what it is specified to do (e.g., maximize mutual information). The software industry has developed standards, protocols, and quality procedures for verification; this is an important topic, but beyond the scope of this book. Validation is the process whereby the software is shown to satisfy the needs of the application within accuracy and other performance criteria (e.g., register two images within a certain tolerance, within a certain processing time, and with less than a certain rate of failure). Validation of image registration algorithms will usually follow a sequence of measurements using computer-generated models (software phantoms), images of physical phantoms of accurately known construction and dimensions, and images of patients or volunteers. The process must demonstrate both high robustness and high accuracy. Robustness implies a very low failure rate and, if failure does occur, that the failure is communicated to the user. Assessment of accuracy requires knowledge of a "gold standard" or "ground truth" registration. This is difficult to achieve with clinical images, but several methods have recently been reported; these are described in more detail in Chapter 6.

Finally, with any new technology applied to medicine, we must evaluate whether there is a clear benefit to the patient and, if so, whether it is achieved in a cost-effective manner. This is the topic of health technology assessment. It is touched on in the application chapters but is largely beyond the scope of this book. It is clear that image registration is invaluable in neurosciences research and in the clinical application of image-guided surgery. Many other applications will undoubtedly be accepted as the technology matures.
