Graphics and Multimedia Unit 4


IT6501- GRAPHICS AND MULTIMEDIA
III YEAR / V SEMESTER
UNIT IV

PREPARED BY,
Mr.M.KARTHIKEYAN, M.E., AP / IT

VERIFIED BY

HOD

PRINCIPAL

CEO/CORRESPONDENT

SENGUNTHAR COLLEGE OF ENGINEERING, TIRUCHENGODE – 637 205.
DEPARTMENT OF INFORMATION TECHNOLOGY

UNIT IV

MULTIMEDIA FILE HANDLING
 Compression and decompression
 Data and file format standards
 Multimedia I/O technologies
 Digital voice and audio
 Video image and animation
 Full motion video
 Storage and retrieval technologies.


LIST OF IMPORTANT QUESTIONS
UNIT IV
MULTIMEDIA FILE HANDLING
PART – A
1. Compute the storage space in MB required to store a video of VGA resolution, true color and 30 fps, compressed at 30:1.
2. State the resolution of Facsimile, Document Images and Photographic Images?
3. What is the compression technique used in Facsimile and Document Images?
4. Explain about Voice Synthesis?
5. What is Isochronous Playback?
6. Explain about Full motion and Live video?
7. Write a brief note on Fractals.
8. Explain Fractal Compression?
9. Define Compression Efficiency?
10. What is Image Processing?
11. Explain Image Calibration?
12. What is Grayscale Normalization?
13. What is the need for Compression?
14. State the two types of Compression?
15. What is Lossy Compression?
16. What is Lossless Compression?
17. What are the advantages of Compression?
18. State the types of Lossy Compression?
19. State the types of Lossless Compression?
20. What is Codec?
21. State the Applications of WORM?
22. What is a Juke Box?
PART –B
1. Why is compression/decompression an essential module in multimedia file handling? Explain the various techniques. (16) (NOV/DEC 2012) (MAY/JUN 2007)
2. Explain in detail about the Overview of data and file format standards.
3. Elaborate on various multimedia I/O technologies. Give examples.(16)(MAY/JUN 2013)
4. In detail, explain the various technologies involved in the storage and retrieval of multimedia files comprising motion videos. Comment on the challenges in it. (16) (NOV/DEC 2012) Write a brief note on optical media and disk technologies. (08) (MAY/JUN 2012)
5. Draw and explain the TWAIN architecture.(08)(MAY/JUN 2012) Or What is TWAIN? Explain
the objective and architecture of TWAIN.(16)(APR/MAY 2010)


NOTES
UNIT IV
MULTIMEDIA FILE HANDLING
PART – A
1. Compute the storage space in MB required to store a video of VGA resolution, true
color and 30 fps, compressed at 30:1.
VGA resolution is 640×480 pixels and true color uses 3 bytes per pixel, so one uncompressed frame occupies 640×480×3 ≈ 0.92 MB and one second of video at 30 fps occupies about 27.65 MB. Compressed at 30:1, one second of video therefore needs roughly 0.92 MB (about 55 MB per minute of video).
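A minimal Python sketch of the same arithmetic (assuming decimal megabytes, 1 MB = 10^6 bytes):

    # Storage needed for VGA, true-color, 30 fps video, compressed 30:1.
    width, height = 640, 480            # VGA resolution
    bytes_per_pixel = 3                 # true color, 24 bits per pixel
    fps = 30
    compression_ratio = 30

    frame_bytes = width * height * bytes_per_pixel
    uncompressed_per_second = frame_bytes * fps
    compressed_per_second = uncompressed_per_second / compression_ratio

    print(f"One frame:            {frame_bytes / 1e6:.2f} MB")                   # ~0.92 MB
    print(f"Uncompressed, 1 s:    {uncompressed_per_second / 1e6:.2f} MB")       # ~27.65 MB
    print(f"Compressed 30:1, 1 s: {compressed_per_second / 1e6:.2f} MB")         # ~0.92 MB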
2. State the resolution of Facsimile, Document Images and Photographic Images?
• Facsimile - 100 to 200 dpi
• Document images - 300 dpi (dots/pixels per inch)
• Photographic images - 600 dpi

3. What is the compression technique used in Facsimile and Document Images?
• Facsimile - CCITT Group 3
• Document images - CCITT Group 4

4. Explain about Voice Synthesis?
This approach breaks down the message completely to a canonical form based on
phonetics. It is used for presenting the results of an action to the user in a synthesized voice.
It is used in Patient Monitoring System in a Surgical Theatre.
5. What is Isochronous Playback?
Isochronous playback is defined as a playback at a constant rate. Audio and Video
systems require isochronous playback.
6. Explain about Full motion and Live video?
Full-motion video refers to prestored video clips, i.e., video stored on a CD.
Eg: games, courseware, training manuals, multimedia online manuals, etc.
Live video refers to a live telecast. It must be processed while the camera is capturing it, i.e., the event is transferred as it occurs.
Eg: a live cricket show on television.
7. Write a brief note on Fractals.
Fractals are regular objects with a high degree of irregular shape. Fractal compression is a lossy technique, but it does not change the shape of the image; fractals are the decompressed images that result from the compression format.

8. Explain Fractal Compression?
Fractal Compression is based on image content i.e., it is based on similarity of patterns
within an image. The steps in Fractal compression are
• a digitized image is broken into segments
• the individual segments are checked against a library of fractals
• the library contains a compact set of numbers called iterated function system codes.
• these system codes will reproduce the corresponding fractal
9. Define Compression Efficiency?
Compression Efficiency is defined as the ratio in bytes of an uncompressed image to the
same image after compression.
10. What is Image Processing?
Image Processing refers to processing a digital image using a digital computer.
An image processing system will alter the contents of the image.
It involves Image Recognition, Image Enhancement, Image Synthesis and Image
Reconstruction.
11. Explain Image Calibration?
The overall image density is calibrated. In Image calibration the image pixels are adjusted to
a predefined level.
12. What is Grayscale Normalization?
The overall grayscale of an image or picture is evaluated to determine if it is skewed in one
direction and if it needs correction.
13. What is the need for Compression?
Compression is needed to manage large multimedia data objects efficiently and to reduce the file size for storage of objects. Compression eliminates redundancies in the pattern of data.
14. State the two types of Compression?
 Lossy Compression
 Lossless Compression


15. What is Lossy Compression?
Lossy compression causes some information to be lost, but even though some data is lost it does not noticeably affect the quality of the image. It is used for compressing audio, grayscale or color images and video objects in which absolute data accuracy is not essential. It is used in medical screening systems, video teleconferencing and multimedia electronic messaging systems.
16. What is Lossless Compression?
Lossless compression preserves the exact image throughout the compression and decompression process. Lossless compression techniques are good for text data and for repetitive data in images such as binary and grayscale images.
17. What are the advantages of Compression?
A compressed data object requires less disk space for storage and takes less time for transmission over a network.
18. State the types of Lossy Compression?
• JPEG (Joint Photographic Experts Group)
• MPEG (Moving Picture Experts Group)
• Intel DVI (Digital Video Interactive)
• CCITT H.261 (P*64)
• Fractal

19. State the types of Lossless Compression?
 Packbits Encoding
 CCITT Group3 1D
 CCITT Group3 2D
 CCITT Group 4
 Lempel-Ziv and Welch Algorithm (LZW)
CCITT – International Consultative Committee for Telephone and Telegraph
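As an illustration of the idea behind run-length schemes such as Packbits encoding, here is a minimal Python sketch of run-length encoding (it shows only the general principle, not the exact Packbits byte layout):

    # Minimal run-length encoding/decoding: the principle behind
    # Packbits-style lossless compression (not the real Packbits format).
    def rle_encode(data):
        """Turn a byte string into a list of (count, value) runs."""
        runs = []
        for b in data:
            if runs and runs[-1][1] == b:
                runs[-1] = (runs[-1][0] + 1, b)
            else:
                runs.append((1, b))
        return runs

    def rle_decode(runs):
        """Rebuild the original byte string; decoding is exact (lossless)."""
        return b"".join(bytes([value]) * count for count, value in runs)

    sample = b"AAAAABBBCCCCCCCCD"
    encoded = rle_encode(sample)
    assert rle_decode(encoded) == sample
    print(encoded)   # [(5, 65), (3, 66), (8, 67), (1, 68)]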
20. What is Codec?
Compression and decompression software or programs are called codec.


21. State the Applications of WORM?
 Legal and Stock Trading Management
 Medical Applications
 Online Catalogs
 Large Volume Distribution
 Transaction logging
 Multimedia Archival
22. What is a Juke Box?
A jukebox is an optical disk device that can automatically load and unload optical disks. It can hold from ten to several thousand CDs and DVDs, and it allows storage of and fast access to large amounts of data.

23. State the layers of WORM drive?
• Polycarbonate substrate
• Recording layer, made up of a Bismuth-Tellurium layer between two Antimony-Selenide layers
• Reflective layer
• Protective layer
24. How data is recorded in WORM drive?
• The input signal is fed to a laser diode
• The laser beam strikes the three recording layers
• The laser beam is absorbed by the Bismuth-Tellurium layer and generates heat
• The heat diffuses the atoms in the recording layers and forms a four-element alloy (Sb, Se, Bi, and Te), which is the recorded area

25. How data is read from WORM drive?
• A weak laser beam is focused on the disk
• It is not absorbed, due to the reduced power level, and is reflected back
• The beam-splitter mirror and lens arrangement sends the reflected beam to a photodetector
• The photosensor detects the beam and converts it into an electrical signal
26. What is RLL?
RLL - Run-Length Limited. RLL is an encoding scheme. The benefit of RLL is that it packs
50% more bits than the MFM scheme, resulting in 26 sectors per track with a 6.4 Mbits/sec
or 798 Kbytes/sec transfer rate.


27. Explain briefly about Rewriteable Optical Disk technology?
Rewriteable optical media technology allows erasing old data and rewriting new data over
old data. It behaves like a magnetic hard disk where data can be written and erased
repeatedly. Two types of Rewriteable technology are
Magneto-optical technology Phase change rewriteable optical disk.

28. Write short note on Magneto-optical technology?
Magneto-optical technology uses a combination of magnetic and laser technology to
achieve read/write. The disk recorded layer is magnetically recordable. It uses a weak
magnetic field to record data under high temperatures. It requires two passes to write data.
In the first pass, the magneto optical head goes through an erase cycle. In the second pass
it writes the data.
29. Explain the Phase Change technology?
In phase change technology the recording layer changes the physical characteristics from
crystalline to amorphous under the influence of heat from a laser beam. The benefits of
phase change technology are that it requires only one pass to write the data and that no magnetic technology is needed.
30. What is a multifunction drive?
A multifunction drive is a single drive unit. It is capable of reading and writing a variety of
disk media i.e., CD-ROMS, WORM drives and rewriteable disks. It provides permanence of
read-only device and flexibility of a rewriteable device. It is used in product documentations.
31. Explain Hierarchical storage Management?
The primary goal of hierarchical storage is to route data to the lowest-cost device that will support the required performance for that object.
32. Define Cadence.
Cadence is a term used to define the regular rise and fall in the intensity of sound.


PART –B

1. Why is compression/decompression an essential module in multimedia file handling? Explain the various techniques. (16) (NOV/DEC 2012) (MAY/JUN 2007)
Compression and Decompression


Need for compression and decompression
Compression is a digital process that allows data to be stored or transmitted using fewer than the normal number of bits. Compression can be lossless, lossy or visually lossless. There are several types, of which the JPEG standard of lossy compression is widely used for still imaging. In general, compression should not be used where images may be required for scientific analysis: although the data loss may not be visually apparent, it may be significant from a scientific or legal point of view.
Compression in multimedia systems is subject to certain constraints, which are listed below:
• The quality of the reproduced data should be adequate for the application.
• The complexity of the technique used should be minimal, to make the compression technique cost-effective.
• The processing of the algorithm should not take too long.
• Various audio rates should be supported, so that data rates can be adjusted to specific system conditions.
• It should be possible to generate data on one multimedia system and reproduce it on another; the compression technique should be compatible with the various reproduction systems.
Need for Compression:
Consider a digital video sequence at a standard-definition TV picture resolution of 720×480 and a frame rate of 30 frames per second (fps). If a picture is represented using the RGB color space with 8 bits per component, or 3 bytes per pixel, the size of each frame is 720×480×3 bytes. The disk space required to store one second of video is 720×480×3×30 = 31.1 MB, so a one-hour video would require about 112 GB. To deliver the video over wired and/or wireless networks, the bandwidth required is 31.1×8 = 249 Mbps.
In addition to these extremely high storage and bandwidth requirements, using uncompressed video adds significant cost to the hardware and systems that process digital video. Digital video compression is thus necessary even with exponentially increasing bandwidth and storage capacities.
Fortunately, digital video has significant redundancies, and eliminating or reducing those redundancies results in compression. Video compression can be lossy or lossless. Lossless video compression reproduces identical video after decompression. We primarily consider lossy compression, which yields perceptually equivalent, but not identical, video compared to the uncompressed source.
Video compression is typically achieved by exploiting four types of redundancy:
1) perceptual, 2) temporal, 3) spatial, and 4) statistical redundancies.
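A small Python sketch of the same arithmetic, generalised to any resolution, frame rate and compression ratio (decimal units assumed, 1 MB = 10^6 bytes):

    # Uncompressed (or compressed) storage and bandwidth for raw RGB video.
    def video_requirements(width, height, fps, bytes_per_pixel=3, compression=1):
        frame_bytes = width * height * bytes_per_pixel
        bytes_per_second = frame_bytes * fps / compression
        return {
            "MB_per_second": bytes_per_second / 1e6,
            "GB_per_hour": bytes_per_second * 3600 / 1e9,
            "Mbps": bytes_per_second * 8 / 1e6,
        }

    # Standard-definition example from the text, uncompressed:
    print(video_requirements(720, 480, 30))
    # -> roughly 31.1 MB/s, 112 GB/hour, 249 Mbps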

Decompression:
Decompression is the inverse of compression. Decompression techniques can differ from the compression techniques in various ways.
For example, if the application is symmetric, e.g., a dialog application, then coding and decoding should incur more or less the same cost, as the important factor here is speed rather than quality. However, if data will be encoded once but decoded many times, as is the case with an image or video retrieval system, then the decoding speed must approximate real time, while the encoding time may be asymmetric to the decoding time. Usually, better quality/compression ratios are obtained if encoding time is not a factor.
Compression standards
Data compression:
As with any communication, compressed data communication only works when both the sender and receiver of the information understand the encoding scheme. For example, this text makes sense only if the receiver understands that it is intended to be interpreted as characters representing the English language. Similarly, compressed data can only be understood if the decoding method is known by the receiver.
Compression is useful because it helps reduce the consumption of expensive
resources, such as hard disk space or transmission bandwidth. On the downside,
compressed data must be decompressed to be used, and this extra processing may be
detrimental to some applications.
Non-lossy compression for photographs and videos
When a compression algorithm is used in an applied environment, the choice of that algorithm is often governed by the situation in which the compression will take place. Good algorithms for text compression may not be useful for compressing a motion picture file, whereas an algorithm suitable for compressing an audio file may not be a good choice for a speech file. Obviously, the speed of encoding and decoding is important, but another factor, not yet mentioned in detail, is whether, when the compressed material is uncompressed, one gets back the original. If information is lost during the compression process, the algorithm is lossy. If the compression algorithm guarantees that the uncompressed material is what was started with, this is called non-lossy compression or lossless compression. A lossless compression technique must be used when compressing data and programs. For greater compression of video and images, a lossy compression technique must be used.


Lossy compression for photographs and videos
A lossy compression method is one where compressing data and then decompressing it retrieves data that is different from the original, but close enough to be useful in some way. Lossy compression is most commonly used to compress multimedia data (audio, video, still images), especially in applications such as streaming media and internet telephony. By contrast, lossless compression is required for text and data files, such as bank records, text articles, etc. In many cases it is advantageous to make a master lossless file which can then be used to produce compressed files for different purposes; for example, a multi-megabyte file can be used at full size to produce a full-page advertisement in a glossy magazine, and a 10-kilobyte lossy copy made for a small image on a web page.
The original contains a certain amount of information; there is a lower limit to the size of file that can carry all the information. As an intuitive example, most people know that a compressed ZIP file is smaller than the original file, but repeatedly compressing the file will not reduce its size to nothing.
Hardware versus Software Compression
This is performed by the drive at the drive level, hence the name 'hardware compression'. It has the advantage that, since the compression is done by the drive, there is no system overhead. Hardware compression is only configurable insofar as it is either on or off; there is no way to tune or configure the rate of data compression either on the hardware or using ARCserve.
Compression on most modern drives is soft set, meaning that hardware compression can be enabled or disabled by sending a SCSI command to the drive. If the drive supports hardware compression settable in this way, an option for compression will be shown in the ARCserve device manager, allowing you to enable or disable hardware compression. We will look at this in more detail later on in this document. Older drive technologies may set compression using jumper blocks or DIP switches on the drive unit.

2. Explain in detail about the Overview of data and file format standards.
Multimedia data and information must be stored in a disk file using formats similar to image file formats. The variety of data stored includes text, image data, audio and video data, computer animations, and other forms of binary data, such as Musical Instrument Digital Interface (MIDI), control information, and graphical fonts.
The personal computer industry and, more specifically, Microsoft Windows-based systems form the largest base for multimedia systems.
The multimedia file formats are
Rich-text format (RTF)
Tagged image file format (TIFF)
Resource Interchange File Format (RIFF)
Musical instrument digital interface (MIDI)
Joint Photographic Experts Group (JPEG)
Audio Video Interleaved (AVI) Indeo file format
TWAIN
Rich text format
RTF is a document language used for exchanging text between different word processors and text-processing applications. RTF is much easier to generate than PDF or PostScript, and is more word-processor friendly than HTML. RTF has been around for over a decade, while hundreds of other binary formats have come and gone.
Tiff File Format
Big TIFF Proposal:
The Big TIFF file format is an ongoing attempt to design a next version of TIFF,
specifically targeted at breaking the 4 gigabyte boundary. This page presents an overview
of the current proposal.
TIFF Tag Reference:
A reference on all known baseline, extended, and private TIFF tags. Includes
basic properties of each tag (such as code, name, LibTiff name and data type), as well as a
short description. Also includes a useful search function, and a means for owners of
private tags to submit data about the tags.
TIFF Tag Viewer:
AsTiffTagViewer is a free TIFF tag viewer application for Windows. It lets you view each TIFF page's tags/fields (code, data type, count and value). AsTiffTagViewer is an indispensable tool for any professional to diagnose any TIFF file. Whenever a customer reports your software doesn't handle this or that particular TIFF, use AsTiffTagViewer and discover.
TIFF Tag Reference:
This is a reference on all known baseline, extended, and private TIFF tags. Every
tag page offers a list of basic properties (such as code, name, Lib Tiff name and data
type), as well as a short description.

Search:
Search for any specific tag or group of tags here.
Baseline Tags:
Baseline tags are those that are listed as part of the core of TIFF, the essentials that all
mainstream TIFF developers should support in their products.
Extension Tags:
Extension tags are those listed as part of TIFF features that may not be supported by all TIFF readers.
Private Tags:
Private tags are, at least originally, allocated by Adobe for organizations that wish
to store information meaningful only to that organization in a TIFF file. The private tags
listed here are the ones that found their way into the public domain and more general
applications, and the ones that the owning organizations documented for the benefit
of the TIFF community.
Private IFD Tags:
If one needs more than 10 private tags or so, the TIFF specification suggests that, rather than using a large number of private tags, one should instead allocate a
single private tag, define it as data type IFD, and use it to point to a so-called 'private
IFD'. In that private IFD, one can next use whatever tags one wants. These private IFD tags
do not need to be properly registered with Adobe; they live in a namespace of their
own, private to the particular type of IFD.
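As a small practical illustration of TIFF tags (my own sketch, assuming the Pillow imaging library is installed and that example.tif is a hypothetical local TIFF file), the tags of the first page can be listed like this:

    # List the code, name and value of every tag in the first page of a TIFF file.
    from PIL import Image
    from PIL.TiffTags import TAGS_V2          # mapping of tag codes to tag metadata

    with Image.open("example.tif") as img:        # hypothetical local file
        for code, value in img.tag_v2.items():    # tag_v2 maps code -> decoded value
            info = TAGS_V2.get(code)
            name = info.name if info else "unknown/private"
            print(f"{code:5d}  {name:25s}  {value!r}")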

3. Elaborate on various multimedia I/O technologies. Give examples.(16)(MAY/JUN 2013)
Multimedia I/O Technologies
Pen Input


The digital pen is an input device that allows the user to write, draw, point and gesture (perform an action such as a stroke or a loop). Pens have been used in CAD/CAM systems to point and select.
A pen is used to point, pick, drag and click on an object. A gesture is another type of user interface for entering characters and for writing notes on the screen of a pen-based system.
The Pen Extensions include a set of dynamic link libraries (DLLs) and drivers that make applications pen-enabled. The DLLs allow pen-based input and handwriting recognition.
The Pen Computing system consists of the following components:
Electronic pen:
When an electronic pen is used to write or draw, the digitizer encodes the x and y
coordinates of the pen, and the pen status. The pen status includes whether the pen is
touching the digitizer surface (usually the screen) or not, pen pressure, pen angle, pen
rotation, and so on.
The minimum resolution ensures that sufficient x-y location data is generated to maintain the accuracy of the pen path over the digitizing surface.
Digitizer:
The most commonly used digitizers include the following two types:
• A transparent digitizer bonded to the thin flat LCD screen of a notebook or palmtop computer (PDA)
• A separate tablet containing electronic digitizing circuitry, used with a pen or mouse
The digitizer generates the pen position (x and y coordinates) and the pen status (distance from the screen surface and pen contact with the screen).
Pen Driver:
A pen driver is a pen device driver that interacts with the digitizer to receive all the digitized information about the pen location. The pen driver in the Windows for Pen system consists of two drivers:
An installable Windows pen driver and a virtual driver
Video and Image Display System
Video, like sound, is usually recorded and played as an analog signal. To incorporate video into a multimedia application it has to be digitized. Video in a multimedia application brings a sense of realism, engages the user and evokes emotions. The visual technologies employed to display video make the display system an important architectural component in the design of any multimedia system.
Display System Technologies

While displaying images and full-motion video in windows it is often necessary to resize the windows dynamically to suit user preferences. In resizing a window, the number of pixels being displayed changes; that is, when the window becomes larger, the pixels per inch of the original rendering change. Scaling down to a smaller window is achieved by dropping pixels, but scaling up to a larger window requires adding pixels that do not exist in the originally captured image. Dynamic scaling becomes much more complex if the display window contains full-motion video.
The display system generates the visual output of text, graphics and video that the end user sees. A range of video display standards with possible parameters is presented in the table.
Resolution and Dot pitch
To display an image on a monitor, a scanning process takes place. The monitor screen consists of a number of scan lines, and a scan line is made of a pixel array. The screen resolution of the monitor is defined by the product of the pixels per scan line and the number of scan lines. The higher the resolution, the sharper the image.
Each pixel on the screen is made of red, green and blue (RGB) phosphors arranged in a triad. The distance from one RGB phosphor triad to the next is called the dot pitch; the smaller the dot pitch, the finer the size of the pixel and the lower the chance of pixel overlap at higher resolution.
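As a small illustrative calculation (my own example, not from the notes), the resolution and the colour depth together determine how much video memory one frame needs:

    # Frame-buffer memory needed for a given display mode.
    def frame_buffer_bytes(pixels_per_line, scan_lines, bits_per_pixel):
        resolution = pixels_per_line * scan_lines     # total pixels on screen
        return resolution * bits_per_pixel // 8       # bytes per frame

    # 1024 x 768 at 24-bit true colour:
    print(frame_buffer_bytes(1024, 768, 24) / 1e6, "MB")   # about 2.36 MB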
Horizontal and Vertical Refresh Rate, and Flicker
The rate at which the horizontal scan lines are painted is called the horizontal scan frequency and is measured in kilohertz. The vertical scan frequency depends on the horizontal scan frequency. The sweep starts from left to right and progresses gradually to the end of the last scan line, then returns to the top of the screen. Low vertical refresh rates can cause the human eye not to retain a continuous perception of successive images; this is referred to as flicker.
Interlaced scan and Non-interlaced Scan Mode
In interlaced scan mode the odd lines are scanned first and then the even lines are scanned. Hence, it takes two passes to paint or refresh one frame on the screen. In non-interlaced scan mode all lines are scanned sequentially in one pass. Hence, twice the number of lines are painted on the screen in every pass, thus doubling the frame repaint rate. Non-interlaced mode is the recommended one to reduce flicker for multimedia applications using high resolution.
CRT Display System
The Cathode Ray Tube (CRT) is a vacuum glass tube with the display screen at one end and connectors to the control circuits at the other end. CRTs function using electromagnetic deflection coils mounted on the outside of the glass envelope to bend the electron beam with magnetic fields.
Phosphor Types
Phosphor is a chemical compound coated on the inside surface of the picture tube. A triad consists of a set of red, green and blue phosphors arranged in a triangle. When an electron beam strikes a phosphor, it glows and generates visible color. The color of the light and the time period vary from one type of phosphor to another. The light given off by the phosphor during exposure to the electron beam is known as fluorescence, the continuing glow given off after the beam is removed is known as phosphorescence, and the duration of phosphorescence is known as the phosphor's persistence.
Flat Panel Display system
Color display systems such as color monitors are used to display the millions of colors associated with multimedia applications.
The way the colors are exhibited in a color display system mainly depends on the two parameters already discussed, namely:
1. Number of colors and
2. Resolution of the display
Further, it depends on the memory capacity of the video controller card, called VRAM. A particular combination of colors and resolution is called the video mode of the display system. Though all presentation display systems basically use the same technology, called Liquid Crystal Display (LCD), they are offered in various forms.
LCD Projection Panels
In this, the output from the computer is directed to a small and compact matrix of LCDs arranged in a specific pattern. This arrangement is kept over a projection system. When the projection system is illuminated, the graphics in the pattern corresponding to the output from the computer are displayed. Four basic technologies are used for flat panel displays:
1. Passive-matrix monochrome
2. Active-matrix monochrome
3. Passive-matrix color
4. Active-matrix color
The first two technologies are becoming obsolete and are used in very limited applications. Passive-matrix color LCD panels suffer from lower color saturation, lower contrast and slower response times. They are less costly and are suitable for slide show presentations, and are predominantly used in notebooks and PDAs.

Active-matrix color LCD panels have better displays and longer persistence, and are costlier. To improve the efficiency of LCD displays, backlit or sidelit technologies using Light Emitting Diode (LED) technology are used. LEDs are electronic devices capable of displaying any particular combination of RGB color and are also transparent to light, so when a series of these LEDs is arranged to form a row-column matrix, they can be used in place of pixels in monitors and can display the output from the computer.
This is the basic principle behind all LED projection systems, and LCDs arranged in the form of a row-column matrix are known as LCD panels.
Print Output Technologies
Laser print technology has continued to evolve, and print quality at 600 dpi started to make the technology useful for high-speed presses. Typical textbooks printed by offset presses range from 1200 dpi to 1800 dpi, though 600 dpi is sufficient for most industrial manuals and maintenance books. 1200 dpi printers were commonly available during 2008, and 2400 dpi electrophotographic printing plate makers, essentially laser printers that print on plastic sheets, are also available.
An important advantage of this approach is that the manuals can be upgraded very
easily, and printing of new manuals can be achieved almost instantaneously by changing the
files being printed. As compared to high-quality offset printing, where setting up a print job
can take a long time, there is literally no setup time for laser print output. Laser printing
technology is the most common technology for multimedia systems.
Laser Printing Technology
Print technologies have enhancements in the form of speed, resolution, dot size (for greater clarity at higher resolutions), gray-scale printing, and color printing. These ongoing enhancements are aimed at achieving the same level of quality and control obtained from an offset press.
The basic components of the laser printer are:
Paper feed mechanism
Paper guide
Laser assembly
Corona assembly
Fuser
Toner cartridge
The paper feed mechanism moves the paper from a paper tray through the paper path in the printer. The paper passes over a set of corona wires that induce a charge in the paper.
The charged paper passes over a drum coated with fine-grain carbon (toner), and the toner attaches itself to the paper as a thin film of carbon. The paper is then struck by a scanning laser beam that follows the pattern of the text or graphics to be printed.
Dye Sublimation Printer

A dye sublimation printer has a thermal printing head with thousands of very tiny heating elements, a plastic film transfer roll mounted on two rollers, and a drum. The transfer roll film contains panels of cyan, magenta, yellow and black dyes.
During printing, individual heating elements can be heated to one of 256 different
temperature levels. The coils are heated to their individual temperature levels under program
control.
The cyan panel is rolled under the thermal printing head first. Tiny spots of cyan dye from the panel are transferred to the printer paper, which clings to the drum. The printer paper is coated with polyester to help absorb the dye quickly. The hotter the temperature of the heating element, the more dye is vaporized, resulting in a denser and larger dot.
The dye sublimation printer is very applicable to multimedia applications because it
prints in color, and the print quality is very high.
Graphic artists, advertising agencies, the film industry, whoever requires photographic
quality prints with continuous tone can use the printer.
Image Scanners
Scanners are quite popular for capturing images from hardcopy sources such as
books, magazines, letters, photographs and even camera slides. Scanners operate in similar
fashion to photocopier in that images are electronically captured with an array of light

19

sensors. The sensors convert light into color and light intensity one line at a time until the
complete document is viewed.
Types of Scanners
Flat-Bed Scanners
Rotary Drum Scanners
Handheld Scanners

Flat-Bed Scanners:
Flat-bed scanners are best suited for larger documents as well as books and other odd-shaped source images. The scanning bed in the scanner, a glass plate, is used to place a document for scanning.
A light source, a fluorescent lamp, is mounted on a traction mechanism which moves
from one end of the document to the other end during a scan session. Flatbed scanners are
the workhorses of document image scanning. For heavy-duty scanning operations, flatbed
scanners are fitted with sheet-feeding mechanisms that allow as many as 200 sheets to be
stacked. Typical scanning speeds range from 8 pages / minute to 30 pages / minute.
Rotary Drum Scanners:
A rotary drum scanner contains a drum in the paper transport system. It contains feeder and stacking trays, and electronic interfaces. In addition to the feeder and stacking trays, the scanner contains two sets of belts and three sets of roller guides to guide the paper. The paper is fed from the feed tray and is clinched by the transport mechanism and wrapped around the drum. The front side of the page is scanned in position 1 as it rolls around the drum, and the back side of the page is scanned in position 2 as the transport mechanism pulls the paper from the drum and ejects it out to the stacking tray.
Handheld Scanners:
Handheld scanners are used casually to capture a part of a page from a book, a manual, or a newspaper. They can scan an area as large as 8.5 inch x 50 inch with a sensitive color image sensor that scans at a high 600 dpi or a standard 300 dpi. Handheld scanners are useful where the document cannot be placed on a flatbed scanner or for the convenience of carrying a small, light scanner.
Handheld scanners offer portability and the convenience of a simple device at a cost
lower than that of a flatbed scanner. There are some problems associated with handheld
scanners. A very steady hand is necessary to guide the scanning motion to avoid skewing,
poor registration, and improper alignment. Handheld scanners use a light bulb as a light
source to illuminate the scan line being scanned. The light is reflected from the document as
the user moves the scanner across the document.
A fixed CCD array absorbs the reflected light and generates analog voltage, which in turn
gets converted to a digital value.
Digital Voice and Audio
Audio is any sound output from a source. Digital audio is the audio signal converted into 0s and 1s. Digital audio systems are designed to make use of the range of human hearing frequencies, 20 Hz to 20 kHz.
The quality of digital audio is characterized by the sampling rate, the sampling
resolution and the number of channels. The sampling rate determines the frequency
response of a digital audio system.
Frequency response refers to the range of frequencies that a medium can reproduce
accurately. The sampling resolution is the number of bits per sample. This determines the
dynamic range which describes the spectrum of the softest to the loudest sound amplitude
levels that a medium can reproduce.
One bit yields 6 dB of dynamic range. For example, 16-bit audio contributes the 96 dB of dynamic range found in CD-quality audio, which is nearly the dynamic range of human hearing. Different systems have different numbers of channels. For example, mono systems have a single channel, stereo systems have two channels (left and right) and Dolby surround systems have 5.1 channels.
Here the five full-range channels of audio comprise three front channels (left, centre, right) and two surround channels, plus a low-frequency bass effect channel called the sub-woofer, which has only a limited frequency response of about 100 Hz. Hence, this sub-woofer channel is sometimes referred to as the ".1" channel. The higher the sampling rate, the more bits per sample and the more channels, the higher the quality of the digital audio, and therefore the more storage and bandwidth required.

Digital audio and video typically require specialized hardware and software for capture and playback. There are three aspects of digital audio and video operation:
1. Capture - raw analog sources are digitized and compressed via hardware (audio and
video) or software (video).
2. Storage - files are stored and/or transmitted.
3. Playback - playback based on hardware or software digital to analog conversion.
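A small Python sketch (my own illustration) of how the sampling rate, sample resolution and channel count determine the dynamic range and the storage needed for uncompressed digital audio:

    # Dynamic range and storage requirements for uncompressed PCM audio.
    def dynamic_range_db(bits_per_sample):
        return 6.02 * bits_per_sample            # roughly 6 dB per bit

    def audio_bytes(sample_rate, bits_per_sample, channels, seconds):
        return sample_rate * (bits_per_sample // 8) * channels * seconds

    # CD-quality audio: 44,100 samples/s, 16 bits per sample, stereo.
    print(f"Dynamic range: {dynamic_range_db(16):.0f} dB")           # ~96 dB
    one_minute = audio_bytes(44_100, 16, 2, 60)
    print(f"One minute of CD audio: {one_minute / 1e6:.1f} MB")      # ~10.6 MB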

4. In detail, explain the various technologies involved in the storage and retrieval of multimedia files comprising motion videos. Comment on the challenges in it. (16) (NOV/DEC 2012) Write a brief note on optical media and disk technologies. (08) (MAY/JUN 2012)

Optical storage demerits and latency
Optical data storage, which once appeared to be a failing technology in the marketplace, is quickly finding its way into homes and offices with the multimedia revolution. It has become one of the important enabling technologies fusing together the entertainment and computing industries. Although the basic concepts for optical storage were first proposed in the United States in the 1960s, over the last decade Japan has clearly pulled ahead in terms of development and production.
In view of the important role optical storage is likely to play in the future, it may be
critical for the data storage and optoelectronics industries to design a catch-up strategy.
CD-ROM
A CDROM (compact disk read-only memory), also written as CD-ROM, is a type of optical storage medium that allows data to be written to it only once. Storage refers to devices
or media that can retain data for relatively long periods of time (e.g., years or even decades).
This contrasts with memory, whose contents can be accessed (i.e., read and written to) at
extremely high speeds but which are retained only temporarily (i.e., while in use or only as
long as the power supply remains on).
Most storage devices and media are rewritable, including hard disk drives (HDDs),
floppy disks, USB (universal serial bus) key drives, magnetic tape and some types of optical
disks.


CD-ROM Standards:
The most popular way to organize data in CD-ROMs is according to the ISO 9660 standard.
ISO 9660 specifies a very minimal file system, which is even simpler than the one used by
MS-DOS.
This has the advantage of making it compatible with almost every operating system.
Additional benefits, including longer filenames and symbolic links, are provided by the Rock
Ridge extension.
The standard CDROM holds 650 or 700 megabytes (MB) of data, which, when
compressed, is comparable to the data that can be accommodated in printed books
occupying several hundred feet of shelf space. This, together with their low cost of
production, light weight (less than 30 grams) and durability, makes CD-ROMs a popular
means of distribution of software and storage of data.
DVDs (digital video disks or digital versatile disks) typically have a capacity of at least 4.4 GB of data, roughly seven times that of CD-ROMs.
DVD technology is similar to CD technology except that a higher precision laser is used,
which makes possible a higher recording density. As is the case with CDs, there are
rewritable DVDs and DVDs that can be written to only once (i.e., DVD-ROMs).
Mini Disk (MD)
Identification:
A mini disc itself is 2.5 inches in diameter and is encased in a hard plastic shell,
giving it a similar appearance to a computer floppy disc.
Function:
Once inserted into an MD player, a disc can record off of a microphone or other audio
player through a stereo coaxial connection. The recorded tracks can be labeled, moved to different positions on the disc, or erased.
Time Frame:
A mini disc contains between 70 and 80 minutes worth of audio disc space, which can
be separated into as many tracks as desired.
History:
The mini disc was first released by Sony Electronics in 1992. Other manufacturers like Maxell and TDK have since produced their own discs.

Effects:
Mini discs can often be connected to computers through a player's USB connection and can then share MP3 files. The computer will need driver software from the player's manufacturer.
Potential:
Mini discs have not had mainstream popularity in Europe or America. However, they
have found a niche in portable live field recordings.
WORM Optical Drives
A storage medium from which data is read and to which it is written by lasers. Optical
disks can store much more data -- up to 6 gigabytes (6 billion bytes) -- than most portable
magnetic media, such as floppies.
There are three basic types of optical disks:
CD-ROM: Like audio CDs, CD-ROMs come with data already encoded onto them.
The data is permanent and can be read any number of times, but CD-ROMs cannot be
modified.
WORM: Stands for write-once, read-many. With a WORM disk drive, you can write data onto a WORM disk, but only once. After that, the WORM disk behaves just like a CD-ROM.
Erasable: Optical disks that can be erased and loaded with new data, just like magnetic
disks. These are often referred to as EO (erasable optical) disks. These three technologies
are not compatible with one another; each requires a different type of disk drive and
disk. Even within one category, there are many competing formats, although CD-ROMs are
relatively standardized.
Rewritable Optical Disk Technologies
The two mainstream technologies for rewritable optical data storage are based on
magneto-optical (MO) and phase-change (PC) media. In both cases a focused laser beam is
used to raise the temperature of the medium beyond a certain critical temperature (i.e.,
melting and crystallization temperatures in the case of PC, and the Curie temperature in the
case of MO) for writing, erasure, and overwriting of data.
The readout of information from these media relies on the change of reflectivity of the
medium (PC), or the effect of the medium on the state of polarization of the laser
beam (MO).
The performance of these data storage systems is characterized by the storage
density of the media, achievable data rates during recording and readout, longevity,
reliability, and cost of the finished products.
Magneto-optical Technology:
As implied by the name, these drives use a hybrid of magnetic and optical technologies, employing a laser to read data on the disk while additionally needing a magnetic field to write data. An MO disk drive is designed so that an inserted disk will be exposed to a magnet on the label side and to the light (laser beam) on the opposite side. The disks, which come in 3.5in and 5.25in formats, have a special alloy layer that has the property of
reflecting laser light at slightly different angles depending on which way it's magnetized, and
data can be stored on it as north and south magnetic spots, just like on a hard disk. While a
hard disk can be magnetized at any temperature, the magnetic coating used on MO media is
designed to be extremely stable at room temperature, making the data unchangeable unless
the disc is heated to above a temperature level called the Curie point, usually around 200
degrees centigrade. Instead of heating the whole disc, MO drives use a laser to target and
heat specific regions of magnetic particles. This accurate technique enables MO media to
pack in a lot more information than other magnetic devices. Once heated the magnetic
particles can easily have their direction changed by a magnetic field generated by the read/write head.
Phase Change Rewriteable Optical Disk:
A type of rewritable optical disk that employs the phase change recording method.
Using this technique, the disk drive writes data with a laser that changes spots on the disk between amorphous and crystalline states.
An optical head reads data by detecting the difference in reflected light from
amorphous and crystalline spots.
A medium-intensity pulse can then restore the original crystalline structure. Magneto-optical and dye-polymer technologies offer similar capabilities for developing rewritable optical disks.
Multifunction Drives:
The emergence of optical disk storage technology with high density and long lifetime
is a great event in information science and technology. Nowadays the high storage density
optical disks are the key devices for data storage at the dawn of a full-scale multimedia age.
The new storage media are still the bottleneck in high density optical storage. It is an
important task to search for new optical recording media with high performance and to
explore new preparation approaches to obtain high quality recording films. A series of
organic materials for optical storage have been studied for a long time, but only a few of
them have obtained practical application. For example, some cyanine and phthalocyanine
dyes have been used in recordable compact disks (CD-R). For erasable (or rewritable)
disks, the most applicable storage media are inorganic materials, such as magneto-optical
and phase change materials.

Hierarchical Storage Management
Hierarchical Storage Management (HSM) is a data storage technique which
automatically moves data between high-cost and low-cost storage media. HSM systems
exist because high-speed storage devices, such as hard disk drive arrays, are more
expensive (per byte stored) than slower devices, such as optical discs and magnetic tape
drives. While it would be ideal to have all data available on high-speed devices all the time,
this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk
of the enterprise's data on slower devices, and then copy data to faster disk drives when
needed. In effect, HSM turns the fast disk drives into caches for the slower mass storage
devices. The HSM system monitors the way data is used and makes best guesses as to
which data can safely be moved to slower devices and which data should stay on the fast
devices.
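A toy Python sketch of this idea (the tier paths, the 30-day threshold and the policy itself are my own illustrative assumptions, not part of any real HSM product):

    # Toy hierarchical-storage policy: demote files not accessed recently
    # from a fast, expensive tier to a slow, cheap tier; recall them on demand.
    import os, shutil, time

    FAST_TIER = "/fast"                 # e.g. a disk array (assumed path)
    SLOW_TIER = "/slow"                 # e.g. optical/tape-backed storage (assumed path)
    AGE_LIMIT = 30 * 24 * 3600          # demote after 30 days without access

    def demote_cold_files():
        for name in os.listdir(FAST_TIER):
            path = os.path.join(FAST_TIER, name)
            if os.path.isfile(path) and time.time() - os.path.getatime(path) > AGE_LIMIT:
                shutil.move(path, os.path.join(SLOW_TIER, name))   # move data to the cheap tier

    def recall(name):
        """Copy a demoted file back to the fast tier when it is needed again."""
        src, dst = os.path.join(SLOW_TIER, name), os.path.join(FAST_TIER, name)
        if not os.path.exists(dst):
            shutil.copy2(src, dst)
        return dst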
Permanent vs. Transient Storage Issues:
There are endless threads on this (which admittedly are hard to find by searching) but
the general consensus AFAIK is that HDDs (with redundancy) are the most convenient and
reliable solution, followed by DVD-Rs (CD-R capacity is too low to be of much use these
days of RAW and high-MP images). As long as you spin a hard drive up every month or so
(which is probably not a problem if it's a backup device) and have a secondary backup
(because HDDs do go bad every now and then) you should be fine. I'd also avoid portable
HDDs and get a standard 3.5" drive in an external enclosure -- much more reliable and if the
power supply or plugs go bad you simply swap the drive into a new enclosure (for $20). The
problem with DVD-Rs is that they need good storage conditions and even then they can go
bad with no warning. Recovering data from a rotten DVD is next to impossible.
Online storage is too unreliable IMO (there have been instances of online storage companies going under), ZIP drives are/were hopelessly unreliable, and the jury is still out on flash-memory based storage -- CF and SD cards are probably too delicate, but SSDs could turn out to be a better bet for long-term storage.
Optical Disk Library (Jukebox):
Also called an "optical jukebox," it is an optical disk storage system that houses
multiple disc platters.It is similar to a music jukebox, except that instead of "playing one
tune," more than one drive can be used to read and write several discs simultaneously. Such
devices are made for rewritable optical discs, write once discs and CD-ROMs,
hold from a handful to several thousand discs or cartridges.
Hierarchical Storage Applications:
26

and can

Characteristics of Hierarchical Storage:
Also known as tiered storage, the driving rationale for using hierarchical storage is the
notion that costs can be reduced by storing all or part of a collection of files (bitstreams) on
lower performing, less expensive storage technologies while keeping a copy of some part of
the collection on a high performing, more expensive storage technology for immediate use.
There may be any number of tiers in the hierarchy but the most common
implementations consist of one or two tiers of random access disk followed by tape storage
as the lowest tier. The hierarchical storage manager implements some policy for determining
on what tier files are stored to meet system goals and moving files between tiers as dictated
by the policy. In real world implementations, many other needs may be fulfilled by
hierarchical storage such as backup, replication, high availability and disaster recovery.
However, it is cost that underlies any decision to deploy hierarchical storage. At this
time, magnetic tape provides the lowest cost in most deployment scenarios.
Cache Management for Storage Systems
A type of memory that is used in high-performance systems, inserted between the
processor and memory proper. The memory hierarchy on a system contains registers in the
processor, which are the highest-speed storage, and, at a slightly lower level of accessibility,
the contents of the main memory.
The cache is intended to reduce the discrepancy in accessibility between these two
types of unit, and functions by holding small regions that map the contents of main
memory. The formal behavior of the cache corresponds closely to that of the working set in a
paging system. Some magnetic disk controllers have a cache. The working of the cache is
not visible
to the main CPU, but again provides a mapping of the current contents of part of the disk
units in order to provide improved performance.
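As an illustration of the caching idea described above, here is a minimal Python sketch of a least-recently-used (LRU) block cache placed in front of a slower read path (the block size and the read_from_disk stand-in are my own assumptions):

    # Minimal LRU cache for fixed-size disk blocks: the cache keeps a small,
    # fast mapping of the most recently used contents of the slower device.
    from collections import OrderedDict

    BLOCK_SIZE = 4096                              # assumed block size

    def read_from_disk(block_no):                  # stand-in for a slow device read
        return bytes(BLOCK_SIZE)

    class BlockCache:
        def __init__(self, capacity=64):
            self.capacity = capacity
            self.blocks = OrderedDict()            # block number -> block data

        def read(self, block_no):
            if block_no in self.blocks:            # cache hit: promote to most recent
                self.blocks.move_to_end(block_no)
                return self.blocks[block_no]
            data = read_from_disk(block_no)        # cache miss: go to the slow device
            self.blocks[block_no] = data
            if len(self.blocks) > self.capacity:   # evict the least recently used block
                self.blocks.popitem(last=False)
            return data

    cache = BlockCache(capacity=2)
    cache.read(1); cache.read(2); cache.read(1); cache.read(3)   # block 2 gets evicted
    print(list(cache.blocks))                                    # [1, 3]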
Low-Level Disk Caching:
Disk drive capabilities and processing power are steadily increasing, and this power gives us
the possibility of using disks as data processing devices rather than merely for data
transfers. In the area of malicious code (malware) detection, anti-virus (AV) engines are slow
and have trouble correctly identifying many types of malware. Our goal is to help make
malware detection more reliable and more efficient by using the disk drive’s processor.
Using the extra processing power available on modern disk drives can provide significant
advantages in detecting malware including reducing the traditional AV engine’s workload on
the host CPU by partitioning the workload between the host AV engine and the disk drive,
improving the detection of stealth malware by providing a low-level view of the system, and
recognizing virus behavior by observing disk I/O traffic directly. Several research questions
must be addressed before these benefits can be realized: how to correctly partition work
between the AV engine and the disk drive processor, how to design interfaces between the
operating system (OS) or host AV engine and the disk drive that provide satisfactory
performance without compromising security, and how to recognize malicious behavior based
on the dynamic analysis of low-level data accesses.
Cache Organization for Hierarchical Storage Systems:
We propose and analyze a two-level cache organization that provides high memory
bandwidth. The first-level cache is accessed directly by virtual addresses. It is small, fast,
and, without the burden of address translation, can easily be optimized to match the
processor speed. The virtually-addressed cache is backed up by a large physically-addressed cache; this second-level cache provides a high hit ratio and greatly reduces
memory traffic. We show how the second-level cache can be easily extended to solve the
synonym problem resulting from the use of a virtually-addressed cache at the first level.
Moreover, the second-level cache can be used to shield the virtually-addressed first-level cache from irrelevant cache coherence interference. Finally, simulation results show that this organization has a performance advantage over a hierarchy of physically-addressed caches in a multiprocessor environment.
Cache Organization for Distributed Client-Server Systems:
With the continuing decline in the cost of computing, we have witnessed a dramatic
increase in the number of independent computer systems. These machines do not compute
in isolation, but rather are often arranged into a distributed system consisting of single-user
machines (workstations) connected by a fast local-area network (LAN).
The workstations need to share resources, often for economic reasons. In particular,
it is desirable to provide the sharing of disk files. Current network technology does not
provide sufficiently high transfer rates to allow a processor’s main memory to be shared
across the network. Management of shared resources is typically provided by a trusted
central authority. The workstations, being controlled by their users, cannot be guaranteed to
be always available or be fully trusted. The solution is to use server machines to administer
the shared resources. A file server is such a machine that makes available a large quantity
of disk storage to the client workstations. The clients have little, if any, local disk storage,
relying on the server for all long term storage. The disparity in speeds between processor
and remote disk makes an effective caching scheme desirable. However, no efficient, fully
transparent solutions exist for coherence in a distributed system.
Distributed data base systems use locking protocols to provide coherent sharing of objects
between clients on a network. These mechanisms are incorporated into the systems at a
very high level, built on a non-transparent network access mechanism, and are not
concerned with performance improvements. We prefer a solution that is integral to the
network file system, and provides the extra performance of an embedded cache. Several
distributed file systems that include some form of caching exist. The next sections present a
survey of their characteristic features.
5. Draw and explain the TWAIN architecture.(08)(MAY/JUN 2012) Or What is TWAIN?
Explain the objective and architecture of TWAIN. (16) (APR/MAY 2010)
• To avoid the custom interface problem, the TWAIN working group formed an open standard interface for input devices.
• The standard interface was designed to allow applications to interface with different types of input devices such as cameras and scanners, and its benefits are as follows:
• Application developers can code to a single TWAIN interface that allows an application to work with all TWAIN devices.
• Device manufacturers can write device drivers for their proprietary devices and allow the devices to be used by all TWAIN-compliant applications.
• TWAIN allows users to invoke "Acquire" and "Select Source" to select one of multiple devices.
TWAIN architecture
[Figure: TWAIN layered architecture - user applications (#1, #2) in the Application Layer call the Source Manager (TWAIN code) in the Protocol Layer; local and remote sources form the Acquisition Layer; local devices, and remote devices reached over the network, form the Device Layer.]

The TWAIN architecture defines a set of application programming interfaces and protocols to acquire data from input devices, as shown in the figure.
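As a purely conceptual preview of the layering described below (every class and method name here is hypothetical, invented for illustration, and is not the real TWAIN API), the flow from application to device can be sketched like this:

    # Hypothetical sketch of the TWAIN layering:
    # application -> source manager -> source -> device driver.
    class Source:
        """Acquisition layer: a virtual device wrapping one physical device driver."""
        def __init__(self, name, driver):
            self.name, self.driver = name, driver
        def acquire(self):
            self.driver.start_scan()               # device control
            return self.driver.transfer_data()     # transfer of data back to the caller

    class SourceManager:
        """Protocol layer: keeps track of sources and routes requests to them."""
        def __init__(self, sources):
            self.sources = {s.name: s for s in sources}
            self.default = sources[0].name         # maintain a default source
        def select_source(self, name=None):        # "Select Source"
            return self.sources[name or self.default]

    class Application:
        """Application layer: only ever talks to the source manager."""
        def __init__(self, manager):
            self.manager = manager
        def acquire_image(self, source_name=None): # "Acquire"
            return self.manager.select_source(source_name).acquire()

    class FakeScannerDriver:                       # stand-in for a real device driver
        def start_scan(self): pass
        def transfer_data(self): return b"...image bytes..."

    app = Application(SourceManager([Source("flatbed", FakeScannerDriver())]))
    print(app.acquire_image())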
TWAIN objectives
 Supports multiple platforms
 Supports multiple devices
 Widespread acceptance with standard interface
 Standard extendibility and backward compatibility.
Application layer
• This layer sets up a logical connection with the device and sets guidelines for the user interface to select the source from a given list of logical devices.
• It also specifies user interface guidelines to acquire data from the selected sources.
Protocol layer
• The protocol layer is responsible for communication between the application and acquisition layers.
• It specifies the services provided by sources, the establishment of a session, and the physical connection to the device.
Source Manager
The functions of the Source Manager are as follows:
• Provide a standard API for all TWAIN sources.
• Provide selection of sources for the user.
• Establish logical sessions between applications and sources.
• Keep track of sessions.
• Load or unload sources as demanded by the application.
• Pass all return codes from the source to the application.
• Maintain a default source.
• Act as a traffic cop to make sure that transactions and communications are routed to the appropriate sources.
Acquisition Layer
The acquisition layer contains the virtual device and interacts directly with the device driver. This virtual layer is called the source; the source can be local and logically connected to a local device (or remote, reached over the network, as in the figure), and it performs the following functions.


 Device control
 Acquisition of data from device
 Transfer of data
 Provision of user interface to control the device
Cue chunk
It identifies a series of positions in the waveform data stream.
Playlist chunk
It specifies a play order for a series of cue points
Associated chunk
Provides the ability to attach information such as labels.
Instrument chunk
The waveform format is a perfect file format for storing a sampled sound synthesizer's samples.

