Image Compression

Published: January 2017


ACKNOWLEDGEMENT

It is with great pleasure and a learning spirit that we bring out this project report. We take this opportunity to express our heartfelt gratitude for the support and guidance offered to us from various sources during the course and completion of our project. We are extremely grateful to the head of the institute, Dr. M.K. Jana, Sarabhai Institute of Science and Technology, Vellanad, for providing the necessary facilities. We are grateful to our head of department, Dr. C.G. Sukumaran Nair, for his valuable suggestions. It is our duty to acknowledge Miss Sudha S.K., Assistant Professor, and Miss Sheena for sharing their wealth of knowledge. We also wish to thank all faculty of the Computer Science Department for their guidance and help, and we extend our gratitude to the laboratory staff of the computer lab. Above all, we owe our gratitude to the Almighty for showering abundant blessings upon us. Last but not least, we wish to thank our parents and friends for helping us complete our mini project successfully.

ARUN K.R.
DEEPAK K.P.
MANOJ R.

DEPARTMENT OF CSE-SARABHAI INSTITUTE OF SCIENCE & TECHNOLOGY


INTRODUCTION
Image compression reduces the irrelevance and redundancy of image data so that the data can be stored or transmitted in an efficient form. Compressing an image is significantly different from compressing raw binary data. General-purpose compression programs can be used on images, but the result is less than optimal, because images have statistical properties that can be exploited by encoders designed specifically for them. There are two types of compression:

• Lossy compression
• Lossless compression

Lossy methods are especially suitable for natural images such as photographs, in applications where minor (sometimes imperceptible) loss of fidelity is acceptable in exchange for a substantial reduction in bit rate. Lossy compression that produces imperceptible differences may be called visually lossless. Methods for lossy compression include:








• Color-space reduction:- The color space is reduced to the most common colors in the image, which are listed in a color palette in the header of the compressed image. Each pixel then references only the index of a color in the palette. This method can be combined with dithering to avoid posterization.
• Chroma subsampling:- This takes advantage of the fact that the human eye perceives spatial changes in brightness more sharply than changes in color, by averaging or dropping some of the chrominance information in the image.
• Transform coding:- This is the most commonly used method. A Fourier-related transform such as the DCT, or the wavelet transform, is applied, followed by quantization and entropy coding.
• Fractal compression
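Chroma subsampling can be sketched in a few lines. This is an illustrative example, not part of the report's implementation (the helper names and the simple 2x2 averaging scheme are assumptions): the luma plane is kept at full resolution, while each chroma plane is averaged over 2x2 pixel groups, quartering its sample count with little visible loss.

```python
# Illustrative 4:2:0-style chroma subsampling sketch (hypothetical helpers).
# Height and width of the chroma plane are assumed even.

def subsample_420(chroma):
    """Average each 2x2 group of a chroma plane into one sample."""
    return [[(chroma[i][j] + chroma[i][j+1] +
              chroma[i+1][j] + chroma[i+1][j+1]) // 4
             for j in range(0, len(chroma[0]), 2)]
            for i in range(0, len(chroma), 2)]

def upsample(plane):
    """Nearest-neighbour reconstruction a decoder might use."""
    out = []
    for row in plane:
        wide = [v for v in row for _ in range(2)]  # double each column
        out.append(wide)
        out.append(list(wide))                     # double each row
    return out

cb = [[100, 102, 50, 52],
      [ 98, 100, 48, 50],
      [200, 200, 10, 10],
      [200, 200, 10, 10]]
small = subsample_420(cb)   # 2x2 plane: [[100, 50], [200, 10]]
```

The subsampled plane holds a quarter of the original samples; upsampling it back gives a plane that is close to, but not identical with, the original, which is why this is a lossy method.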


Lossless compression is preferred for archival purposes and often for medical imaging, technical drawings, clip art, and comics, because lossy compression methods, especially at low bit rates, introduce compression artifacts. Methods for lossless image compression include:

• Run-length encoding
• DPCM and predictive coding
• Entropy encoding
• Deflation
• Chain codes
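Run-length encoding, the first of the lossless methods listed, is simple enough to sketch completely. The function names are illustrative assumptions, not production code: consecutive identical pixels are collapsed into (value, count) pairs, which works well on images with long runs (icons, technical drawings) and poorly on noisy photographs.

```python
# Minimal run-length encoder/decoder sketch (hypothetical helper names).

def rle_encode(pixels):
    """Collapse consecutive repeats into (value, count) pairs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return [(v, n) for v, n in runs]

def rle_decode(runs):
    """Expand (value, count) pairs back into the pixel sequence."""
    return [v for v, n in runs for _ in range(n)]

row = [255, 255, 255, 0, 0, 7]
packed = rle_encode(row)              # [(255, 3), (0, 2), (7, 1)]
assert rle_decode(packed) == row      # lossless: exact round trip
```

Because decoding reproduces the input exactly, the method is lossless; the compression ratio depends entirely on how long the runs are.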

FRACTAL IMAGE COMPRESSION
Fractal compression is a lossy image compression method using fractals. The method is best suited for textures and natural images, relying on the fact that parts of an image often resemble other parts of the same image. Fractal algorithms convert these parts into mathematical data called "fractal codes", which are used to recreate the encoded image. Fractal compression differs from pixel-based compression schemes such as JPEG, GIF and MPEG in that no pixels are saved. Once an image has been converted into fractal code, it can be recreated at any screen size without the loss of sharpness that occurs in conventional compression schemes.

A fractal is a structure made up of similar forms and patterns that occur at many different sizes. The term fractal was first used by Benoit Mandelbrot to describe repeating patterns that he observed in many different structures. These patterns appeared nearly identical in form at any scale and occurred widely in nature. Mandelbrot also discovered that these fractals could be described in mathematical terms and could be generated from very small, finite algorithms and data.

Fractal encoding is largely used to convert bitmap images to fractal codes. Fractal decoding is just the reverse, in which a set of fractal codes is converted to a bitmap. The encoding process is extremely computationally intensive: millions or billions of iterations are required to find the fractal patterns in an image. Depending on the resolution and contents of the input bitmap, and the output-quality, compression-time, and file-size parameters selected, compressing a single image could take anywhere from a few seconds to a few hours (or more), even on a very fast computer. Decoding a fractal image is a much simpler process: the hard work was performed finding all the fractals during encoding, and all the decoding process needs to do is interpret the fractal codes and translate them into a bitmap image.


Two substantial benefits are immediately realized by converting conventional bitmap images to fractal data. The first is the ability to scale a fractal image up or down in size without introducing the artifacts or loss of detail that occur in bitmap images. This process of "fractal zooming" is independent of the resolution of the original bitmap, and the zooming is limited only by the amount of available memory in the computer. The second benefit is that the physical data used to store fractal codes is much smaller than the original bitmap data. In fact, it is not uncommon for fractal images to be more than 100 times smaller than their bitmap sources. It is this aspect of the technology, called fractal compression, that has generated the greatest interest within the computer imaging industry.

The process of matching fractals does not look for exact matches, but for "best fit" matches based on the compression parameters (encoding time, image quality, and size of output). The encoding process can, however, be controlled to the point where the image is "visually lossless": you should not be able to notice where the data was lost.

Fractal compression differs from other lossy compression methods, such as JPEG, in a number of ways. JPEG achieves compression by discarding image data that is not required for the human eye to perceive the image; the resulting data is then further compressed using a lossless method. To achieve greater compression ratios, more image data must be discarded, resulting in a poorer-quality image with a pixelized (blocky) appearance.

Encoding
Encoding proceeds as follows:

1. Select an image and divide it into small, non-overlapping, square blocks, typically called "parent blocks".
2. Divide each parent block into 4 individual blocks, or "child blocks".
3. Compare each child block against a subset of all possible overlapping blocks of parent-block size. The parent-sized blocks must be shrunk to child size for the comparison to work.
4. Determine which larger block has the lowest difference from the child block, according to some measure.
5. Compute the affine transform that maps that block onto the child block.
6. Store the location of the parent block (or transform block), the affine transform components, and the related child block in a file.
7. Repeat for each child block.

This requires a very large number of comparisons and calculations: for a 256x256 original image with 16x16 parent blocks, there are 241 x 241 = 58,081 candidate block positions to compare for each child block.

Decoding reverses the process:

1. Read in the child-block and transform-block positions, the transforms, and the size information.
2. Start from any blank image of the same size as the original.
3. For each child block, apply the stored transform to the specified transform block.
4. Overwrite the child block's pixel values with the transformed block's pixel values.
5. Repeat until acceptable image quality is reached.
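The encoding and decoding steps above can be sketched in miniature. This is a hedged toy example, not the project's actual implementation: the block sizes are tiny, the affine transform is reduced to a fixed contrast scale plus a brightness offset (a real encoder also searches rotations and flips and fits the scale per block), and every function name is an assumption.

```python
# Toy fractal block-matching sketch of the encode/decode steps above.

S = 0.5  # fixed contrast scale; s < 1 keeps the decode iteration contractive

def downsample(block):
    """Average 2x2 pixel groups, halving the side of a square block."""
    n = len(block) // 2
    return [[(block[2*i][2*j] + block[2*i][2*j+1] +
              block[2*i+1][2*j] + block[2*i+1][2*j+1]) / 4.0
             for j in range(n)] for i in range(n)]

def tiles(img, size):
    """Yield (row, col, block) for each non-overlapping size x size tile."""
    for r in range(0, len(img), size):
        for c in range(0, len(img[0]), size):
            yield r, c, [row[c:c+size] for row in img[r:r+size]]

def encode(img, child=2, parent=4):
    """For each child block, store the best-matching parent and an offset."""
    parents = [(r, c, downsample(b)) for r, c, b in tiles(img, parent)]
    codes = []
    for r, c, cb in tiles(img, child):
        mean_cb = sum(map(sum, cb)) / (child * child)
        best = None
        for pr, pc, pb in parents:
            # brightness offset so the shrunken parent matches the child mean
            off = mean_cb - S * sum(map(sum, pb)) / (child * child)
            err = sum((cb[i][j] - (S * pb[i][j] + off)) ** 2
                      for i in range(child) for j in range(child))
            if best is None or err < best[0]:
                best = (err, pr, pc, off)
        codes.append((r, c, best[1], best[2], best[3]))
    return codes

def decode(codes, h, w, child=2, parent=4, iters=16):
    """Start from any blank image and apply the stored transforms repeatedly."""
    img = [[0.0] * w for _ in range(h)]
    for _ in range(iters):
        nxt = [[0.0] * w for _ in range(h)]
        for r, c, pr, pc, off in codes:
            pb = downsample([row[pc:pc+parent] for row in img[pr:pr+parent]])
            for i in range(child):
                for j in range(child):
                    nxt[r+i][c+j] = S * pb[i][j] + off
        img = nxt
    return img
```

For example, a flat 4x4 image produces one parent block and four child codes, and iterated decoding from an all-zero start converges back to (approximately) the original, illustrating why any starting image works.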


Advantages
• Reduces the image size while preserving the appearance of the image
• Fractal image compression is well suited for applications requiring fast access to high-quality images
• The compressed image can be transmitted with minimal delay
• Fast decompression speed

PROCESSING ENVIRONMENT

HARDWARE REQUIREMENTS
Processor : Intel Pentium IV
RAM : 256 MB
Hard Disk : 40 GB
Display : 15" Color Monitor
Screen Resolution : 800 x 600 pixels

SOFTWARE REQUIREMENTS
Operating System : Windows XP
Front End : Visual Studio 2005
Platform : ASP.NET with C#
Backend : SQL Server 2005
Web Browser : Internet Explorer 6.0


SOFTWARE DESCRIPTIONS

Microsoft .NET Framework
The Microsoft .NET Framework is a software component that can be added to, or is included with, the Microsoft Windows operating system. It provides a large body of pre-coded solutions to common program requirements, and manages the execution of programs written specifically for the framework. The pre-coded solutions that form the framework's class library cover a large range of programming needs in areas including user interface, data access, database connectivity, cryptography, web application development, numeric algorithms, and network communications. The functions of the class library are used by programmers, who combine them with their own code to produce applications.

Programs written for the .NET Framework execute in a software environment that manages the program's runtime requirements. This runtime environment, which is also a part of the .NET Framework, is known as the Common Language Runtime (CLR). The CLR provides the appearance of an application virtual machine, so that programmers need not consider the capabilities of the specific CPU that will execute the program. The CLR also provides other important services such as security mechanisms, memory management, and exception handling.

The class library and the CLR together compose the .NET Framework. The framework is intended to make it easier to develop computer applications and to reduce the vulnerability of applications and computers to security threats.


First released in 2002, the framework is included with Windows XP SP2, Windows Server 2003 and Windows Vista, and can be installed on most older versions of Windows.

Microsoft .NET Framework was designed with several intentions:


• Interoperability - Because interaction between new and older applications is commonly required, the .NET Framework provides means to access functionality implemented in programs that execute outside the .NET environment. Access to COM components is provided in the System.Runtime.InteropServices and System.EnterpriseServices namespaces of the framework, and access to other functionality is provided using the P/Invoke feature.



• Common Runtime Engine - Programming languages on the .NET Framework compile into an intermediate language known as the Common Intermediate Language, or CIL (formerly known as Microsoft Intermediate Language, or MSIL). In Microsoft's implementation, this intermediate language is not interpreted, but rather compiled in a manner known as just-in-time compilation (JIT) into native code. The combination of these concepts is called the Common Language Infrastructure (CLI), a specification; Microsoft's implementation of the CLI is known as the Common Language Runtime (CLR).



• Language Independence - The .NET Framework introduces a Common Type System, or CTS. The CTS specification defines all possible data types and programming constructs supported by the CLR and how they may or may not interact with each other. Because of this feature, the .NET Framework supports development in multiple programming languages. This is discussed in more detail in the .NET languages section below.


• Base Class Library - The Base Class Library (BCL), sometimes referred to as the Framework Class Library (FCL), is a library of types available to all languages using the .NET Framework. The BCL provides classes which encapsulate a number of common functions, including file reading and writing, graphic rendering, database interaction and XML document manipulation.



• Simplified Deployment - Installation of computer software must be carefully managed to ensure that it does not interfere with previously installed software, and that it conforms to increasingly stringent security requirements. The .NET Framework includes design features and tools that help address these requirements.



• Security - .NET allows code to be run with different trust levels without the use of a separate sandbox.

The Microsoft .NET architecture comprises:

Common Language Infrastructure (CLI):


The most important component of the .NET Framework lies within the Common Language Infrastructure, or CLI. The purpose of the CLI is to provide a language-agnostic platform for application development and execution, including, but not limited to, components for exception handling, garbage collection, security, and interoperability. Microsoft's implementation of the CLI is called the Common Language Runtime, or CLR. The CLR is composed of four primary parts:

1. Common Type System (CTS)
2. Common Language Specification (CLS)
3. Just-In-Time Compiler (JIT)
4. Virtual Execution System (VES)

Assemblies
The intermediate CIL code is housed in .NET assemblies, which for the Windows implementation means a Portable Executable (PE) file (EXE or DLL). Assemblies are the .NET unit of deployment, versioning and security. The assembly consists of one or more files, but one of these must contain the manifest, which has the metadata for the assembly. The complete name of an assembly contains its simple text name, version number, culture and public key token; it must contain the name, but the others are optional. The public key token is generated when the assembly is created, and is a value that uniquely represents the name and contents of all the assembly files, and a private key known only to the creator of the assembly.


Two assemblies with the same public key token are guaranteed to be identical. If an assembly is tampered with (for example, by hackers), the public key can be used to detect the tampering.

Metadata
All CIL is self-describing through .NET metadata. The CLR checks metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata also contains all the information about an assembly.

Base Class Library (BCL)
The Base Class Library (BCL), sometimes incorrectly referred to as the Framework Class Library (FCL) (which is a superset including the Microsoft.* namespaces), is a library of classes available to all languages using the .NET Framework. The BCL provides classes which encapsulate a number of common functions such as file reading and writing, graphic rendering, database interaction, XML document manipulation, and so forth. The BCL is much larger than other libraries, but has much more functionality in one package.

Security
.NET has its own security mechanism, with two general features: Code Access Security (CAS), and validation and verification. Code Access Security is based on evidence that is associated with a specific assembly. Code Access Security uses evidence to determine the permissions granted to the code. Other code can demand that calling code is granted a specified permission. The demand causes the CLR to perform a call stack walk: every

assembly of each method in the call stack is checked for the required permission, and if any assembly is not granted the permission then a security exception is thrown.

When an assembly is loaded, the CLR performs various tests. Two such tests are validation and verification. During validation the CLR checks that the assembly contains valid metadata and CIL, and that the internal tables are correct. Verification is less exact: the verification mechanism checks whether the code does anything 'unsafe'. The algorithm used is quite conservative, so code that is in fact 'safe' sometimes fails verification. Unsafe code will only be executed if the assembly has the 'skip verification' permission, which generally means code that is installed on the local machine.


[Figure: the .NET Framework architecture stack. .NET applications (Visual Basic .NET, Visual C# .NET, Visual J#, and other languages) sit on top of the .NET Framework Class Library (Windows Forms and ASP.NET classes), which runs on the Common Language Runtime (CTS, managed applications), layered over the operating system and hardware.]

Microsoft Visual Studio
Microsoft Visual Studio is Microsoft's flagship software development product for computer programmers. It centers on an integrated development environment which lets programmers create standalone applications, web sites, web applications, and web services that run on any platform supported by Microsoft's .NET Framework (for all versions after 6). Supported platforms include Microsoft Windows servers and workstations, Pocket PC, Smart Phones, and World Wide Web browsers.

Visual Studio includes the following:

• Visual Basic (.NET)
• Visual C++
• Visual C#
• Visual J#
• ASP.NET

Express editions of Visual Studio have been released by Microsoft for lightweight streamlined development and novice developers. The Express editions include:

• Visual Basic 2005 Express Edition
• Visual C# 2005 Express Edition
• Visual C++ 2005 Express Edition
• Visual J# 2005 Express Edition
• Visual Web Developer 2005 Express Edition
Visual Studio 2005, codenamed Whidbey (a reference to Whidbey Island in Puget Sound), was released online in October 2005 and hit the stores a couple of weeks later. Microsoft removed the ".NET" moniker from Visual Studio 2005 (as well as from every other product with .NET in its name), but it still primarily targets the .NET Framework, which was upgraded to version 2.0. Visual Studio 2005's internal version number is 8.0, while its file format version is 9.0.


Visual Studio 2005 also added extensive 64-bit support. While the development environment itself is only available as a 32-bit application, Visual C++ 2005 supports compiling for x86-64 (AMD64 and Intel 64) as well as IA-64 (Itanium). The Platform SDK included 64-bit compilers and 64-bit versions of the libraries.

Visual Studio 2005 is available in several editions, which differ significantly from previous versions: Express, Standard, Professional, Tools for Office, and a set of five Visual Studio Team System editions. The latter are provided in conjunction with MSDN Premium subscriptions and cover four major roles of software development: architects, software developers, testers, and database professionals. The combined functionality of the four Team System editions is provided in a Team Suite edition.

Express Editions were introduced for amateurs, hobbyists, and small businesses, and are available as a free download from Microsoft's web site. The Express Editions lack many of the more advanced development tools and extensibility of the other editions, such as just-in-time JScript debugging.

Microsoft Visual C#.Net

By design, C# is the programming language that most directly reflects the underlying Common Language Infrastructure (CLI). Most of C#'s intrinsic types correspond to value-types implemented by the CLI framework. C# was created as an object-oriented programming (OOP) language. Other programming languages include object-oriented features, but very few are fully object-oriented.


C# differs from C and C++ in many ways, including:


• There are no global variables or functions; all methods and members must be declared within classes.
• Local variables cannot shadow variables of the enclosing block, unlike in C and C++. Variable shadowing is often considered confusing by C++ texts.
• C# supports a strict Boolean type, bool. Statements that take conditions, such as while and if, require an expression of Boolean type. While C and C++ also have a Boolean type, it can be freely converted to and from integers, and expressions such as if(a) require only that a is convertible to bool, allowing a to be an int or a pointer. C# disallows this "integer meaning true or false" approach on the grounds that forcing programmers to use expressions that return exactly bool prevents certain types of programming mistakes.
• In C#, pointers can only be used within blocks specifically marked as unsafe, and programs with unsafe code need appropriate permissions to run. Most object access is done through safe references, which cannot be made invalid. An unsafe pointer can point to an instance of a value type, an array, a string, or a block of memory allocated on the stack. Code that is not marked as unsafe can still store and manipulate pointers through the System.IntPtr type, but cannot dereference them.
• Managed memory cannot be explicitly freed; it is automatically garbage collected. Garbage collection addresses memory leaks. C# also provides direct support for deterministic finalization with the using statement (supporting the Resource Acquisition Is Initialization idiom).
• Multiple inheritance is not supported, although a class can implement any number of interfaces. This was a design decision by the language's lead architect to avoid complication, avoid "dependency hell," and simplify architectural requirements throughout the CLI.
• C# is more type-safe than C++. The only implicit conversions by default are safe conversions, such as widening of integers and conversion from a derived type to a base type. This is enforced at compile time, during JIT, and, in some cases, at runtime. There are no implicit conversions between Booleans and integers, nor between enumeration members and integers (except for the literal 0, which can be implicitly converted to an enumerated type), and any user-defined conversion must be explicitly marked as explicit or implicit, unlike C++ copy constructors (which are implicit by default) and conversion operators (which are always implicit).
• Enumeration members are placed in their own namespace.
• Accessors called properties can be used to modify an object with syntax that resembles C++ member field access. In C++, declaring a member public enables both reading and writing to that member, and accessor methods must be used if more fine-grained control is needed. In C#, properties allow control over member access and data validation.
• Full type reflection and discovery is available.

Unified type system
C# has a unified type system. This means that all types, including primitives such as integers, are subclasses of the System.Object class. For example, every type inherits a ToString() method. For performance reasons, primitive types (and value types in general) are internally allocated on the stack. Boxing and unboxing allow one to translate


primitive data to and from its object form. Effectively, this makes the primitive types a subtype of the Object type.

C# allows the programmer to create user-defined value types using the struct keyword. From the programmer's perspective, they can be seen as lightweight classes. Unlike regular classes, and like the standard primitives, such value types are allocated on the stack rather than on the heap. They can also be part of an object (either as a field or boxed), or stored in an array, without the memory indirection that normally exists for class types.

Structs also come with a number of limitations. Because structs have no notion of a null value and can be used in arrays without initialization, they are implicitly initialized to default values (normally by filling the struct's memory space with zeroes, though the programmer can specify explicit default values to override this). The programmer can define additional constructors with one or more arguments. Structs also lack a virtual method table, and because of that (and their fixed memory footprint), they cannot allow inheritance (but can implement interfaces).

Features of C#:
• C# is simple.
• C# is modern.
• C# is object-oriented.
• C# is powerful and flexible.
• C# is a language of few words.
• C# is modular.


C# 2.0 new language features
New features in C# for the .NET SDK 2.0 are:


• Partial classes allow a class implementation to be split across more than one file. This permits breaking down very large classes, and is useful when some parts of a class are automatically generated.
• Generics, or parameterized types: a .NET 2.0 feature supported by C#. Unlike C++ templates, .NET parameterized types are instantiated at runtime rather than by the compiler; hence they can be cross-language, whereas C++ templates cannot. They support some features not supported directly by C++ templates, such as type constraints on generic parameters via interfaces. On the other hand, C# does not support non-type generic parameters. Unlike generics in Java, .NET generics use reification to make parameterized types first-class objects in the CLI virtual machine, which allows for optimizations and preservation of the type information.
• Static classes, which cannot be instantiated and only allow static members. This is similar to the concept of a module in many procedural languages.
• A new form of iterator that provides generator functionality, using a yield return construct similar to yield in Python.
• Anonymous delegates providing closure functionality.
• Covariance and contravariance for signatures of delegates.
• The accessibility of property accessors can be set independently.
• Nullable value types (denoted by a question mark, e.g. int? i = null;), which add null to the set of allowed values for any value type.
• Coalesce operator: (??) returns the first of its operands that is not null. The primary use of this operator is to assign a nullable type to a non-nullable type with easy syntax.

C# versus Java
C# and Java are both new-generation languages descended from a line including C and C++. Each includes advanced features, like garbage collection, which remove some of the low-level maintenance tasks from the programmer. In many areas they are syntactically similar.

Both C# and Java compile initially to an intermediate language: C# to Microsoft Intermediate Language (MSIL), and Java to bytecode. In each case the intermediate language can be run by interpretation or just-in-time compilation on an appropriate 'virtual machine'. In C#, however, more support is given for the further compilation of the intermediate-language code into native code.

C# contains more primitive data types than Java, and also allows more extension of the value types. For example, C# supports 'enumerations', types limited to a defined set of named constants, and 'structs', which are user-defined value types. Unlike Java, C# has the useful feature that various operators can be overloaded.

Like Java, C# gives up on multiple class inheritance in favour of a single-inheritance model extended by the multiple inheritance of interfaces. However, polymorphism is handled in a more complicated fashion, with base-class methods either 'overriding' or 'hiding' superclass methods.

C# also uses 'delegates', type-safe method pointers, which are used to implement event handling. In Java, multi-dimensional arrays are implemented solely


with single-dimensional arrays, where arrays can be members of other arrays. In addition to such jagged arrays, however, C# also implements genuine rectangular arrays.

Microsoft SQL Server 2005
Microsoft SQL Server 2005 is a full-featured relational database management system (RDBMS) that offers a variety of administrative tools to ease the burdens of database development, maintenance and administration. SQL Server 2005 is a powerful tool for turning information into opportunity. The following are some of the more common tools provided by SQL Server:


• Enterprise Manager is the main administrative console for SQL Server installations. It provides a graphical "bird's-eye" view of all of the SQL Server installations on your network.
• Query Analyzer offers a quick method for performing queries against any of our SQL Server databases. It's a great way to quickly pull information out of a database.
• SQL Profiler provides a window into the inner workings of your database.
• Service Manager is used to control the MS SQL Server (the main SQL Server process), MSDTC (Microsoft Distributed Transaction Coordinator), and SQL Server Agent processes.
• Data Transformation Services (DTS) provide an extremely flexible method for importing and exporting data between a Microsoft SQL Server installation and a large variety of other formats.

The following are some of the features of SQL Server:

• High availability - Maximize the availability of your business applications with log shipping, online backups, and failover clusters.
• Scalability - Scale your applications up to 32 CPUs and 64 gigabytes (GB) of RAM.
• Security - Ensure your applications are secure in any networked environment, with role-based security and file and network encryption.
• Distributed partitioned views - Partition your workload among multiple servers for additional scalability.
• Data Transformation Services - Automate routines that extract, transform, and load data from heterogeneous sources.
• Simplified database administration - Automatic tuning and maintenance features enable administrators to focus on other critical tasks.
• Improved developer productivity - User-defined functions, cascading referential integrity, and the integrated Transact-SQL debugger allow you to reuse code and simplify the development process.
• Application hosting - With multi-instance support, SQL Server enables you to take full advantage of your hardware investments, so that multiple applications can be run on a single server.

SQL is the set of statements that all programs and users must use to access data within the

database. Application programs in turn must use SQL when executing the user’s request. The benefits of SQL are:


• SQL is a non-procedural language, so it allows processing sets of records rather than one record at a time.
• It provides automatic navigation to the data.
• It is used for all database activity by the full range of users.
• It provides statements for a variety of tasks covering all activities regarding a database.

SQL stands for Structured Query Language. SQL is a query language, but it is not without structure: it has rules for grammar and syntax, but these are basically the normal rules of English and can be readily understood. SQL statements can be classified as follows:

1. Queries
A query always begins with the keyword SELECT and is used to retrieve data from the database in any combination and in any order. Query statements cannot modify or manipulate the database.

2. Data Manipulation Language (DML)
The purpose of DML is to change the data in the database. Data in the database can be changed or manipulated in three ways:

INSERT : inserting new rows into the database
UPDATE : updating existing rows in the database
DELETE : deleting existing rows from the database
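As a concrete sketch, the statements above can be exercised from Python's standard library using an in-memory SQLite database (used here instead of SQL Server purely so the example is self-contained); the `users` table and its columns are invented for the illustration.

```python
import sqlite3

# In-memory SQLite stands in for SQL Server here; the DML statements
# themselves are ordinary SQL. Table and column names are illustrative.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")

# INSERT: add new rows to the table
cur.execute("INSERT INTO users (name, city) VALUES (?, ?)", ("Arun", "Vellanad"))
cur.execute("INSERT INTO users (name, city) VALUES (?, ?)", ("Deepak", "Trivandrum"))

# UPDATE: change an existing row
cur.execute("UPDATE users SET city = ? WHERE name = ?", ("Kochi", "Deepak"))

# DELETE: remove existing rows
cur.execute("DELETE FROM users WHERE name = ?", ("Arun",))

# A query (SELECT) retrieves data without modifying the database
rows = cur.execute("SELECT name, city FROM users").fetchall()
conn.close()
```

The parameter placeholders (`?`) keep the data separate from the SQL text, which is the usual way an application program issues these statements.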

3. Data Definition Language (DDL)
The main purpose of DDL is to create, modify, and drop database objects, namely relations, indexes, views, triggers, and so on.

4. Data Control Language (DCL)
DCL is used to provide privacy and security for the database. DCL statements allow the user to grant and revoke privileges, which are needed for guaranteed, controlled data sharing.
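The DDL side can be sketched the same way, again using Python's built-in SQLite for portability. SQLite has no privilege system, so the DCL statements (GRANT, REVOKE) cannot be demonstrated with it; the object names below are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# CREATE: define a relation, an index, and a view (all DDL objects)
cur.execute("CREATE TABLE images (id INTEGER PRIMARY KEY, filename TEXT, size_kb INTEGER)")
cur.execute("CREATE INDEX idx_filename ON images (filename)")
cur.execute("CREATE VIEW small_images AS SELECT filename FROM images WHERE size_kb < 100")

# The schema catalogue lists the objects just created
objects = sorted(name for (name,) in cur.execute(
    "SELECT name FROM sqlite_master WHERE name NOT LIKE 'sqlite_%'"))

# DROP: remove an object definition entirely
cur.execute("DROP VIEW small_images")
remaining = sorted(name for (name,) in cur.execute(
    "SELECT name FROM sqlite_master WHERE name NOT LIKE 'sqlite_%'"))
conn.close()
```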

SYSTEM ANALYSIS

INTRODUCTION
A system is simply a set of components that work together to accomplish an objective. System analysis is an important activity that takes place when we attempt to build a new system or modify an existing one. Analysis comprises a detailed study of the various operations performed by a system and their relationships within and outside the system. It is the process of gathering and interpreting facts, diagnosing problems, and improving the system using the information obtained. The objectives of system analysis include the following:
• Identifying the user's needs.
• Evaluating the system concept.
• Performing economic and technical analysis.
• Establishing cost and schedule constraints.

System analysis is finding out what happens in the existing system, deciding what changes and new features are required, and defining exactly what the proposed system must do. This process is largely concerned with determining, developing, and agreeing on the user's requirements. It provides a prime opportunity to communicate well with the user and to build a joint understanding of what the system should be doing, together with a view of the relative importance of the system facilities, using interactive techniques. The following steps are performed to analyze the system:


• Learn the details of the existing system as well as the procedures currently taking place.
• Develop an insight into future demands of the organization as a result of growth, changing customer needs, evolving financial structure, the introduction of new technology, government regulatory changes, and other changes.
• Document the details of the current system and procedures for discussion and review by others.
• Evaluate the efficiency and effectiveness of the current system and procedures, taking into account the impact of anticipated future demands.
• Recommend any necessary revisions and enhancements to the current system. If appropriate, an entirely new system may be proposed.
• Document the new system's features at a level of detail that allows others to understand its components.
• Involve directors and employees in the entire process, both to draw on their expertise and knowledge of the current system and to learn their ideas, feelings, and opinions about requirements for the new or changed system.


FEASIBILITY STUDY

The main objective of a feasibility study is to test the operational, technical, and economic feasibility of developing the system. Preliminary investigation examines project feasibility, that is, the likelihood that the system will be useful to the organization. This is done by investigating the existing system in the area under investigation and generating ideas about the new system. A feasibility study is conducted to identify the system that best meets all the requirements. This entails identifying and describing candidate systems, evaluating the proposed system, and selecting the best system for the job. Three tests of feasibility are studied:
• Operational Feasibility
• Technical Feasibility
• Financial and Economic Feasibility

Operational Feasibility
Proposed projects are beneficial only if they can be turned into information systems that will meet the organization's operating requirements. This test of feasibility asks whether the system will work when it is developed and installed. The project 'Image Compression' is aimed to be used as general-purpose software. One of the main problems faced during the development of a new system is getting acceptance from users. Being general-purpose software, there is no resistance from users, as this software is extremely beneficial to them.

Technical Feasibility
This is the study of resource availability that may affect the ability to achieve an acceptable system. The system must be evaluated from the technical viewpoint first. The assessment of this feasibility must be based on an outline design of the system requirements in terms of input, output, and procedures. Having identified the outline of the system, the investigation must go on to suggest the type of equipment required, the method of developing the system, and the method of running the system. The system being developed is used by the users as a means to communicate with each other. It should be able to store a large amount of data and should provide an attractive graphical interface. To attain these requirements, the technologies used in this project are the Microsoft .NET Framework, Microsoft SQL Server 2005, and IIS Server.

Financial and Economic Feasibility
The developing system must be justified by cost and benefit criteria to ensure that effort is concentrated on projects which will give the best return at the earliest. One of the factors that affects the development of a new system is the cost it would require. Since the system is developed as part of our study, there is no monetary cost to be spent on the proposed system.

DATAFLOW DIAGRAMS
A graphical representation is used to describe and analyze the movement of data through a system, manual or automated, including the processes, the storing of data, and delays in the system. Data flow diagrams are the central tool and the basis from which other components are developed.

The transformation of data from input to output through processes may be described logically and independently of the physical components associated with the system. Such diagrams are termed logical data flow diagrams; physical data flow diagrams, by contrast, show the actual implementation and the movement of data between people, departments, and workstations. Data Flow Diagrams (DFDs) are one of the most important modeling tools used in system design. A DFD shows the flow of data through the different processes in the system. Data flow diagrams can be used to provide a clear representation of any business function. The technique starts with an overall picture of the business and continues by analyzing each of the functional areas of interest. This analysis can be carried out to precisely the level of detail required. DFDs illustrate the flow of information. They are hardware independent and do not reflect decision points; rather, they demonstrate the information and how it flows between specific processes in a system. They provide one kind of documentation for reports. Data flow diagrams are made up of a number of symbols which represent system components. The data flow modeling method uses four kinds of symbols:
• Processes
• Data stores
• Data flows
• External entities

Process
A process shows the work of the system. Each process has one or more data inputs and produces one or more data outputs. Processes are represented by rounded rectangles in a data flow diagram. Each process has a unique name and number, which appear inside the rectangle that represents the process. A process name should be unambiguous and should convey as much meaning as possible without being too long.

Data Stores
A data store is a repository of data. Processes can enter data into a store or retrieve data from it. Each data store has a unique name.

Data Flows
Data flows show the passage of data in the system and are represented by lines joining system components. An arrow indicates the direction of flow, and the line is labeled with the name of the data flow.

External Entities
External entities are outside the system, but they either supply data into the system or use the system's output. They are entities over which the designer has no control. They may be an organization's customers or other bodies with which the system interacts. External entities that supply data into the system are sometimes called sources; external entities that use the system's data are sometimes called sinks. They are represented by rectangles in data flow diagrams.

Data Flow Diagram Symbols

A rectangle defines a source or destination of system data.

An arrow identifies data flow. It is a pipeline through which the information flows.

A circle or a bubble represents a process that transforms incoming data flow(s) into outgoing data flow(s).

An open rectangle represents a data store or a temporary repository of data.

UML DIAGRAMS

Each UML diagram is designed to let developers and customers view a software system from a different perspective and in varying degrees of abstraction. UML diagrams commonly created in visual modeling tools include:

• Use Case Diagram
• Class Diagram
• Activity Diagram
• Interaction Diagrams
• State Diagram

Use Case Diagrams
A use case is a set of scenarios describing an interaction between a user and a system. A use case diagram displays the relationship among actors and use cases, which are its two main components. An actor represents a user or another system that will interact with the system you are modeling. A use case is an external view of the system that represents some action the user might perform in order to complete a task.

Class Diagrams
Class diagrams are widely used to describe the types of objects in a system and their relationships. They model class structure and contents using design elements such as classes, packages, and objects. Class diagrams describe three different perspectives when designing a system: conceptual, specification, and implementation.

Activity Diagrams
Activity diagrams describe the workflow behavior of a system. Activity diagrams are similar to state diagrams because activities are the state of doing something. The diagrams describe the state of activities by showing the sequence of activities performed. Activity diagrams can show activities that are conditional or parallel.


DFD

SOURCE -> IMAGE COMPRESSION -> DESTINATION

Fig: Level 0 DFD
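The source-to-destination flow in the level 0 diagram can be illustrated with a toy lossless scheme. Run-length encoding is used here only as a stand-in for the compression process; it is not the method the project itself implements.

```python
# Toy lossless compressor illustrating the source -> compression ->
# destination flow of the level 0 DFD. Runs of identical bytes are
# collapsed into [value, count] pairs and later expanded back.

def rle_encode(data):
    """Collapse runs of identical bytes into [value, count] pairs."""
    encoded = []
    for b in data:
        if encoded and encoded[-1][0] == b:
            encoded[-1][1] += 1
        else:
            encoded.append([b, 1])
    return encoded

def rle_decode(encoded):
    """Expand [value, count] pairs back into the original bytes."""
    return bytes(b for b, count in encoded for _ in range(count))

source = b"\x00\x00\x00\xff\xff\x00"   # hypothetical pixel data
compressed = rle_encode(source)        # what travels to the destination
restored = rle_decode(compressed)      # the destination recovers the source
```

Because the scheme is lossless, the destination recovers the source exactly; a lossy scheme would trade some fidelity for a smaller encoding.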

USER -> LOGIN VERIFICATION -> LOGIN / NEW REGISTER

Fig: Level 1 DFD (LOGIN)

TESTING

SYSTEM TESTING
System testing is the stage of implementation aimed at ensuring that the system works accurately and efficiently as expected before live operation commences. It certifies that the whole set of programs hangs together. System testing requires a test plan that consists of several key activities and steps: program, string, system, and user acceptance testing. The implementation of the newly designed package is important in adopting a successful new system. Testing is an important stage in software development. The system test during implementation should confirm that all is correct, and it is an opportunity to show the users that the system works as expected. Testing is a set of activities that can be planned in advance and conducted systematically, aimed at ensuring that the system works accurately and efficiently before live operation commences.

Testing Objectives
Testing is the process of executing a program with the intent of finding errors.

• A good test is one that has a high probability of finding a yet undiscovered error.
• A successful test is one that uncovers an undiscovered error.

There are different types of testing methods available:

Unit Testing

In this testing we test each module individually and then integrate them into the overall system. Unit testing focuses verification efforts on the smallest unit of software design: the module. It is also known as 'module testing'. The modules of the system are tested separately, and the testing is carried out during the programming stage itself. In this step, each module is found to work satisfactorily with regard to the expected output from the module. There are validation checks for verifying the data input given by the user, which makes it easy to find errors and debug the system.
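A unit test for a single small module might look like the following sketch, written with Python's unittest framework for brevity (the project itself is in C#); the function under test and its name are hypothetical stand-ins for a project module.

```python
import unittest

# The unit under test: a deliberately small, hypothetical module function.
def compression_ratio(original_size, compressed_size):
    if compressed_size <= 0:
        raise ValueError("compressed size must be positive")
    return original_size / compressed_size

class CompressionRatioTest(unittest.TestCase):
    def test_expected_output(self):
        # the module works satisfactorily with regard to its expected output
        self.assertEqual(compression_ratio(100, 25), 4.0)

    def test_input_validation(self):
        # validation check on the input given to the unit
        with self.assertRaises(ValueError):
            compression_ratio(100, 0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(CompressionRatioTest))
```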

Integration Testing
Data can be lost across an interface; one module can have an adverse effect on another; and sub-functions, when combined, may not produce the desired major function. Integration testing is the systematic technique for constructing the program structure while conducting tests to uncover errors associated with the interfaces. This testing was done with sample data. The need for integration testing is to find the overall system performance.

Black Box Testing
This testing attempts to find errors in the following areas or categories: incorrect or missing functions, interface errors, errors in data structures or external database access, performance errors, and initialization and termination errors.

Validation Testing

At the culmination of black box testing, the software is completely assembled as a package, interface errors have been uncovered and corrected, and the final series of software tests, validation testing, begins. Validation testing can be defined in many ways, but a simple definition is that validation succeeds when the software functions in a manner that can be reasonably accepted by the customer. After validation testing has been conducted, one of two possible conditions exists:
1. The function or performance characteristics conform to specification and are accepted.
2. A deviation from specification is uncovered and a deficiency list is created.

Output Testing
After performing validation testing, the next step is output testing of the proposed system, since no system can be useful if it does not produce the required data in the specified format. The output displayed or generated by the system under consideration is tested by asking the users about the format displayed. The output format on the screen is found to be correct, as the format was designed in the system according to the users' needs. Hence output testing did not result in any correction to the system.

User Acceptance Testing
User acceptance of the system is the key factor for the success of the system. The system under consideration was tested for user acceptance by constantly keeping in touch with prospective system users at the time of development and making changes wherever required. This is done with regard to the following points:
• Output screen design.
• Input screen design.
• Menu-driven system.

White Box Testing
White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Every independent path in a module is exercised at least once; all logical decisions are exercised at least once; all loops are executed at their boundaries and within their operational bounds; and internal data structures are exercised to ensure their validity.
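Deriving test cases from control structure can be sketched as follows, in Python for brevity; the function is invented for the illustration, and each input is chosen so that a different branch, or a loop boundary, is exercised at least once.

```python
# A small function with one loop and a three-way decision. The inputs
# below are chosen from its control structure so that every branch and
# both loop boundaries (zero iterations and one iteration) are exercised.

def clamp_all(values, low, high):
    result = []
    for v in values:                 # loop: cover empty and non-empty input
        if v < low:                  # path 1: below the range
            result.append(low)
        elif v > high:               # path 2: above the range
            result.append(high)
        else:                        # path 3: within the range
            result.append(v)
    return result

# One test input per independent path
paths = {
    "empty_loop": clamp_all([], 0, 10),
    "below": clamp_all([-5], 0, 10),
    "above": clamp_all([15], 0, 10),
    "within": clamp_all([5], 0, 10),
}
```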

SYSTEM IMPLEMENTATION
System implementation is the stage of the project when the theoretical design is turned into a working system. If this stage is not carefully planned and controlled, it can cause chaos. Implementation is thus the most crucial stage in achieving a successful new system and in giving the users confidence that the new system will work efficiently and accurately. It is less creative than system design and is primarily concerned with user training and site preparation.

Depending on the nature of the system, extensive user training may be required. Implementation simply means converting the new system design into operation. An important aspect of the system analyst's job is to make sure that the new design is implemented to established standards. The three types of implementation are:
• Implementation of a new computer system to replace an existing one.
• Implementation of a modified application to replace an existing one.
• Implementation of a computer system to replace a manual system.

The implemented system has the following features:
• Reduced data redundancy
• Easy to use
• Controlled flow

The tasks involved in the normal implementation process are:

Implementation Planning
The implementation of a system involves people from different departments, and system analysts are confronted with the practical problem of controlling the activities of people outside their own data processing department. Prior to this point in the project, the system analysts have interviewed department staff with the permission of their respective managers. An implementation coordination committee should be responsible for a successful implementation. There should be at least one representative of each department affected by the changes, and other members should be co-opted for discussion of specific topics.


Implementation and Training
Successful implementation depends on the right people being at the right place at the right time, so it requires selecting and training staff for the parts of the system for which they will be responsible. Training must begin before the implementation activities begin. Training sessions must aim to give user staff the specific skills required in their new jobs. Training is most successful when conducted by the supervisor with the system analyst in attendance to sort out any queries, and new methods gain acceptance more quickly this way. Education is complementary to training: it involves creating the right atmosphere and motivating user staff. Education sessions should encourage participation from all staff, with protection for individuals from group criticism. Educational information can also make training more interesting and understandable.


SYSTEM MAINTENANCE
Maintenance means restoring something to its original condition. It covers a wide range of activities, including correcting coding and design errors and updating documentation and user support. The better the system design, the easier the system is to maintain. Maintenance is performed most often to improve the existing software rather than to respond to a crisis or system failure. As user needs and the operational environment change, maintenance should be done in parallel; otherwise the system could fail. Provision must be made for environmental changes, which may affect either the computer or other parts of the computer-based system; such activity is normally called maintenance. It includes both the improvement of system functions and the correction of faults that arise during the operation of the system. Maintenance activity may require the continuing involvement of a large proportion of computer department resources. Most changes arise in two ways:


• As part of the normal running of the system, when errors are found, users ask for improvements, or external requirements change.
• As the result of specific investigation and review of the system's performance.

Maintenance was done after the successful implementation and is continued until the product is re-engineered or deployed to another platform. Maintenance is also done in response to reported problems, changes in the interface with other software or hardware, and enhancements to the software. Any system developed should be secured and protected against possible hazards. Security measures are provided to prevent unauthorized access to the database at various levels. An uninterrupted power supply should be provided so that power failures or voltage fluctuations will not erase the data in the files. Password protection and simple procedures to retrieve forgotten passwords are provided to the users, and unauthorized access is restricted: the software allows the user to enter the system only through a login utility with a valid login name and password.
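The password protection described above is commonly implemented by storing salted password hashes rather than the passwords themselves, so the login utility can verify a password without ever keeping it in plain text. A sketch using Python's standard library follows; the iteration count and salt size are illustrative choices, not the project's actual values.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Return (salt, digest) for storage; a fresh random salt per user."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

# At registration the salt and digest are stored; at login they are checked.
salt, digest = hash_password("s3cret")
ok = verify_password("s3cret", salt, digest)    # correct password accepted
bad = verify_password("wrong", salt, digest)    # wrong password rejected
```

The constant-time comparison avoids leaking information about the stored digest through timing differences.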


CONCLUSION

The software 'IMAGE COMPRESSION' is developed with Microsoft C#.NET as the front end and Microsoft SQL Server 2005 as the back end, on the Windows operating system. This Windows-based software is developed for compressing various images. Weaving through the system developed, a brief outline can be given as follows:
• Comprehending the problem.
• Studying the existing system.
• Building up the course of action to reach the goal.
• Designing the solution.
• Visualizing the solution.
• Preparing the screen outputs.
• Testing the system with test data.
• Achieving the required results.
• Documenting the software developed.

During the design phase of the system, many difficulties were encountered. Checking different tables and listing out the errors created many problems, and more errors were spotted during system testing. This user-friendly software successfully passed the strict validation checks performed with the test data. The results obtained were fully satisfactory from the user's point of view. Thus, the project titled 'IMAGE COMPRESSION' was successfully completed and showed reasonably good performance.

The key features of this project are:
• Resource requirement is low.
• User-friendly.
• Ease of handling and implementation.


SCREENSHOTS
LOGIN PAGE


REGISTRATION FORM

IMAGE PROCESSING

IMAGE INFORMATION


IMAGE RESIZE

REFERENCE

Websites
1. www.msdn.microsoft.com
2. www.csharpcorner.com
3. www.csharpcenter.com

Bibliography
1. Professional C#, by Simon Robinson and Christian Nagel
2. C# Programming Bible, by Jeff Ferguson and Meeta Gupta
3. Microsoft Solution Developer Network, by Microsoft Corporation
4. Software Engineering, by Roger S. Pressman, Tata McGraw Hill Publications
5. System Analysis and Design, by Elias M. Awad, Galgotia Publications

