UNIT-1
UNIX
UNIX is a multitasking, multi-user computer operating system originally developed in 1969
by a group of AT&T employees at Bell Labs.
Unix operating systems are widely used in servers, workstations, and mobile devices. Unix
was designed to be portable, multi-tasking and multi-user in a time-sharing configuration.
Unix systems are characterized by various concepts:
 the use of plain text for storing data;
 a hierarchical file system;
 treating devices and certain types of inter-process communication (IPC) as files;
 the use of a large number of software tools, small programs that can be strung
together through a command line interpreter using pipes.
These concepts are collectively known as the Unix philosophy.
Unix vs Windows: Two Major Classes of Operating Systems
On the server front, Unix has been closing in on Microsoft’s market share.
On the client front, Microsoft is currently dominating the operating system market with over
90% market share.
Advantages of Unix over Windows
 Flexibility - Unix is more flexible and can be installed on many different types of
machines, including mainframe computers, supercomputers and microcomputers.
 Stability - Unix is more stable and does not go down as often as Windows does, and
therefore requires less administration and maintenance.
 Security - Unix has greater built-in security and permissions features than Windows.
 Processing power - Unix possesses much greater processing power than Windows.
 Inexpensive open-source operating systems - The mostly free or inexpensive
open-source operating systems, such as Linux, with their flexibility and control, are
very attractive to (aspiring) computer wizards.
 Software design - Unix also inspires novel approaches to software design, such as
solving problems by interconnecting simpler tools instead of creating large
monolithic application programs.

Main Features of UNIX
 multi-user - more than one user can use the machine at a time, supported via
terminals (serial or network connection)
 multi-tasking - more than one program can be run at a time
 hierarchical directory structure - to support the organisation and maintenance of files
 portability - only the kernel (<10%) is written in assembler; tools for program
development and a wide range of support tools (debuggers, compilers) are provided
 programming facility - the Unix shell is a programming language; this feature is used
to design shell scripts

LINUX
1.1 Linux - The Operating System

Linux is a Unix-like computer operating system (OS) that uses the Linux kernel. Linux
started out as a personal computer system used by individuals and is now used mostly as a
server operating system. Linux is a prime example of open-source development, which means
that the source code is available freely for anyone to use.
Linus Torvalds, who was then a student at the University of Helsinki in Finland, developed
Linux in 1991. He released it for free on the Internet. Due to the far reach of the Free
Software Foundation (FSF) and the GNU Project, Linux's popularity increased rapidly, with
utilities developed and released for free online.
It provides the basic computer services needed for someone to do things with a computer. It is
the middle layer between the computer hardware and the software applications you run.

The GNU software

About GNU
GNU stands for 'GNU's Not Unix.' It was a project conceived by Richard Stallman in 1983 in
response to the increasing tendency of software companies to copyright their software under
terms that prohibited sharing. GNU's purpose is to develop a wholly free system.
The kernel combined with GNU's free software is properly called "GNU/Linux."
About GPL
Both the kernel and the software are freely available under licencing that is sometimes called
"copyleft" (as opposed to copyright). Where traditional copyright was meant to restrict usage
and ownership of a copyrighted item to as few people as possible, inhibiting development and
growth, GNU/Linux is different. It is released under terms designed to ensure that as many
people as possible are allowed to receive, use, share, and modify the software. That licence is
called the GPL (GNU General Public License).
What is the Kernel?
The kernel is a program that constitutes the central core of a computer operating system. It
has complete control over everything that occurs in the system.
The kernel is the first part of the operating system to load into memory during booting (i.e.,
system startup), and it remains there for the entire duration of the computer session because
its services are required continuously. Thus it is important for it to be as small as possible
while still providing all the essential services needed by the other parts of the operating
system and by the various application programs.
Because of its critical nature, the kernel code is usually loaded into a protected area of
memory, which prevents it from being overwritten. The kernel performs its tasks, such as
executing processes and handling interrupts, in kernel space, whereas everything a user
normally does happens in user space.
Crash of the Kernel
When a computer crashes, it actually means the kernel has crashed. If only a single program
has crashed but the rest of the system remains in operation, then the kernel itself has not
crashed. A crash is the situation in which a program, either a user application or a part of the
operating system, stops performing its expected function(s) and responding to other parts of
the system. The program might appear to the user to freeze. If such a program is critical to
the operation of the kernel, the entire computer could stall or shut down.
The kernel provides basic services for all other parts of the operating system, typically
including memory management, process management, file management and I/O
(input/output) management (i.e., accessing the peripheral devices). These services are

requested by other parts of the operating system or by application programs through a
specified set of program interfaces referred to as system calls.
Components of Linux Kernel
The major components of the Linux kernel are shown in the figure below.
Figure: One architectural perspective of the Linux kernel

System call interface
The SCI is a thin layer that provides the means to perform function calls from user space into
the kernel.
Process management
Process management is focused on the execution of processes. In the kernel, these are called
threads; in user space, the term process is typically used. The kernel provides an application
program interface (API) through the SCI to create a new process (fork, exec), stop a process
(kill, exit), and communicate and synchronize between processes (signal). Process
management also covers sharing the CPU between the active threads, known as CPU
scheduling. A user-space sketch of this API follows.
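The following is a minimal user-space sketch (not from the original text) of this API: fork()
creates the new process, execlp() replaces its image, and waitpid() synchronizes the parent
with the child. The program being executed ("ls") is just an illustrative choice.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        pid_t pid = fork();                /* create a new process */
        if (pid < 0) {
                perror("fork");
                exit(1);
        }
        if (pid == 0) {                    /* child: replace image */
                execlp("ls", "ls", "-l", (char *)NULL);
                perror("execlp");          /* reached only if exec fails */
                _exit(127);
        }
        int status;
        waitpid(pid, &status, 0);          /* parent: wait for the child */
        printf("child %d exited with status %d\n",
               (int)pid, WEXITSTATUS(status));
        return 0;
}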
Memory management
Another important resource that's managed by the kernel is memory. Linux includes the
means to manage the available memory, as well as the hardware mechanisms for physical and
virtual mappings.
With multiple users of memory, there are times when the available memory can be
exhausted. For this reason, pages can be moved out of memory and onto the disk. This
process is called swapping, because the pages are swapped from memory onto the hard disk.
You can find the memory management sources in ./linux/mm.
Virtual file system
The virtual file system (VFS) is an interesting aspect of the Linux kernel because it provides
a common interface abstraction for file systems. The VFS provides a switching layer between
the SCI and the file systems supported by the kernel.
Network stack
The network stack, by design, follows a layered architecture modeled after the protocols
themselves. You can find the networking sources in the kernel at ./linux/net.

Device drivers
The vast majority of the source code in the Linux kernel exists in device drivers that make a
particular hardware device usable. The Linux source tree provides a drivers subdirectory that
is further divided by the various devices that are supported. You can find the device driver
sources in ./linux/drivers.
Architecture-dependent code
While much of Linux is independent of the architecture on which it runs, there are elements
that must consider the architecture for normal operation and for efficiency. The ./linux/arch
subdirectory defines the architecture-dependent portion of the kernel source contained in a
number of subdirectories that are specific to the architecture (collectively forming the BSP).
For a typical desktop, the i386 directory is used.
Categories of Kernels
Kernels can be classified into four broad categories: monolithic kernels, microkernels, hybrid
kernels and exokernels. Each has its own advocates and detractors.
Microkernel
A microkernel manages the CPU, memory, and IPC (inter-process communication).
Microkernels have the advantage of portability: for example, they do not have to be changed
if you change your video card. Microkernels also have a very small footprint, for both
memory and install space, and they tend to be more secure because most services run in user
mode, which does not have the high permissions of supervisor mode. Example: Minix.
Pros
 Portability
 Small install footprint
 Small memory footprint
 Security
Cons
 Hardware is more abstracted through drivers
 Hardware may react slower because drivers are in user mode
 Processes have to wait in a queue to get information
 Processes can't get access to other processes without waiting

Monolithic Kernel
Monolithic kernels are the opposite of microkernels because they encompass not only the
CPU, memory, and IPC, but they also include things like device drivers, file system
management, and system server calls. Monolithic kernels tend to be better at accessing
hardware and multitasking. Ex: Linux
Pros
 More direct access to hardware for programs
 Easier for processes to communicate with each other
 If your device is supported, it should work with no additional installations
 Processes react faster because there isn't a queue for processor time
Cons
 Large install footprint
 Large memory footprint
 Less secure because everything runs in supervisor mode

Hybrid kernels
Hybrid kernels are similar to microkernels, except that they include additional code in kernel
space so that such code can run more swiftly than it would in user space. These kernels
represent a compromise implemented by some developers. Hybrid kernels should not be
confused with monolithic kernels that can load modules after booting (such as Linux).
Most modern operating systems use hybrid kernels, including Microsoft Windows NT, 2000
and XP.
Exokernels
Exokernels differ from the other types of kernels in that their functionality is limited to the
protection and multiplexing of the raw hardware, and they provide no hardware abstractions
on top of which applications can be constructed. This separation of hardware protection from
hardware management enables application developers to determine how to make the most
efficient use of the available hardware for each specific program.
Exokernels are in themselves extremely small. A major advantage of exokernel-based
systems is that they can incorporate multiple library operating systems, each exporting a
different API (application programming interface), such as one for Linux and one for
Microsoft Windows, thus making it possible to run Linux and Windows applications
simultaneously.

1.2 Linux History
Linux traces its ancestry back to a mainframe operating system known as Multics
(Multiplexed Information and Computing Service). Begun in 1965, Multics was one of the
first multi-user computer systems and remains in use today. Two Bell Labs software
engineers, Ken Thompson and Dennis Ritchie, worked on Multics. They implemented a
rudimentary operating system they named Unics, as a pun on Multics. Somehow, the spelling
of the name became Unix.
Their operating system was novel in its portability. In order to create a portable operating
system, Ritchie and Thompson first created a programming language, called C. Writing Unix
in C made it possible to adapt Unix to run on different computers with relative ease.
As word of their work spread and interest grew, Ritchie and Thompson made copies of Unix
freely available to programmers around the world. These programmers revised and improved
Unix, sending word of their changes back to Ritchie and Thompson, who incorporated the
best such changes in their version of Unix.

Linux is the first truly free Unix-like operating system. The underlying GNU Project was
launched in 1983 by Richard Stallman, originally to develop a Unix-compatible operating
system called GNU, intended to be entirely free software. Many programs and utilities were
contributed by developers around the world, and by 1991 most of the components of the
system were ready. Still missing was the kernel.
Linus Torvalds invented Linux itself. In 1991, Torvalds was a student at the University of
Helsinki in Finland where he had been using Minix, a non-free Unix-like system, and began
writing his own kernel. He started by developing device drivers and hard-drive access, and by
September had a basic design that he called Version 0.01. This kernel, which is called Linux,
was afterwards combined with the GNU system to produce a complete free operating system.
On October 5th, 1991, Torvalds sent a posting to the comp.os.minix newsgroup announcing
the release of Version 0.02, a basic version that still needed Minix to operate, but which
attracted considerable interest nevertheless. The kernel was then rapidly improved by
Torvalds and a growing number of volunteers communicating over the Internet, and by
December 19th a functional, stand-alone Unix-like Linux system was released as Version
0.11.
On January 5, 1992, Linux Version 0.12 was released, an improved, stable kernel. The next
release was called Version 0.95, to reflect the fact that it was becoming a full-featured system.
After that Linux became an underground phenomenon, with a growing group of distributed
programmers that continue to debug, develop, and enhance the source code baseline to this
day.
Torvalds released Version 0.11 under a freeware license of his own devising, but then
released Version 0.12 under the well established GNU General Public License. More and
more free software was created for Linux over the next several years.
Linux continued to be improved through the 1990's, and started to be used in large-scale
applications like web hosting, networking, and database serving, proving ready for production
use. Version 2.2, a major update to the Linux kernel, was officially released in January
1999. By the year 2000, most computer companies supported Linux in one way or another,
recognizing a common standard that could finally reunify the fractured world of the Unix
Wars. The next major release was V2.4 in January 2001, providing (among other
improvements) compatibility with the upcoming generations of Intel's 64-bit Itanium
computer processors.
Although Torvalds continued to function as the Linux kernel release manager, he avoided
work at any of the many companies involved with Linux in order to avoid showing favoritism
to any particular organization. Instead, he went to work for a company called Transmeta and
helped develop mobile computing solutions, and made his home at the Open Source
Development Labs (OSDL), which merged into The Linux Foundation.

1.3 Linux Features
Following are the key features of the Linux operating system:
 Multitasking: several programs running at the same time.
 Multiuser: several users on the same machine at the same time (and no two-user
licenses!).
 Multiplatform: runs on many different CPUs, not just Intel.
 Multiprocessor/multithreading: it has native kernel support for multiple
independent threads of control within a single process memory space.
 Protected memory: memory protection between processes, so that one program
can't bring the whole system down.
 Demand-loaded executables: Linux only reads from disk those parts of a program
that are actually used.
 Shared copy-on-write pages among executables: multiple processes can use the
same memory to run in. When one tries to write to that memory, that page (a 4 KB
piece of memory) is copied somewhere else. Copy-on-write has two benefits:
increasing speed and decreasing memory use.
 Virtual memory using paging (not swapping whole processes) to disk: to a
separate partition or a file in the file system, or both, with the possibility of adding
more swapping areas during runtime. A unified memory pool for user programs and
disk cache, so that all free memory can be used for caching, and the cache can be
reduced when running large programs.
 All source code is available, including the whole kernel and all drivers, the
development tools and all user programs; also, all of it is freely distributable. Plenty
of commercial programs are provided for Linux without source, but everything that
has been free, including the entire base operating system, is still free.
 Multiple virtual consoles: several independent login sessions through the console
are allowed; you switch by pressing a hot-key combination. These are dynamically
allocated; you can use up to 64.
 Supports several common file systems, including Minix, Xenix, and all the
common System V file systems, and has an advanced file system of its own, which
offers file systems of up to 4 TB, and names up to 255 characters long.

1.4 Linux Distributions
Some of the popular Linux distributions are:
 Caldera OpenLinux - a Linux distribution that was created by the former Caldera
Systems corporation. It was the early "business-oriented" distribution.
 Debian GNU/Linux - Debian is a free operating system (OS) that uses the Linux
kernel and provides more services than a pure OS: it comes with many packages,
precompiled software bundled up in a nice format for easy installation on your
machine.
 Mandrake - a user-friendly distribution originally based on Red Hat Linux, aimed at
making Linux easy to install and use on desktop machines.
 RedHat - Red Hat Enterprise Linux is an enterprise platform well-suited for a broad
range of applications across the IT infrastructure. The latest release, Red Hat
Enterprise Linux 6, represents a new standard for Red Hat by offering greater
flexibility, efficiency, and control. Corporations and agencies that standardize on Red
Hat Enterprise Linux are free to focus on building their businesses, knowing they
have a platform that delivers more of what they need.
 Slackware - a free and open source Linux-based operating system. It was one of the
earliest operating systems to be built on top of the Linux kernel. Slackware aims for
design stability and simplicity, and to be the most "Unix-like" Linux distribution,
using plain text files for configuration and making as few modifications as possible
to software packages.
 SuSE - SUSE is the original provider of the enterprise Linux distribution and the
most interoperable platform for mission-critical computing. It is the only Linux
recommended by Microsoft, and it is supported on more hardware and software than
any other enterprise Linux distribution.
 TurboLinux - the Turbolinux distribution was created as a rebranded Red Hat
distribution.

Many networking protocols: the base protocols available in the latest development kernels
include TCP, IPv4, IPv6, AX.25, X.25, IPX, DDP (AppleTalk), NetRom, and others. Stable
network protocols included in the stable kernels currently include TCP, IPv4, IPX, DDP, and
AX.25.

1.5 Overview of Linux Architecture
Linux is a hardware-independent architecture derived from UNIX. It is divided into three
levels: the user, kernel and hardware levels.
 The hardware level contains device drivers and machine-specific components.
 The kernel level is a mix of machine-dependent and machine-independent software.
 The user level is a collection of applications, like shells, editors and utilities.
These levels may be thought of like the wheel of an automobile.
 The hardware and device drivers present a stable set of definitions to support many
types of kernels.
 The kernel interfaces with the hardware and device drivers and presents a stable set
of interfaces to support standard UNIX application programs.
 The application programs enable the users to accomplish meaningful work with the
hardware, like getting from place to place.

Figure 2. The fundamental architecture of the GNU/Linux operating system
At the top is the user, or application, space. This is where the user applications are executed.
Below the user space is the kernel space. Here, the Linux kernel exists.
There is also the GNU C Library (glibc). This provides the system call interface that connects
to the kernel and provides the mechanism to transition between the user-space application and
the kernel. This is important because the kernel and user applications occupy different
protected address spaces. And while each user-space process occupies its own virtual address
space, the kernel occupies a single address space. A small sketch of both entry paths follows.
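As a small illustration, the same kernel service can be requested through the glibc wrapper or
through the raw system call interface. This is a sketch only; syscall() and SYS_getpid are
standard on Linux, and both paths below end up in the same kernel routine.

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

int main(void)
{
        pid_t via_glibc = getpid();              /* glibc wrapper */
        long via_sci = syscall(SYS_getpid);      /* direct system call entry */
        printf("getpid() = %d, syscall(SYS_getpid) = %ld\n",
               (int)via_glibc, via_sci);
        return 0;
}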
The Linux kernel can be further divided into three gross levels:
 the system call interface, which implements basic functions such as read and write;
 the kernel code, which can be more accurately defined as the architecture-independent
kernel code. This code is common to all of the processor architectures supported by
Linux;
 the architecture-dependent code, which forms what is more commonly called a BSP
(Board Support Package). This code serves as the processor- and platform-specific
code for the given architecture.

1.6 System Processes

Processes

A process is a program that is running. It is an address space with one or more threads
executing within that address space, and the required system resources for those threads.
Each instance of a running program constitutes a process.
A process is a dynamic entity, constantly changing as the machine code instructions are
executed by the processor. In short, a process is an executing program encompassing all of
the current activity in the microprocessor. Linux is a multiprocessing operating system.
Each process is a separate task with its own rights and responsibilities. If one process crashes
it will not cause another process in the system to crash.
Each individual process runs in its own virtual address space and is not capable of interacting
with another process except through secure, kernel-managed mechanisms.
Needs of a process
During the lifetime of a process it will use many system resources. It will use the CPUs in the
system to run its instructions and the system's physical memory to hold it and its data. It will
open and use files within the filesystems and may directly or indirectly use the physical
devices in the system. Linux must keep track of the process and its system resources to fairly
manage it and the other processes in the system.
Linux Processes
So that Linux can manage the processes in the system, each process is represented by a
task_struct data structure. The task vector is an array of pointers to every task_struct data
structure in the system.
This means that the maximum number of processes in the system is limited by the size of the
task vector; by default it has 512 entries. As processes are created, a new task_struct is
allocated from system memory and added into the task vector. To make it easy to find, the
current, running, process is pointed to by the current pointer.
Linux also supports real time processes.
Process State
As a process executes it changes state according to its circumstances. Linux processes
have the following states:
Running
The process is either running (it is the current process in the system) or it is ready to
run (it is waiting to be assigned to one of the system's CPUs).
Waiting
The process is waiting for an event or for a resource. Linux differentiates between two
types of waiting process; interruptible and uninterruptible.
Interruptible waiting processes can be interrupted by signals whereas uninterruptible
waiting processes are waiting directly on hardware conditions and cannot be
interrupted under any circumstances.
Stopped
The process has been stopped, usually by receiving a signal. A process that is being
debugged can be in a stopped state.
Zombie
This is a halted process which, for some reason, still has a task_struct data structure in
the task vector. It is what it sounds like, a dead process.
The Life Cycle of Processes

The state a process is in changes many times during its "life." These changes can occur, for
example, when the process makes a system call. A commonly used model shows processes
operating in one of six separate states, which you can find in sched.h:
1. executing in user mode
2. executing in kernel mode
3. ready to run
4. sleeping
5. newly created, not ready to run, and not sleeping
6. issued exit system call (zombie)

Life cycle of the process
 A newly created process enters the system in state 5. If the process is simply a copy
of the original process (a fork but no exec), it then begins to run in the state that the
original process was in (1 or 2).
 When a process is running, an interrupt may be generated (more often than not, this
is the system clock) and the currently running process is pre-empted (3). It is still
ready to run and in main memory.
 When the process makes a system call while in user mode (1), it moves into state 2,
where it begins to run in kernel mode.
 Assume at this point that the system call made was to read a file on the hard disk.
Because the read is not carried out immediately, the process goes to sleep, waiting on
the event that the system has read the disk and the data is ready. It is now in state 4.
 When the data is ready, the process is awakened. This does not mean it runs
immediately, but rather it is once again ready to run in main memory (3).
 If a process that was asleep is awakened (perhaps when the data is ready), it moves
from state 4 (sleeping) to state 3 (ready to run). It then resumes in either user mode
(1) or kernel mode (2).

Ending the life cycle
A process can end its life by either explicitly calling the exit() system call or having it called
for it. The exit() system call releases all the data structures that the process was using. One
exception is the slot in the process table, which holds the exit code of the exiting process.
This can be used by the parent process to determine whether the process did what it was
supposed to do or whether it ran into problems. The process shows that it has terminated by
putting itself into state 6, and it becomes a "zombie." Once here, it can never run again
because nothing exists other than the entry in the process table.
This is why you cannot "kill" a zombie process. The only thing to do is to let the system clean
it up.
If the exiting process has any children, they are "inherited" by init. When a process is
inherited by init, the value of its PPID (parent process ID) is changed to 1 (the PID of init).
A short sketch of these mechanics follows.
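The sketch below (illustrative, not from the original text) shows these mechanics: the child
exits and remains a zombie (state 6) until the parent collects its exit code from the process
table with waitpid().

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
        pid_t pid = fork();
        if (pid == 0)
                _exit(42);             /* child terminates -> state 6 (zombie) */

        sleep(2);                      /* child now shows up as <defunct> in ps */

        int status;
        waitpid(pid, &status, 0);      /* parent reaps; process-table slot freed */
        if (WIFEXITED(status))
                printf("child's exit code was %d\n", WEXITSTATUS(status));
        return 0;
}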
Nice process
When a process puts itself to sleep while waiting for an event to occur that is for example an
interrupt from the keyboard, it is voluntarily giving up the CPU.
Because the process is being so nice to let others have a turn, the kernel will be nice to the
process by allowing it to set the priority at which it will run when it wakes.
The INIT process
When the system starts up it is running in kernel mode and there is, in a sense, only one
process, the initial process.
An idle process - At the end of system initialization, the initial process starts up a kernel
thread (called init) and then sits in an idle loop doing nothing. Whenever there is nothing else
to do, the scheduler will run this idle process. The idle process's task_struct is the only one
that is not dynamically allocated; it is statically defined at kernel build time and is, rather
confusingly, called init_task.
The init kernel thread or process has a process identifier of 1, as it is the system's first real
process. It does some initial setting up of the system and then executes the system
initialization program.
The init program uses /etc/inittab as a script file to create new processes within the system.
These new processes may themselves go on to create new processes.
Creating a new process

New processes are created by cloning old processes, or rather by cloning the current process.
A new task is created by a system call (fork or clone) and the cloning happens within the
kernel in kernel mode. At the end of the system call there is a new process waiting to run
once the scheduler chooses it. A new task_struct data structure is allocated from the system's
physical memory with one or more physical pages for the cloned process's stacks (user and
kernel).
When cloning processes, Linux allows the two processes to share resources rather than have
two separate copies. This applies to the process's files, signal handlers and virtual memory.
Cloning a process's virtual memory
A new set of vm_area_struct data structures must be generated together with their owning
mm_struct data structure and the cloned process's page tables. None of the process's virtual
memory is copied at this point.
Instead, Linux uses a technique called "copy-on-write", which means that virtual memory
will only be copied when one of the two processes tries to write to it. A user-space
demonstration of this behaviour follows.
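A small user-space sketch of what copy-on-write means in practice: after fork(), parent and
child initially share pages, and the child's write forces a private copy, so the parent's value is
unaffected. (The variable and values are illustrative.)

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int counter = 100;                     /* shared read-only after fork() */

int main(void)
{
        if (fork() == 0) {
                counter = 999;         /* write triggers a private page copy */
                printf("child : counter = %d\n", counter);   /* 999 */
                _exit(0);
        }
        wait(NULL);
        printf("parent: counter = %d\n", counter);           /* still 100 */
        return 0;
}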

1.7 Ext2 and Ext3 File Systems
LINUX supports a large number of file systems with the help of a unified interface to the
LINUX kernel called the Virtual File System (VFS).
The Virtual File System supplies the applications with the system calls for file management
to maintain internal structures and passes tasks on to the appropriate actual file system.
Another important job of the VFS is performing standard actions.

Figure: The layers in the file system
Basic principles
 A file system refers to the purposeful structuring of data, increasing the speed of
access to data and providing a facility for random access.

 Random access is made possible by block-oriented devices, which are divided into a
specific number of equal-sized blocks.
 When using these, LINUX also has at its disposal the buffer cache . Using the
functions of the buffer cache, it is possible to access any of the sequentially numbered
blocks in a given device.
 In LINUX, the information required for file management is kept strictly apart from
the data and collected in a separate inode structure for each file.
 Figure shows the arrangement of a typical LINUX inode. The information contained
includes access times, access rights and the allocation of data to blocks on the
physical media.
 The inode already contains a few block numbers to ensure efficient access to small
files. Access to larger files is provided via indirect blocks, which also contain block
numbers.
 Every file is represented by just one inode, which means that, within a file system,
each inode has a unique number and the file itself can also be accessed using this
number (see the stat() sketch after this list).
 Directories allow the file system to be given a hierarchical structure. These are also
implemented as files, but the kernel assumes them to contain pairs consisting of a
filename and its inode number.
 Each file system starts with a boot block reserved for the code required to boot the
operating system.
 All the information which is essential for managing the file system is held in the
superblock, which is followed by a number of inode blocks containing the inode
structures for the file system.

Figure: Structure of a LINUX inode
 The remaining blocks for the device provide the space for the data. These data blocks
thus contain ordinary files along with the directory entries and the indirect blocks.
 This arrangement is built up by the action of mounting the file system, which adds
another file system (of whatever type) to an existing directory tree.

 A new file system can be mounted onto any directory. This original directory is then
known as the mount point and is covered up by the root directory of the new file
system along with its subdirectories and files.
 Unmounting the file system releases the hidden directory structure again.
 Another important aspect of a file system is data security.
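The per-file inode information described above (inode number, access rights, size and
timestamps) can be inspected from user space with stat(2). This is a minimal sketch; the file
name is just an example.

#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

int main(void)
{
        struct stat st;
        if (stat("/etc/passwd", &st) == -1) {     /* any existing file will do */
                perror("stat");
                return 1;
        }
        printf("inode number : %lu\n", (unsigned long)st.st_ino);
        printf("access rights: %o\n", st.st_mode & 0777);
        printf("size in bytes: %ld\n", (long)st.st_size);
        printf("last access  : %s", ctime(&st.st_atime));
        return 0;
}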
The representation of file systems in the kernel
 The actual representation of data in LINUX's memory sticks closely to the logical
structure of a UNIX file system.
 The VFS calls the file-system-specific functions for the various implementations to
fill up the structures.
 These functions are provided to the VFS via the function register_filesystem().
#ifdef CONFIG_MINIX_FS
register_filesystem(&(struct file_system_type)
{minix_read_super, "minix", 1, NULL});
#endif
 In this way, the VFS is given the name of the file system ('minix').
 The function passed, read_super(), forms the mount interface: further functions of the
file system implementation will be made known to the VFS via this function.
 The function enters the file_system_type structure it has been passed in a singly
linked list whose beginning is pointed to by file_systems.
Mounting
 Before a file can be accessed, the file system containing the file must be mounted.
This can be done using either the system call mount or the function mount_root().
 The mount_root() function takes care of mounting the first file system (the root file
system).
 It is called by the system call setup after all the file system implementations
permanently included in the kernel have been registered.
 The setup call itself is called just once, immediately after the init process is created
by the kernel function init() (file init/main.c).
 This system call is necessary because access to kernel structures is not allowed from
user mode (which is the status of the init process).
 Every mounted file system is represented by a super_block structure.
 These structures are held in the static table super_blocks[] and limited in number to
NR_SUPER.
 The superblock is initialized by the function read_super() in the Virtual File System.
 This file-system-specific function will have been made known on registering the
implementation with the VFS.
 When called, it will contain:
• a superblock structure in which the elements s_dev and s_flags are filled in
accordance with the table below,
• a character string (in this case void *) containing further mount options for the
file system, and
• a silent flag indicating whether unsuccessful mounting should be reported.
This flag is used only by the kernel function mount_root(), as this calls all the
read_super() functions present in the various file system implementations.
 The file-system-specific function read_super() reads its data if necessary from the
appropriate block device using the LINUX cache functions.
 The file-system-independent mount flags in the superblock:

Macro            Value   Remarks
MS_RDONLY        1       File system is read only
MS_NOSUID        2       Ignores S bits
MS_NODEV         4       Inhibits access to device files
MS_NOEXEC        8       Inhibits execution of programs
MS_SYNCHRONOUS   16      Immediate write to disk
MS_REMOUNT       32      Changes flags

 The superblock contains information on the entire file system, such as block size,
access rights and time of the last change.
 In addition, the structure holds special information on the relevant file systems.
 For file system modules mounted later, there is a pointer generic_sdp.
 The components s_lock and s_wait ensure that access to the superblock is
synchronized.
 This uses the functions lock_super() and unlock_super(), which are defined in the
file <linux/locks.h>.
A user-space sketch of mounting with these flags follows.
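The sketch below shows how the mount interface and the flags from the table above appear
from user space. The device, mount point and file system type are placeholders, and the
program must be run as root.

#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
        /* mount /dev/sdb1 read-only on /mnt as an ext2 file system */
        if (mount("/dev/sdb1", "/mnt", "ext2", MS_RDONLY, NULL) == -1) {
                perror("mount");
                return 1;
        }
        printf("mounted read-only; MS_REMOUNT would change the flags later\n");
        return 0;
}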
Superblock operations
The superblock structure provides, in the function vector s_op, functions for accessing the
file system, and these form the basis for further work on the file system.
struct super_operations {
        void (*read_inode) (struct inode *);
        int (*notify_change) (struct inode *, struct iattr *);
        void (*write_inode) (struct inode *);
        void (*put_inode) (struct inode *);
        void (*put_super) (struct super_block *);
        void (*write_super) (struct super_block *);
        void (*statfs) (struct super_block *, struct statfs *);
        int (*remount_fs) (struct super_block *, int *, char *);
};
 The functions in the super_operations structure serve to read and write an individual
inode, to write the superblock and to read file system information.
 If a superblock operation is not implemented - that is, if the pointer to the operation
is NULL - no further action will take place.


• write_super(sb)
 The write_super(sb) function is used to save the information of the superblock.
 The function will cause the cache to write back the buffer for the superblock; this is
ensured by setting the buffer's b_dirt flag.
 The function is used in synchronizing the device and is ignored by read-only file
systems.

• put_super(sb)
 The Virtual File System calls this function when unmounting file systems, when it
should also release the superblock and other information buffers and/or restore the
consistency of the file system.
 In addition, the s_dev entry in the superblock structure must be set to 0 to ensure
that the superblock is once again available after unmounting.

• statfs(sb, statfsbuf)
 The two system calls statfs and fstatfs call this superblock operation, which fills in
the statfs structure.
 This structure provides information on the file system, such as the number of free
blocks and the preferred block size.
 The structure is located in the user address space. (A user-space sketch using
statfs() follows this list.)
• remount_fs(sb, flags, options)
 The remount_fs() function changes the status of a file system.
 This involves entering the new attributes for the file system in the superblock and
restoring the consistency of the file system.
• read_inode(inode)
 This function is responsible for filling in the inode structure it has been passed, in a
similar way to read_super().
 It is called by the function _iget(), which will already have given the entries i_dev,
i_ino, i_sb and i_flags their contents.
 The main purpose of the read_inode() function is to mark the different file types by
entering inode operations in the inode according to the file type.
• notify_change(inode, attr)
 The changes made to the inode via system calls are acknowledged by
notify_change().
 All inode changes are carried out on the local inode structure only, which means that
the computer exporting the file system needs to be informed.
• write_inode(inode)
This function saves the inode structure, analogous to write_super().
• put_inode(inode)
 This function is called by iput() if the inode is no longer required.
 Its main task is to delete the file physically and release its blocks.
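As promised above, a short user-space sketch of the statfs path: the statfs(2) system call ends
up in the mounted file system's statfs superblock operation, which fills in block size and
free-block counts. The mount point "/" is just an example.

#include <stdio.h>
#include <sys/vfs.h>

int main(void)
{
        struct statfs buf;
        if (statfs("/", &buf) == -1) {
                perror("statfs");
                return 1;
        }
        printf("block size  : %ld\n", (long)buf.f_bsize);
        printf("free blocks : %ld\n", (long)buf.f_bfree);
        printf("total blocks: %ld\n", (long)buf.f_blocks);
        return 0;
}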
The inode structure
 When a file system is mounted, the superblock is generated and the root inode for the
file system is entered in the component i_mount at the appropriate mount point, that
is, in its inode structure.
 The definition of the inode structure is as follows:
struct inode {
        dev_t           i_dev;          /* file device number */
        unsigned long   i_ino;          /* inode number */
        umode_t         i_mode;         /* file type and access rights */
        nlink_t         i_nlink;        /* number of hard links */
        uid_t           i_uid;          /* owner */
        gid_t           i_gid;          /* owner's group */
        dev_t           i_rdev;         /* device, if device file */
        off_t           i_size;         /* size */
        time_t          i_atime;        /* time of last access */
        time_t          i_mtime;        /* time of last modification */
        time_t          i_ctime;        /* time of creation */
        unsigned long   i_blksize;      /* block size */
        unsigned long   i_blocks;       /* number of blocks */
        unsigned long   i_version;      /* DCache version management */
        struct semaphore i_sem;         /* access control */
        struct inode_operations *i_op;  /* inode operations */
        struct super_block *i_sb;       /* superblock */
        struct wait_queue *i_wait;      /* wait queue */
        struct file_lock *i_flock;      /* file locks */
        struct vm_area_struct *i_mmap;  /* memory areas */
        struct inode *i_next, *i_prev;  /* inode linking */
        struct inode *i_hash_next, *i_hash_prev;
        struct inode *i_bound_to, *i_bound_by;
        struct inode *i_mount;          /* mounted inode */
        struct socket *i_socket;        /* socket management */
        unsigned short  i_count;        /* reference counter */
        unsigned short  i_wcount;       /* number authorized to write */
        unsigned short  i_flags;        /* flags (= i_sb->s_flags) */
        unsigned char   i_lock;         /* lock */
        unsigned char   i_dirt;         /* inode has been modified */
        unsigned char   i_pipe;         /* inode represents pipe */
        unsigned char   i_sock;         /* inode represents socket */
        unsigned char   i_seek;         /* not used */
        unsigned char   i_update;       /* inode is current */
        union {
                struct pipe_inode_info pipe_i;
                struct minix_inode_info minix_i;
                ...
                void *generic_ip;
        } u;                            /* file-system-specific information */
};
 In the first section, this holds information on the file.
 The remainder contains management information and the file-system-dependent union
u.
 In memory, the inodes are managed in two ways. First, they are managed in a doubly
linked circular list starting with first_inode, which is accessed via the entries i_next
and i_prev.
 This approach is not particularly efficient, as the complete list of inodes also includes
the 'free', unused inodes, for which the components i_count, i_dirt and i_lock should
all be zero.
 The unused inodes are generated via the grow_inodes() function, which is called
every time that less than a quarter of all the inodes are free but not more than
NR_INODE are in existence.
 The number of unused inodes and the count of all available inodes are held in the
static variables nr_free and nr_inode respectively.
 For fast access, inodes are also stored in an open hash table hash_table[], where
collisions are dealt with via a doubly linked list using the components i_hash_next
and i_hash_prev.
 Access to any of the NR_IHASH entries is made through the device and inode
numbers.
 The functions for working with inodes are iget(), namei() and iput().
 The iget() function supplies the inode specified by the superblock and the inode
number.

 If the required inode is included in the hash table, the i_count reference counter is
simply incremented.
 If it is not found, a 'free' inode is selected (get_empty_inode()) and the
implementation of the relevant file system calls the superblock operation read_inode()
to fill it with information.
 The resulting inode is then added to the hash table.
 An inode obtained using iget() has to be released using the function iput().
 This decrements the reference counter by 1 and marks the inode structure as 'free' if
the counter reaches 0.
 The _namei() function supplies the inode for the directory that contains the file with
the name specified.
 All functions return an error code smaller than 0 if they are not successful.
The Inode operations
 The inode structure also has its own operations, which are held in the
inode_operations structure and mainly provide for file management.
 These functions are usually called directly from the implementations of the
appropriate system calls.
struct inode_operations {
        struct file_operations *default_file_ops;
        int (*create) (struct inode *, const char *, int, int, struct inode **);
        int (*lookup) (struct inode *, const char *, int, struct inode **);
        int (*link) (struct inode *, struct inode *, const char *, int);
        int (*unlink) (struct inode *, const char *, int);
        int (*symlink) (struct inode *, const char *, int, const char *);
        int (*mkdir) (struct inode *, const char *, int, int);
        int (*rmdir) (struct inode *, const char *, int);
        int (*mknod) (struct inode *, const char *, int, int, int);
        int (*rename) (struct inode *, const char *, int, struct inode *, const char *, int);
        int (*readlink) (struct inode *, char *, int);
        int (*follow_link) (struct inode *, struct inode *, int, int, struct inode **);
        int (*bmap) (struct inode *, int);
        void (*truncate) (struct inode *);
        int (*permission) (struct inode *, int);
        int (*smap) (struct inode *, int);
};
• create(dir, name, len, mode, res_inode)
 This function is called from within the VFS function open_namei().
 It performs a number of tasks.
• First, it extracts a free inode from the complete list of inodes with the aid of the
get_empty_inode() function. The inode structure now needs to be filled with
file-system-specific data, for which, for example, a free inode on the media is
sought out.
• After this, create() enters the filename name of length len in the directory
specified by the inode dir.
• lookup(dir, name, len, res_inode)
 This function is supplied with a filename and its length and returns the inode for the
file in the argument res_inode.
 This is carried out by scanning the directory specified by the inode dir.
 The lookup() function must be defined for directories.
 The calling VFS function lookup() performs a special procedure for the name '..'.
 If the process is already in its root directory, the root inode is returned.
 However, if the root inode for a mounted file system is overstepped by '..', the VFS
function uses the 'hidden' inode to call the inode operation.
• link(oldinode, dir, name, len)
 This function sets up a hard link.
 The file oldinode will be linked under the stated name and the associated length in
the directory specified by the inode dir.
 Before link() is called, a check is made that the inodes dir and oldinode are on the
same device and that the current process is authorized to write to dir.
• unlink(dir, name, len)
 This function deletes the specified file in the directory specified by the inode dir.
 The calling function first confirms that the process possesses the relevant
permissions.
• symlink(dir, name, len, symname)
 This function sets up the symbolic link name in the directory dir, with len giving the
length of the name.
 The symbolic link points to the path symname.
 Before this function is called by the VFS, the access permissions will have been
checked by a call to permission().
• mkdir(dir, name, len, mode)
 This function sets up a subdirectory with the name name and the access rights mode
in the directory dir.
 The mkdir() function first has to check whether further subdirectories are permitted
in the directory, then allocate a free inode on the data media and a free block, to
which the directory is then written together with its default entries '.' and '..'.
 The access rights will already have been checked in the calling VFS function.
• rmdir(dir, name, len)
 This function deletes the subdirectory name from the directory dir.
 The function first checks that the directory to be deleted is empty and whether it is
currently being used by a process; the access rights are checked beforehand by a
VFS function.
• mknod(dir, name, len, mode, rdev)
 This function sets up a new inode in the mode mode.
 This inode will be given the name name in the directory dir.
 If the inode is a device file, the parameter rdev gives the number of the device.
• rename(odir, oname, olen, ndir, nname, nlen)
 This function changes the name of a file.
 This involves removing the old name oname from the odir directory and entering
the new name nname in ndir.
 The calling function checks the relevant access permissions in the directories
beforehand, and a further check is made to ensure that the directories '.' and '..' do
not appear as the source or destination of an operation.
• readlink(inode, buf, size)
 This function reads symbolic links and should copy into the buffer buf in the user
address space the pathname for the file to which the link points.
 If the buffer is too small, the pathname should simply be truncated. If the inode is
not a symbolic link, EINVAL should be returned.
 This function is called directly from sys_readlink() once the write access permission
to the buffer buf has been checked and the inode has been found using lnamei().
• follow_link(dir, inode, flag, mode, res_inode)
 This function is used to resolve symbolic links.
 For the inode assigned to a symbolic link, this function returns in the argument
res_inode the inode to which the link points.
 To avoid endless loops, the maximum number of links to be resolved is set at 5.
 If follow_link() is missing, the calling function of the same name in the VFS simply
returns inode, as if the link were pointing to itself, without testing whether the
current inode describes a file or a symbolic link. A symbolic link can, after all, point
to another symbolic link.
• bmap(inode, block)
 This function is called to enable memory mapping of files.
 In the argument block it is given the number of a logical data block in the file.
 This number must be converted by bmap() into the logical number of the block on
the media. To do this, bmap() searches for the block in the actual implementation of
the specified inode and returns its number. This may in some cases involve reading
other blocks from the media.
 This function is used by generic_mmap() to map a block from the file to an address
in the user address space.
 If it is not available, executable files must first be loaded into memory completely,
as the more efficient demand paging is not then available.
• truncate(inode)
 This function is mainly intended to shorten a file, but can also lengthen a file to any
length if this is supported by the specific implementation.
 The only parameter required by truncate() is the inode of the file to be amended, with
the i_size field set to the new length before the function is called.
 The truncate() function is used at a number of places in the kernel, both by the system
call sys_truncate() and when a file is opened.
 It will also release the blocks no longer required by a file.
 Thus, the truncate() function can be used to delete a file physically if the inode on the
media is cleared afterwards.

• permission(inode, flag)
 This function checks the inode to confirm the access rights to the file given by the
mask flag.
 The possible values for the mask are MAY_READ, MAY_WRITE and MAY_EXEC.
• smap(inode, sector)
 This function is intended to allow swap files to be created.
 This inode operation supplies the logical sector number (not block or cluster) on the
media for the sector of the file specified.
 In the memory management function rw_swap_page(), the smap() function is
required to prepare to work with a swap file.
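Seen from user space, several of these inode operations are reached through ordinary system
calls. The following sketch (link name and target are placeholders) exercises the symlink,
readlink and unlink operations described above.

#include <stdio.h>
#include <unistd.h>

int main(void)
{
        if (symlink("/etc/passwd", "mylink") == -1)   /* symlink operation */
                perror("symlink");

        char target[256];
        ssize_t n = readlink("mylink", target, sizeof(target) - 1);
        if (n == -1) {
                perror("readlink");
                return 1;
        }
        target[n] = '\0';            /* readlink does not NUL-terminate */
        printf("mylink -> %s\n", target);

        unlink("mylink");            /* unlink operation: remove the entry */
        return 0;
}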

In a multi-tasking system the problem often arises that a number of processes wish to access
a file at the same time, both to read and to write. Even a single process may be reading and
writing at different points in the file.
To avoid synchronization problems and allow shared access to files by different processes,
LINUX has simply introduced a structure file, which contains information on a specific file's
access rights f_mode, the current file position f_pos, the type of access f_flags and the
number of accesses f_count.
struct file {
        mode_t          f_mode;         /* access type */
        loff_t          f_pos;          /* file position */
        unsigned short  f_flags;        /* open() flags */
        unsigned short  f_count;        /* reference counter */
        off_t           f_reada;        /* read-ahead flag */
        struct file     *f_next, *f_prev; /* links */
        int             f_owner;        /* PID or -PGRP for SIGIO */
        struct inode    *f_inode;       /* related inode */
        struct file_operations *f_op;   /* file operations */
        unsigned long   f_version;      /* DCache version management */
        void            *private_data;  /* needed for tty driver */
};
The file structures are managed in a doubly linked circular list via the pointers f_next and
f_prev. This file table can be accessed via the pointer first_file.
File operations
The file_operations structure is the general interface for work on files, and contains the
functions to open, close, read and write files. The reason why these functions are not held in
inode_operations but in a separate structure is that they need to make changes to the file
structure.
The inode's inode_operations structure also includes the component default_file_ops, in
which the standard file operations are already specified.
struct file_operations {
        int (*lseek) (struct inode *, struct file *, off_t, int);
        int (*read) (struct inode *, struct file *, char *, int);
        int (*write) (struct inode *, struct file *, char *, int);
        int (*readdir) (struct inode *, struct file *, struct dirent *, int);
        int (*select) (struct inode *, struct file *, int, select_table *);
        int (*ioctl) (struct inode *, struct file *, unsigned int, unsigned long);
        int (*mmap) (struct inode *, struct file *, struct vm_area_struct *);
        int (*open) (struct inode *, struct file *);
        void (*release) (struct inode *, struct file *);
        int (*fsync) (struct inode *, struct file *);
        int (*fasync) (struct inode *, struct file *, int);
        int (*check_media_change) (dev_t);
        int (*revalidate) (dev_t);
};
These functions are also useful for sockets and device drivers, as they contain the actual
functions for sockets and devices. The inode operations, on the other hand, only use the
representation of the socket or device in the related file system or its copy in memory.
• lseek(inode, filp, offset, origin)
The job of the lseek function is to deal with positioning within the file. If this function is not
implemented, the default action simply adjusts the file position f_pos in the file structure if
the positioning is to be carried out from the start or from the current position. If the file is
represented by an inode, the default function can also position from the end of the file. If the
function is missing, the file position in the file structure is updated by the VFS.

• read(inode, filp, buf, count)
This function copies count bytes from the file into the buffer buf in the user address space.
Before calling the function, the Virtual File System first confirms that the entire buffer is
located in the user address space and can be written to, and also that the file pointer is valid
and the file has been opened for reading. If no read function is implemented, the error
EINVAL is returned.
• write(inode, filp, buf, count)
The write function operates in an analogous manner to read and copies data from the user
address space to the file. A user-space sketch of how these operations are reached follows.
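The sketch below shows how ordinary user-space I/O maps onto these file operations: each
call travels through the VFS to the file system's read, write or lseek implementation and
updates f_pos in the file structure. The file name is illustrative.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
        int fd = open("demo.txt", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd == -1) { perror("open"); return 1; }

        write(fd, "hello, vfs\n", 11);        /* -> f_op->write */
        lseek(fd, 0, SEEK_SET);               /* -> f_op->lseek, resets f_pos */

        char buf[32];
        ssize_t n = read(fd, buf, sizeof(buf));   /* -> f_op->read */
        if (n > 0)
                fwrite(buf, 1, (size_t)n, stdout);

        close(fd);                /* f_count drops; release() when it hits zero */
        return 0;
}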
• readdir(inode, filp, dirent, count)
This function returns the next directory entry in the dirent structure or an error. If this
function is not implemented, the Virtual File System returns the ENOTDIR error.
• select(inode, filp, type, wait)
This function checks whether data can be read from a file or written to one. An additional
test for exception conditions can also be made. This function only serves a useful purpose for
device drivers and sockets. The main task of the function is taken care of by the Virtual File
System; thus, when interrogating files the VFS always returns the value 1 if it is a normal
file, otherwise 0.
• ioctl(inode, filp, cmd, arg)
The ioctl() function sets device-specific parameters. However, before the Virtual File System
calls the ioctl operation, it tests the following default arguments:

FIONCLEX   Clears the close-on-exec bit.
FIOCLEX    Sets the close-on-exec bit.
FIONBIO    If the additional argument arg refers to a value not equal to zero, the
           O_NONBLOCK flag is set; otherwise it is cleared.
FIOASYNC   Sets or clears the O_SYNC flag as for FIONBIO. This flag is not at
           present evaluated.

If cmd is not among these values, a check is performed on whether filp refers to a normal
file. If so, the function file_ioctl() is called and the system call terminates. For other files, the
VFS tests for the presence of an ioctl function. If there is none, the EINVAL error is
returned; otherwise the file-specific ioctl function is called.
• mmap(inode, filp, vm_area)
This function maps part of a file to the user address space of the current process. The
structure vm_area specified describes all the characteristics of the memory area to be
mapped: the components vm_start and vm_end give the start and end addresses of the
memory area to which the file is to be mapped, and vm_offset gives the position in the file
from which mapping is to be carried out. A user-space sketch follows.
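A user-space sketch of this operation: mmap(2) maps a file into the calling process's address
space, after which its contents can be read through ordinary memory accesses. The file name
is an arbitrary example of a small readable file.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/stat.h>

int main(void)
{
        int fd = open("/etc/hostname", O_RDONLY);
        if (fd == -1) { perror("open"); return 1; }

        struct stat st;
        fstat(fd, &st);                    /* file size = length of mapping */

        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (p == MAP_FAILED) { perror("mmap"); return 1; }

        fwrite(p, 1, st.st_size, stdout);  /* file contents, read via memory */

        munmap(p, st.st_size);
        close(fd);
        return 0;
}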
• open(inode, filp)
This function only serves a useful purpose for device drivers, as the standard function in the
Virtual File System will already have taken care of all the necessary actions on regular files,
such as allocating the file structure.
• release(inode, filp)
This function is called when the file structure is released, that is, when its reference counter
f_count is zero. This function is primarily intended for device drivers, and its absence will be
ignored by the Virtual File System. Updating of the inode is also taken care of automatically
by the Virtual File System.
• fsync(inode, flip)
The fsync() function ensures that all buffers for the file have been
updated and written
back to the device, which means that the function is only relevant
for file systems
• fasync(inode, filp, on)
This function is called by the VFS when a process uses the fcntl system call to register or unregister the file for asynchronous messaging via the SIGIO signal. The messaging takes place when data are received and the on flag is set. If on is not set, the process unregisters the file structure from asynchronous messaging.
• check_media_change(dev)
This function is only relevant to block devices supporting changeable
media. It tests
whether there has been a change of media since the last operation
on it. If so, the
function will return a 1, otherwise a zero.
The check_media_change() function is called by the VFS function
check_disk_change(); if a change of media has taken place,
it calls put_super() to remove any superblock belonging to the
device, discards all the buffers belonging to the device dev which
are still in the buffer cache, along with all the inodes on this device,
and then calls revalidate().
As check_disk_change() requires a considerable amount of time, it is only called when mounting a device. Its return values are the same as for check_media_change().
• revalidate(dev)
This function is called by the VFS after a media change has been
recognized, to restore
the consistency of a block device. It should establish and record all
the necessary parameters of the media, such as the number of
blocks, number of tracks and so on.
Ext2 File System
The ext2 or second extended filesystem is a file system for the Linux kernel. The canonical implementation of ext2 is the ext2fs filesystem driver in the Linux kernel.
Because LINUX was initially developed under MINIX, the first LINUX file system was the MINIX file system. However, this file system restricts partitions to a maximum of 64 Mbytes and filenames to no more than 14 characters, so the search for a better file system was not long in starting.
The result was the Ext file system - the first to be designed for LINUX. Although this allowed partitions of up to 2 Gbytes and filenames of up to 255 characters, it had the drawbacks that it was slower than its MINIX counterpart and that its simple implementation of free block administration led to extensive fragmentation of the file system.
A file system which is now little used is the Xia file system. It is also based on the MINIX file system and permits partitions of up to 2 Gbytes in size along with filenames of up to 248 characters; its administration of free blocks in bitmaps and its optimized block allocation functions make it faster and more robust than the Ext file system.
At the same time, Rémy Card and his co-developers presented the Ext2 file system as a further development of the Ext file system. It can be considered by now to be the LINUX file system, as it is used in most LINUX systems and distributions.

The Second Extended File system (EXT2) structure
Motivations
The Second Extended File System has been designed and implemented to fix some problems
present in the first Extended File System. Our goal was to provide a powerful filesystem,
which implements Unix file semantics and offers advanced features.
Of course, we wanted Ext2fs to have excellent performance. We also wanted to provide a very robust filesystem in order to reduce the risk of data loss in intensive use. Last, but not least, Ext2fs had to include provision for extensions to allow users to benefit from new features without reformatting their filesystem.
``Standard'' Ext2fs features
The Ext2fs supports standard Unix file types: regular files, directories, device special files
and symbolic links.
Ext2fs is able to manage filesystems created on really big partitions. While the original kernel code restricted the maximal filesystem size to 2 GB, recent work in the VFS layer has raised this limit to 4 TB. Thus, it is now possible to use big disks without the need of creating many partitions.
Ext2fs provides long file names. It uses variable length directory entries. The maximal file
name size is 255 characters. This limit could be extended to 1012 if needed.
Ext2fs reserves some blocks for the super user (root). Normally, 5% of the blocks are
reserved. This allows the administrator to recover easily from situations where user processes
fill up filesystems.
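As a quick, hedged illustration (the device name /dev/sda1 below is only a placeholder), the reserved percentage can be inspected and changed with tune2fs:

# tune2fs -l /dev/sda1 | grep -i reserved    (show the current reserved block count)
# tune2fs -m 10 /dev/sda1                    (reserve 10% of blocks for root instead of the default 5%)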
``Advanced'' Ext2fs features
In addition to the standard Unix features, Ext2fs supports some extensions which are not
usually present in Unix filesystems.
File attributes allow the users to modify the kernel behavior when acting on a set of files. One can set attributes on a file or on a directory. In the latter case, new files created in the directory inherit these attributes.
BSD or System V Release 4 semantics can be selected at mount time. A mount option allows
the administrator to choose the file creation semantics. On a filesystem mounted with BSD
semantics, files are created with the same group id as their parent directory. System V
semantics are a bit more complex: if a directory has the setgid bit set, new files inherit the
group id of the directory and subdirectories inherit the group id and the setgid bit; in the other
case, files and subdirectories are created with the primary group id of the calling process.
Ext2fs allows the administrator to choose the logical block size when creating the filesystem.
Block sizes can typically be 1024, 2048 and 4096 bytes. Using big block sizes can speed up
I/O since fewer I/O requests, and thus fewer disk head seeks, need to be done to access a file.
On the other hand, big blocks waste more disk space: on the average, the last block allocated
to a file is only half full, so as blocks get bigger, more space is wasted in the last block of
each file. In addition, most of the advantages of larger block sizes are obtained by Ext2
filesystem's preallocation techniques (see section Performance optimizations).
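For example (a minimal sketch; /dev/sdb1 is a placeholder partition), the block size is chosen when the filesystem is created:

# mke2fs -b 4096 /dev/sdb1    (create an ext2 filesystem with 4096-byte logical blocks)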
Ext2fs implements fast symbolic links. A fast symbolic link does not use any data block on
the filesystem. The target name is not stored in a data block but in the inode itself. This policy
can save some disk space (no data block needs to be allocated) and speeds up link operations

(there is no need to read a data block when accessing such a link). Of course, the space
available in the inode is limited so not every link can be implemented as a fast symbolic link.
The maximal size of the target name in a fast symbolic link is 60 characters.
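A small, hedged demonstration on an ext2-family filesystem (the file names are arbitrary): a link with a short target usually occupies no data block, while a long target forces one to be allocated.

$ ln -s notes.txt quick                         (9-character target fits in the inode)
$ ln -s "$(printf 'a%.0s' $(seq 1 80))" slow    (80-character target needs a data block)
$ stat -c '%N: %b blocks' quick slow            (typically reports 0 blocks for quick)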
Ext2fs keeps track of the filesystem state. A special field in the superblock is used by the
kernel code to indicate the status of the file system. When a filesystem is mounted in
read/write mode, its state is set to ``Not Clean''.
When it is unmounted or remounted in read-only mode, its state is reset to ``Clean''. At boot
time, the filesystem checker uses this information to decide if a filesystem must be checked.
The kernel code also records errors in this field. When an inconsistency is detected by the
kernel code, the filesystem is marked as ``Erroneous''.
Always skipping filesystem checks may sometimes be dangerous, so Ext2fs provides two
ways to force checks at regular intervals. A mount counter is maintained in the superblock.
Each time the filesystem is mounted in read/write mode, this counter is incremented. When it
reaches a maximal value (also recorded in the superblock), the filesystem checker forces the
check even if the filesystem is ``Clean''. A last check time and a maximal check interval are
also maintained in the superblock. These two fields allow the administrator to request
periodical checks. When the maximal check interval has been reached, the checker ignores
the filesystem state and forces a filesystem check. Ext2fs offers tools to tune the filesystem
behavior. The tune2fs program can be used to modify:

• the error behavior. When an inconsistency is detected by the kernel code, the filesystem is marked as ``Erroneous'' and one of the three following actions can be done: continue normal execution, remount the filesystem in read-only mode to avoid corrupting the filesystem, or make the kernel panic and reboot to run the filesystem checker.
• the maximal mount count.
• the maximal check interval.
• the number of logical blocks reserved for the super user.
Mount options can also be used to change the kernel error behavior.
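For instance (a hedged sketch; the device name is a placeholder), these parameters map onto tune2fs options:

# tune2fs -c 30 /dev/sda1           (force a check after every 30 mounts)
# tune2fs -i 2w /dev/sda1           (force a check every two weeks)
# tune2fs -e remount-ro /dev/sda1   (on errors, remount the filesystem read-only)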
An attribute allows the users to request secure deletion on files. When such a file is deleted,
random data is written in the disk blocks previously allocated to the file. This prevents
malicious people from gaining access to the previous content of the file by using a disk
editor.
Immutable files can only be read: nobody can write or delete them. This can be used to
protect sensitive configuration files. Append-only files can be opened in write mode but data
is always appended at the end of the file. Like immutable files, they cannot be deleted or
renamed. This is especially useful for log files which can only grow.
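These attributes are set from the shell with chattr and listed with lsattr; a hedged sketch follows (the file names are placeholders, and note that some kernels accept but do not actually honor the secure-deletion attribute):

# chattr +i /etc/critical.conf    (immutable: no writes, deletes or renames)
# chattr +a /var/log/audit.log    (append-only)
# chattr +s secret.dat            (request secure deletion)
$ lsattr /etc/critical.conf /var/log/audit.log secret.dat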
Ext2 Physical File Structure
The physical file structure of Ext2 filesystems is made up of block groups. Block groups are
not tied to the physical layout of the blocks on the disk, since modern drives tend to be
optimized for sequential access and hide their physical geometry to the operating system.
The physical structure of a filesystem is represented in this table:

Boot Sector | Block Group 1 | Block Group 2 | ... | Block Group N
Each block group contains a redundant copy of crucial filesystem control information (the superblock and the filesystem descriptors) and also contains a part of the filesystem (a block bitmap, an inode bitmap, a piece of the inode table, and data blocks). The structure of a block group is represented in this table:

Super Block | FS Descriptors | Block Bitmap | Inode Bitmap | Inode Table | Data Blocks
Using block groups is a big win in terms of reliability: since the control structures are replicated in each block group, it is easy to recover from a filesystem whose superblock has been corrupted. This structure also helps to achieve good performance: by reducing the distance between the inode table and the data blocks, it is possible to reduce the disk head seeks during I/O on files.
Ext2fs, directories
In Ext2fs, directories are managed as linked lists of variable length entries. Each entry
contains the inode number, the entry length, the file name and its length. By using variable
length entries, it is possible to implement long file names without wasting disk space in
directories. The structure of a directory entry is shown in this table:
inode number | entry length | name length | filename

As an example, the next table represents the structure of a directory containing three files: file1, long_file_name, and f2:

i1 | 16 | 05 | file1
i2 | 40 | 14 | long_file_name
i3 | 12 | 02 | f2

Performance optimizations
The Ext2fs kernel code contains many performance optimizations, which tend to improve I/O
speed when reading and writing files.
Ext2fs takes advantage of the buffer cache management by performing readaheads: when a
block has to be read, the kernel code requests the I/O on several contiguous blocks. This way,
it tries to ensure that the next block to read will already be loaded into the buffer cache.
Readaheads are normally performed during sequential reads on files and Ext2fs extends them
to directory reads, either explicit reads (readdir(2) calls) or implicit ones (namei kernel
directory lookup).
Ext2fs also contains many allocation optimizations. Block groups are used to cluster together
related inodes and data: the kernel code always tries to allocate data blocks for a file in the
same group as its inode. This is intended to reduce the disk head seeks made when the kernel
reads an inode and its data blocks.
When writing data to a file, Ext2fs preallocates up to 8 adjacent blocks when allocating a new block. Preallocation hit rates are around 75% even on very full filesystems. This preallocation achieves good write performance under heavy load. It also allows contiguous blocks to be allocated to files, which speeds up future sequential reads.

These two allocation optimizations produce a very good locality of:

• related files, through block groups
• related blocks, through the 8-block clustering of block allocations.

Overview of Ext3, ext4 file systems; Difference between ext2 and ext3 FS;
ext2, ext3 and ext4 are all filesystems created for Linux.
Ext2

• Ext2 stands for second extended file system.
• It was introduced in 1993. Developed by Rémy Card.
• It was developed to overcome the limitations of the original ext file system.
• Ext2 does not have a journaling feature.
• On flash drives and USB drives, ext2 is recommended, as it doesn't need the overhead of journaling.
• Maximum individual file size can be from 16 GB to 2 TB.
• Overall ext2 file system size can be from 2 TB to 32 TB.
Ext3

• Ext3 stands for third extended file system.
• It was introduced in 2001. Developed by Stephen Tweedie.
• Starting from Linux Kernel 2.4.15, ext3 was available.
• The main benefit of ext3 is that it allows journaling.
• Journaling has a dedicated area in the file system where all the changes are tracked. When the system crashes, the possibility of file system corruption is lower because of journaling.
• Maximum individual file size can be from 16 GB to 2 TB.
• Overall ext3 file system size can be from 2 TB to 32 TB.
• There are three types of journaling available in the ext3 file system (a mount example follows below):
  o Journal – Metadata and content are saved in the journal.
  o Ordered – Only metadata is saved in the journal. Metadata are journaled only after writing the content to disk. This is the default.
  o Writeback – Only metadata is saved in the journal. Metadata might be journaled either before or after the content is written to the disk.
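The journaling mode is selected at mount time with the data option; a hedged sketch (the device and mount point are placeholders):

# mount -t ext3 -o data=journal /dev/sda2 /mnt      (journal metadata and content)
# mount -t ext3 -o data=ordered /dev/sda2 /mnt      (the default)
# mount -t ext3 -o data=writeback /dev/sda2 /mnt    (metadata only, relaxed ordering)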
• You can convert an ext2 file system to an ext3 file system directly (without backup/restore), as shown below.
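The conversion simply adds a journal; a minimal sketch, assuming the placeholder partition /dev/sda2 currently holds an ext2 filesystem:

# umount /dev/sda2        (simplest when the filesystem is unmounted)
# tune2fs -j /dev/sda2    (create the journal, making the filesystem ext3)
# mount -t ext3 /dev/sda2 /mnt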
Ext4

• Ext4 stands for fourth extended file system.
• It was introduced in 2008.
• Starting from Linux Kernel 2.6.19, ext4 was available.
• Supports huge individual file sizes and overall file system sizes.
• Maximum individual file size can be from 16 GB to 16 TB.
• Overall maximum ext4 file system size is 1 EB (exabyte). 1 EB = 1024 PB (petabyte). 1 PB = 1024 TB (terabyte).
• A directory can contain a maximum of 64,000 subdirectories (as opposed to 32,000 in ext3).
• You can also mount an existing ext3 file system as ext4 (without having to upgrade it); see the example after this list.
• Several other new features were introduced in ext4: multiblock allocation, delayed allocation, journal checksums, fast fsck, etc. These features improve the performance and reliability of the filesystem compared to ext3.
• In ext4, you also have the option of turning the journaling feature "off".
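A hedged one-liner for the point above about mounting ext3 as ext4 (the device name is a placeholder; features that require on-disk changes simply stay disabled):

# mount -t ext4 /dev/sda3 /mnt    (the ext4 driver mounts the unconverted ext3 filesystem)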
1.8-File Permission:-
Linux permissions dictate three things you may do with a file: read, write and execute. They are referred to in Linux by a single letter each.

• r read - you may view the contents of the file.
• w write - you may change the contents of the file.
• x execute - you may execute or run the file if it is a program or script.
For every file we define three sets of people for whom we may specify permissions:

• owner - a single person who owns the file (typically the person who created the file, but ownership may be granted to someone else by certain users).
• group - every file belongs to a single group.
• others - everyone else who is not in the group or the owner.

Three permissions and three groups of people - that's about all there is to permissions, really. Now let's see how we can view and change them.

View Permissions
To view permissions for a file we use the long listing option for the command ls.

ls -l [path]

$ ls -l /home/ryan/linuxtutorialwork/frog.png
-rwxr----x 1 harry users 2.7K Jan 4 07:32 /home/ryan/linuxtutorialwork/frog.png
In the above example the first 10 characters of the output are what we look at to identify
permissions.

• The first character identifies the file type. If it is a dash ( - ) then it is a normal file. If it is a d then it is a directory.
• The following 3 characters represent the permissions for the owner. A letter represents the presence of a permission and a dash ( - ) represents the absence of a permission. In this example the owner has all permissions (read, write and execute).
• The following 3 characters represent the permissions for the group. In this example the group has the ability to read but not write or execute. Note that the order of permissions is always read, then write, then execute.
• Finally, the last 3 characters represent the permissions for others (or everyone else). In this example they have the execute permission and nothing else.

Change Permissions
To change permissions on a file or directory we use a command called chmod. It stands for "change file mode bits", which is a bit of a mouthful, but think of the mode bits as the permission indicators.

chmod [permissions] [path]

chmod has permission arguments that are made up of 3 components:

• Who are we changing the permission for? [ugoa] - user (or owner), group, others, all
• Are we granting or revoking the permission? - indicated with either a plus ( + ) or minus ( - )
• Which permission are we setting? - read ( r ), write ( w ) or execute ( x )
The following examples will make their usage clearer.
Grant the execute permission to the group. Then remove the write permission for the owner.

$ ls -l frog.png
-rwxr----x 1 harry users 2.7K Jan 4 07:32 frog.png

$ chmod g+x frog.png
$ ls -l frog.png
-rwxr-x--x 1 harry users 2.7K Jan 4 07:32 frog.png

$ chmod u-w frog.png
$ ls -l frog.png
-r-xr-x--x 1 harry users 2.7K Jan 4 07:32 frog.png
Don't want to assign permissions individually? We can assign multiple permissions at once.
$ ls -l frog.png
-rwxr----x 1 harry users 2.7K Jan 4 07:32 frog.png

$ chmod g+wx frog.png
$ ls -l frog.png
-rwxrwx--x 1 harry users 2.7K Jan 4 07:32 frog.png

$ chmod go-x frog.png
$ ls -l frog.png
-rwxrw---- 1 harry users 2.7K Jan 4 07:32 frog.png
It may seem odd that as the owner of a file we can remove our ability to read, write and execute that file, but there are valid reasons we may wish to do this: perhaps we have a file with data in it that we do not wish to change accidentally. While we may remove these permissions, we may not remove our ability to set those permissions, and as such we always have control over every file under our ownership.

1.9-User Management: Types of Users, Powers of Root:-
There are three types of user accounts:
1- Super user (root): uid and gid = 0
2- System users (ftpd, sshd): uid and gid between 1 and 499
3- Regular users (what you create with the useradd command): uid and gid 500 and above
[Note: If you assign a uid below 500, say between 1 and 100, that user's umask will be 022, like root's; normally a user whose uid and gid are 500 or above has a umask of 002.]
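A short, hedged demonstration of how the umask shapes the permissions of newly created files (the output shown is illustrative):

$ umask
0002
$ touch newfile ; ls -l newfile    (mode 666 masked by 002 gives rw-rw-r--)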
Super User:The super user is also known as system administrator. The job of system administrator
involves the management of the entire system-ranging from maintaining user
accounts,security and managing disk space to performing backups.
root: The system administrator's login
The superuser, or root, is a special user account used for system administration. It is given full and complete access to all system resources. The term "root" is also used to describe the directory named "/", as in "the root directory". This account need not be created separately; it comes with every system. Its password is set at the time of installation of Linux, and one logs in with it as follows:
login: root
password:
The prompt of root is #, unlike $ used by non privileged users.
One can become root by either logging in as user "root" or by typing "su" within a normal
user's login session. The root password is required to become root.
Once you log in as root, you are placed in root's home directory. This directory can be / or /root.
Since the super user has to constantly navigate the file system, the PATH for a super user doesn't include the current directory (.).
Linux uses Bash shell for normal and system administrative activities.

SU: ACQUIRING SUPERUSER LOGIN -
HOW TO BECOME SUPERUSER IN LINUX
Under Linux you use a command called su. It is used to become another user during a login session or to log in as the super user. If invoked without a username, su defaults to becoming the super user. It is highly recommended that you use the argument - with the su command; it provides an environment similar to what the user root would expect had they logged in directly. Type the su command as follows:
$ su -
Output:
Password: <TYPE ROOT PASSWORD>
#
Once you have typed the root user password, you become the super (root) user.
Tip: typing "su - " instead of "su" actually changes the login session to that of root; the
session behaves the same as though user root had actually logged in to begin with.
To be placed in root's home directory on super user login, use su -l.
Creating a User's Environment
su, when used with a -, recreates the user's environment without taking the login-password route:
su - henry
This sequence executes henry's profile and temporarily creates henry's environment.
su runs a separate subshell, so this mode is terminated by hitting Ctrl+d or typing exit.
Becoming root for a Complete Login Session
The su command allows a regular user to become the system's root user if they know the root password. A user with sudo rights to use the su command can become root, but they only need to know their own password, not that of root, as seen here:
someuser@u-bigboy:~$ sudo su
Password:
root@u-bigboy:~#
Some systems administrators will use sudo to grant root privileges to their own personal user
account without the need to provide a password.
Powers of Root User:-
The superuser has enormous powers, including the following:

• Change the contents or attributes of any file, such as its permissions and ownership; delete any file even if the directory is write-protected
• Install and configure servers
• Install and configure application software
• Create and maintain user accounts
• Back up and restore files
• Monitor and tune performance
• Configure a secure system
• Use tools to monitor security
• Initiate or kill any process
• Configure I/O devices - a scanner or a TV tuner card, for example
• Configure system services - a web or FTP server

Examples of various functions performed by the super user to configure the system:

• Set the system clock with the date command:
# date -s "11/20/2003 12:48:00"
This sets the date to the date and time shown.

• Address all users concurrently with wall:
# wall
The machine will be shut down today at 14:30 hrs.
Ctrl+d

• Limit the maximum size of files that users are permitted to create with ulimit:
ulimit 20971510
(measured in 512-byte blocks)
This statement can be placed in /etc/profile.

MAINTAINING SECURITY
As system administrator, you have to ensure that the system directories (/bin, /usr/bin, /etc, /sbin and so on) and the files written in them are protected.
Manage file permissions and ownership:

• Manage access permissions on both regular and special files as well as directories
• Maintain security using access modes such as suid, sgid, and the sticky bit
• Change the file creation mask
• Grant file access to group members
Linux groups are a mechanism to manage a collection of computer system users. Every Linux user has a unique numerical user ID (UID) and a numerical group ID (GID). Groups can be assigned to logically tie users together for a common security, privilege and access purpose. This is the foundation of Linux security and access: files and devices may be granted access based on a user's UID or GID.
1.10.Managing Users
Adding User

The easiest way to add a new user is to use the useradd command, like this: useradd bart. We have now created a new user called bart. To assign a password for that user, use the command passwd bart.
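A slightly fuller, hedged sketch (option defaults vary between distributions, so -m and -s are shown explicitly):

# useradd -m -s /bin/bash bart    (-m creates /home/bart, -s sets the login shell)
# passwd bart                     (assign bart's initial password)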
The /etc/passwd file has several entries that are actually users for programs that need to control processes or need "special" access to the filesystem.
/etc/passwd and other informative files
The basic user database in a Linux system is the text file, /etc/passwd (called the password
file), which lists all valid usernames and their associated information. The file has one line
per username, and is divided into seven colon-delimited fields:
• Username.
• Password (previously this was where the user's password was stored).
• Numeric user id.
• Numeric group id.
• Full name or other description of account.
• Home directory.
• Login shell (program to run at login).

Each entry has the following fields:
user:password:UID:GID:comment:home:shell
• user is the username that is used for logging in or by programs. The username is case sensitive on Linux systems and it is recommended to keep special characters out of it.
• password is the field where the encrypted password is stored. The passwd command encrypts passwords and stores them in this field. The default encryption algorithm used is considered rather poor today. It is better to choose shadow passwords; in that case, the field holds only a placeholder (an x) and all the passwords are stored in the /etc/shadow file.
• UID is the user ID. It's a numerical value that is bound to a user. For the root user it is always 0. The UID has to be unique and can range from 0 up to a system-dependent maximum. Usually, for regular users, the UID is greater than 100. All the files in a Linux system have a UID; this UID determines the ownership of files and processes.
• GID is the group ID for the primary group of the user. This is also a numerical value, and for root it also has the value 0. For every user there is at least one GID. This field identifies the primary group to which a user belongs. Note that a user can be assigned to several groups. Both the UID and the GID are very important for filesystem security.
• comment is a field that holds text information about a user. Usually you add here the name of the user, but you can also add the phone number, the e-mail address or whatever you like. Where there are many users to manage, the comment field can really come in handy.
• home defines the home directory of that user. This directory is created automatically by the useradd command. If you want to change it from here, you should keep in mind that it has to exist.
• shell is the shell that will be used by the user. The default should be more than OK most of the time. Accounts created for people will have the bash shell assigned, and accounts created for programs will have no login shell, which is a nice trick for disallowing logins with that user.
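Putting the seven fields together, a typical /etc/passwd entry for a hypothetical user might look like this (the x shows that the real password hash lives in /etc/shadow):

vivek:x:1000:1000:Vivek G:/home/vivek:/bin/bash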

Changing group ownership of files, directories, devices:
chown / chgrp
chown:
This command is used by root (the system superuser) only. As root, the group ownership of a file, directory or device can be changed with the "chown" command:

• Change the group ownership of the file to the group "accounting":
chown :accounting filename
• Command format: chown user:group filename
chgrp:
This command can be used by any system user who is a member of multiple groups. If the user creates a file, the default group association is the group id of the user. If he wishes to change it to another group of which he is a member, he issues the command:
chgrp new-group-id file-name
If the user is not a member of the group then a password is required.

Linux: Delete / Remove User Account
You need to use the userdel command to delete a user account and the files related to it. The userdel command must be run as the root user. The syntax is as follows:

userdel userName

userdel Example
To remove the user vivek account from the local system / server / workstation, enter:
# userdel vivek
To remove the user's home directory pass the -r option to userdel, enter:

# userdel -r vivek
The above command will remove all files along with the home directory itself and the user's
mail spool. Please note that files located in other file systems will have to be searched for and
deleted manually.
Complete Example
The following is the recommended procedure to delete a user from a Linux server. First, lock the user account:
# passwd -l username

Back up files from /home/vivek to /nas/backup:
# tar -zcvf /nas/backup/account/deleted/v/vivek.$uid.$now.tar.gz /home/vivek/
Please replace $uid and $now with the actual UID and date/time. The userdel command will not allow you to remove an account if the user is currently logged in. You must kill any running processes which belong to the account that you are deleting:
# pgrep -u vivek

# ps -fp $(pgrep -u vivek)

# killall -KILL -u vivek

To delete user account called vivek, enter:
# userdel -r vivek

Delete at jobs, enter
# find /var/spool/at/ -name "[^.]*" -type f -user vivek -delete

To remove cron jobs, enter:
# crontab -r -u vivek

To remove print jobs, enter:
# lprm vivek

To find all files owned by user vivek, enter:
# find / -user vivek -print

You can find file owned by a user called vivek and change its ownership as follows:
# find / -user vivek -exec chown newUserName:newGroupName {} \;

Other Topics of Unit-1
Open-source software
(OSS) is computer software that is available in source code form: the source code and certain
other rights normally reserved for copyright holders are provided under a software license
that permits users to study, change, improve and at times also to distribute the software.
Some open source software is available within the public domain.
The development of Linux is one of the most prominent examples of free and open source
software collaboration; typically all the underlying source code can be used, freely modified,
and redistributed, both commercially and non-commercially, by anyone under licenses such
as the GNU General Public License. Typically Linux is packaged in a format known as a
Linux distribution for desktop and server use.
Linux Licensing
The licenses for most software are designed to take away your freedom to share and change
it. By contrast, the GNU General Public License is intended to guarantee your freedom to
share and change free software--to make sure the software is free for all its users.

This General Public License applies to most of the Free Software Foundation's software. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software, that you receive source code or can get it if you want it, and that you can change the software or use pieces of it in new free programs.
Journaling: A journaling file system is a fault-resilient file system in which data integrity is
ensured because updates to directories and bitmaps are constantly written to a serial
log on disk before the original disk log is updated.

• A journaling filesystem is a filesystem that maintains a special file called a journal that is used to repair any inconsistencies that occur as the result of an improper shutdown of a computer.
• Such shutdowns are usually due to an interruption of the power supply or to a software problem that cannot be resolved without rebooting.
• Journaling filesystems write metadata (i.e., data about files and directories) into the journal, which is flushed to the HDD before each command returns.
• In the event of a system crash, a given set of updates may have either been fully committed to the filesystem (i.e., written to the HDD), in which case there is no problem, or the updates will have been marked as not yet fully committed, in which case the system will read the journal, which can be rolled up to the most recent point of data consistency.
• This is far faster than a scan of the entire HDD when rebooting, and it guarantees that the structure of the filesystem is always internally consistent.
• Thus, although some data may be lost, a journaling filesystem typically allows a computer to be rebooted much more quickly after a system crash.
• In the case of non-journaling filesystems, HDD checks during rebooting after a system crash can take many minutes, or even hours in the case of large HDDs with capacities of hundreds of gigabytes.
• Moreover, if an inconsistency in the data is found, it is sometimes necessary for a skilled technician to intervene and answer complicated questions about how to repair certain filesystem problems.
• Such downtime can be very costly in the case of big systems used by large organizations.
• The most commonly used journaling filesystem for Linux is the third extended filesystem (ext3fs), which was added to the kernel from version 2.4.16 (released in November 2001).
• It is basically an extension of ext2fs to which a journaling capability has been added, and it provides the same high degree of reliability because of the exhaustively field-proven nature of its underlying ext2.
• Also featured is the ability for ext2 partitions to be converted to ext3 and vice versa without any need for backing up the data and repartitioning.
• If necessary, an ext3 partition can even be mounted by an older kernel that has no ext3 support; this is because it would be seen as just another normal ext2 partition and the journal would be ignored.

The Linux boot process

1. When a system is first booted, or is reset, the processor executes code at a well-known
location. In a personal computer (PC), this location is in the basic input/output system
(BIOS), which is stored in flash memory on the motherboard.
2. When a boot device is found, the first-stage boot loader is loaded into RAM and
executed. This boot loader is less than 512 bytes in length (a single sector), and its job
is to load the second-stage boot loader.
3. When the second-stage boot loader is in RAM and executing, a splash screen is
commonly displayed, and Linux and an optional initial RAM disk (temporary root file
system) are loaded into memory.
4. When the images are loaded, the second-stage boot loader passes control to the kernel
image and the kernel is decompressed and initialized.
5. At this stage, the second-stage boot loader checks the system hardware, enumerates
the attached hardware devices, mounts the root device, and then loads the necessary
kernel modules.
6. When complete, the first user-space program (init) starts, and high-level system
initialization is performed.
That's Linux boot in a nutshell. Now let's explore some of the details of the Linux boot
process.
System startup
The system startup stage depends on the hardware that Linux is being booted on. A bootstrap
environment is used when the system is powered on, or reset. In addition to having the ability
to store and boot a Linux image, these boot monitors perform some level of system test and
hardware initialization. In an embedded target, these boot monitors commonly cover both the
first- and second-stage boot loaders.

Commonly, Linux is booted from a hard disk, where the Master Boot Record (MBR) contains
the primary boot loader. The MBR is a 512-byte sector, located in the first sector on the disk.
After the MBR is loaded into RAM, the BIOS yields control to it.
Stage 1 boot loader
The primary boot loader that resides in the MBR is a 512-byte image containing both program code and a small partition table (see the figure below). The first 446 bytes are the primary boot loader, which contains both executable code and error message text.
The next sixty-four bytes are the partition table, which contains a record for each of four
partitions (sixteen bytes each). The MBR ends with two bytes that are defined as the magic
number (0xAA55). The magic number serves as a validation check of the MBR.
The job of the primary boot loader is to find and load the secondary boot loader (stage 2) by
looking through the partition table for an active partition.
When it finds an active partition, it scans the remaining partitions in the table to ensure that
they're all inactive. When this is verified, the active partition's boot record is read from the
device into RAM and executed.
[Figure: Anatomy of the MBR]

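As a hedged illustration (replace /dev/sda with the actual disk; reading it requires root, and the xxd tool is assumed to be available), the MBR can be copied out and examined:

# dd if=/dev/sda of=mbr.bin bs=512 count=1    (copy the 512-byte MBR to a file)
# xxd mbr.bin | tail -1                       (the sector ends with the bytes 55 aa)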
Stage 2 boot loader
The secondary, or second-stage, boot loader could be more aptly called the kernel loader. The
task at this stage is to load the Linux kernel and optional initial RAM disk.
The first- and second-stage boot loaders combined are called Linux Loader
(LILO) or GRand Unified Bootloader (GRUB) in the x86 PC environment.
What is LILO?
LILO is the Linux Loader, the most popular boot loader for Linux. It is used to load Linux
into memory and start the operating system.
Normally LILO is initially configured for you during the Linux installation process.
Linux Startup Process using LILO
Note: If using GRUB, a similar process also occurs.
1. LILO will read the file "/etc/lilo.conf". LILO will give the user a choice of booting
from any label included in this file. The Linux kernel is installed compressed, so it
will first uncompress itself.
2. After this, the kernel checks what other hardware there is and configures some of its device drivers appropriately.
3. Then the kernel will try to mount the root file system (read-only). The place is configurable at compilation time, or at any time with rdev or LILO. The file system type is detected automatically.
4. After this the kernel starts the program "init" (located in /sbin/init) in the background (process number 1). init will start the services set up in the system.
5. The init process reads the file "/etc/inittab" and uses this file to determine how to create processes. init is always running and can dynamically do things. The administrator can also cause it to dynamically change system processes and run levels by using the telinit program or by editing the "/etc/inittab" file.
LILO and GRUB: Boot Loaders Made Simple
LILO (Linux Loader) and GRUB (GRand Unified Bootloader) can both be configured as a primary boot loader (installed on the MBR) or a secondary boot loader (installed onto a bootable partition). Both allow users (root users) to boot into single-user mode.
LILO
LILO comes as standard on all distributions of Linux. To work with LILO an administrator
edits the file /etc/lilo.conf to set a default partition to boot, where LILO should be installed,
and other information.
MBR Vs. Root Partition
By default, LILO reads its configuration file, /etc/lilo.conf. The configuration file tells LILO where it should place its boot loader. In general, you can either specify the master boot record (MBR) on the first physical disk (/dev/hda) or the root partition of your Linux installation (/dev/hda1 or /dev/hda2).

The first stage of loading is complete when LILO has printed, in order, each of the letters L-I-L-O. When you see the LILO prompt, you are in the second stage. When LILO loads itself it displays the word "LILO"; each letter is printed before or after some specific action, so if LILO fails at some point, the letters printed so far can be used to identify the problem.
(nothing)
No part of LILO has been loaded. LILO either isn't installed or the partition on which
its boot sector is located isn't active. The boot media is incorrect or faulty.
L
The first stage boot loader has been loaded and started, but it can't load the second
stage boot loader. The two-digit error codes indicate the type of problem. This
condition usually indicates a media failure or bad disk parameters in the BIOS.
LI
The first stage boot loader was able to load the second stage boot loader, but has
failed to execute it. This can be caused by bad disk parameters in the BIOS.
LIL
The second stage boot loader has been started, but it can't load the descriptor table
from the map file. This is typically caused by a media failure or by bad disk
parameters in the BIOS.
LIL?
The second stage boot loader has been loaded at an incorrect address. This is typically
caused by bad disk parameters in the BIOS.
LIL-
The descriptor table is corrupt. This can be caused by bad disk parameters in the BIOS.
LILO
All parts of LILO have been successfully loaded.
LILO with WIN XP
If you have Windows XP installed to the MBR on your hard drive, install LILO to the root partition instead of the MBR. If you want to boot up Linux, you must mark the LILO partition as bootable. If you are starting with LILO, you can begin editing the configuration file.
After you install LILO on your system, you can make it take over your MBR. As the root user, type:
# /sbin/lilo -v -v
LILO Configuration File
Given below is a sample /etc/lilo.conf file.
The /etc/lilo.conf File
The sample lilo.conf file shown below is for a typical dual-boot configuration, with Windows
installed on the first partition and Linux on the second. You can probably use this as-is,
except for the image= line and possibly the root= line, depending on where Linux was
installed. Detailed explanation follows.
boot=/dev/hda
map=/boot/map
install=/boot/boot.b
compact
prompt
timeout=50
image=/boot/vmlinuz-2.0.36
    label=linux
    root=/dev/hda2
    read-only
other=/dev/hda1
    label=win
boot=/dev/hda:
Tells LILO where to install the bootloader. In this case, it is going into the master boot
record of the first hard drive, which means LILO will control the boot process of all
operating systems from the start. It could also have been /dev/hda2, the boot sector of
the Linux partition. In that case, the DOS bootloader would need to be in the master
boot record, and booting Linux would require setting the Linux partition active using
fdisk.
map=/boot/map:
The map file is automatically generated by LILO and is used internally. Don't mess
with it.
install=/boot/boot.b:
Tells LILO what to use as the new boot sector. This file contains the "bootstrap" code
that starts your operating system.
compact:
Makes LILO read the hard drive faster.
prompt:
Tells LILO to prompt us at boot time to choose an operating system or enter
parameters for the Linux kernel.
timeout=50:
Tells LILO how long to wait at the prompt before booting the default operating
system, measured in tenths of a second. The configuration shown waits for 5 seconds.
image=/boot/vmlinuz-2.0.36:
The name of a Linux kernel for LILO to boot. The first image listed in the file is the
default, unless you specify otherwise.
label=linux:
The name that is used to identify this image at the LILO: boot prompt. Typing this
name will select this image.
root=/dev/hda2:
Tells LILO where the root (/) file system is (where Linux lives), so that the Linux
kernel can mount it at boot time.
read-only:
Tells LILO to instruct the Linux kernel to initially mount the root file system as read-only. It will be remounted as read-write later in the boot process. This is the normal method of booting.
other=/dev/hda1:
other tells LILO to boot an operating system other than Linux. It is given the value of the
partition where this other operating system lives.
label=win:
Same as the label above, gives you a way to refer to this section.
GRUB
GRUB combines installation into a single install command and allows for MD5 encryption of passwords. When the configuration file is configured incorrectly, the system reverts to the GRUB command-line prompt.
MBR Vs. Root Partition

If you have Windows XP installed to the MBR on your hard drive, install GRUB to the root partition instead of the MBR. After you install GRUB, you can make it take over your MBR. Do this at the prompt as the root user:
# /boot/grub/grub
Now, you can use the GRUB command
grub> install (hd1,2)/boot/grub/stage1 (hd1) (hd1,2)/boot/grub/stage2 p
(hd1,2)/boot/grub/menu.conf
Let's take a look at the installation of the first stage in the install command:
install (hd1,2)/boot/grub/stage1 (hd1)
This says that GRUB takes the first-stage image stored on the third partition of the second disk (the Linux partition) and installs it to the MBR of that same disk.
In the second part of the command, the stage two image is installed:
(hd1,2)/boot/grub/stage2
Finally, the installation is complete with the optional location of the configuration file:
p (hd1,2)/boot/grub/menu.conf
GRUB Configuration File
Given below is a sample /boot/grub/grub.conf file.
default=0
timeout=10
splashimage=(hd1,2)/grub/splash.xpm.gz
password --md5 [encrypted password]

title Linux
    password --md5 [encrypted password]
    root (hd1,2)
    kernel /vmlinuz-2.6.23-13 ro root=LABEL=/
    initrd /initrd-2.6.23-13.img

title Windows XP
    password --md5 [encrypted password]
    rootnoverify (hd0,0)
    chainloader +1
• The default= option tells GRUB which image to boot by default after the timeout period.
• The splashimage option specifies the location of the image used as the background for the GRUB GUI.
• The password option specifies the MD5 password required to gain access to GRUB's interactive boot options. To generate an MD5 password, run the tool grub-md5-crypt as root and copy the result into the password --md5 line of your grub.conf. You can create separate passwords for each entry in the file (a usage sketch follows this list).
• The initrd option specifies the file that will be loaded at boot time as the initial RAM disk.
• The rootnoverify option tells GRUB to set the root partition of that OS without trying to mount (verify) it.
• The chainloader +1 entry tells GRUB to use a chain loader to load Windows on the first partition of the first disk. It uses the blocklist notation to grab the first sector of the current partition with '+1'.
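A minimal sketch of generating that hash (the tool prompts for the password twice and prints a hash of the form $1$...$..., which is then pasted into grub.conf):

# grub-md5-crypt
Password:
Retype password:
$1$...$...    (copy this line after "password --md5" in grub.conf)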

Which Is Better? GRUB or LILO
LILO is older and less powerful. Originally LILO did not include a GUI menu choice (but
did provide a text user interface). To work with LILO an administrator has many tasks to
perform in addition to editing the configuration files. Because LILO has some disadvantages
that were corrected in GRUB, let's look into GRUB
1. Ease in locating the configuration file - GRUB is a bit easier to administer because the GRUB loader is smart enough to locate the /boot/grub/grub.conf file when booting. An administrator only needs to install GRUB once, using the "grub-install" utility. Any changes made to grub.conf will be automatically used when the system is next booted. In contrast, any changes made to lilo.conf are not read at boot time; the MBR needs to be "refreshed" by re-running /sbin/lilo.
2. Interactive command interface - Unlike GRUB, LILO has no interactive command interface and does not support booting from a network. If the LILO MBR is configured incorrectly, the system becomes unbootable. If the GRUB configuration file is configured incorrectly, GRUB defaults to its command-line interface without the risk of making the system unbootable.
3. Security - Both LILO and GRUB allow users (the root users) to boot into single-user mode. Both have a password protection feature, with a difference: while GRUB allows for MD5-encrypted passwords, LILO manages only plain-text passwords, which anyone can read from the lilo.conf file with the command cat /etc/lilo.conf.
For the novice, start with LILO and then migrate to GRUB.
