
Hardware
The hardware is the physical part of the computer itself, including the Central Processing Unit (CPU) and related microchips and micro-circuitry, keyboards, monitors, case and drives (hard, CD, DVD, floppy, optical, tape, etc.). Other extra parts, called peripheral components or devices, include the mouse, printers, modems, scanners, digital cameras and cards (sound, colour, video), etc. Together they are often referred to as a personal computer.

Central Processing Unit - Though the term relates to a specific chip or the processor, a CPU's performance is determined by the rest of the computer's circuitry and chips. Currently the Pentium chip or processor, made by Intel, is the most common CPU, though there are many other companies that produce processors for personal computers. Examples are the CPUs made by Motorola and AMD.

With faster processors the clock speed becomes more important. Compared to some of the first computers, which operated at below 30 megahertz (MHz), the Pentium chips began at 75 MHz in the mid-1990s. Speeds now exceed 3,000 MHz, or 3 gigahertz (GHz), and different chip manufacturers use different measuring standards (check your local computer store for the latest speeds). Whether you are able to upgrade to a faster chip depends on the motherboard, the circuit board that the chip is housed in. The motherboard contains the circuitry and connections that allow the various components to communicate with each other. Though there were many computers using many different processors before it, I call the 80286 processor the advent of home computers, as these were the processors that made computers available for the average person. Using a processor before the 286 involved learning a proprietary system and software. Most new software is being developed for the newest and fastest processors, so it can be difficult to use an older computer system.

Keyboard - The keyboard is used to type information into the computer, or input information. There are many different keyboard layouts and sizes, with the most common for Latin-based languages being the QWERTY layout (named for the first 6 keys). The standard keyboard has 101 keys. Notebooks have embedded keys accessible by special keys or by pressing key combinations (CTRL or Command and P, for example). Ergonomically designed keyboards are made to make typing easier. Hand held devices have various and different keyboard configurations and touch screens. Some of the keys have a special use; these are referred to as command keys. The 3 most common are the Control (CTRL), Alternate (Alt) and Shift keys, though there can be more (the Windows key or the Command key, for example). Each key on a standard keyboard has one or two characters. Press the key to get the lower character and hold Shift to get the upper.

Removable Storage and/or Disk Drives - All disks need a drive to get information off the disk - or read it - and to put information on the disk - or write it. Each drive is designed for a specific type of disk, whether it is a CD, DVD, hard disk or floppy. Often the terms 'disk' and 'drive' are used to describe the same thing, but it helps to understand that the disk is the storage device which contains computer files - or software - and the drive is the mechanism that runs the disk. Digital flash drives work slightly differently: they use memory cards to store information, so there are no moving parts. Digital cameras also use flash memory cards to store information, in this case photographs. Hand held devices use digital drives and many also use memory cards.

Mouse - Most modern computers today are run using a mouse-controlled pointer. Generally, if the mouse has two buttons, the left one is used to select objects and text and the right one is used to access menus. If the mouse has one button (a Mac for instance) it controls all the activity, and a mouse with a third button can be used by specific software programs. One type of mouse has a round ball under the bottom that rolls and turns two wheels which control the direction of the pointer on the screen. Another type of mouse uses an optical system to track the movement of the mouse. Laptop computers use touch pads, buttons and other devices to control the pointer. Hand helds use a combination of devices to control the pointer, including touch screens.

Note: It is important to clean the mouse periodically, particularly if it becomes sluggish. A ball-type mouse has a small circular panel that can be opened, allowing you to remove the ball. Lint can be removed carefully with a toothpick or tweezers and the ball can be washed with mild detergent. A build-up will accumulate on the small wheels in the mouse; use a small instrument or fingernail to scrape it off, taking care not to scratch the wheels. Track balls can be cleaned much like a mouse, and a touch-pad can be wiped with a clean, damp cloth. An optical mouse can accumulate material from the surface that it is in contact with, which can be removed with a fingernail or small instrument.

Monitors - The monitor shows information on the screen when you type. This is called outputting information. When the computer needs more information it will display a message on the screen, usually through a dialog box. Monitors come in many types and sizes. The resolution of the monitor determines the sharpness of the screen, and the resolution can be adjusted to control the screen's display. Most desktop computers use a monitor with a cathode ray tube or liquid crystal display, while most notebooks use a liquid crystal display. To get the full benefit of today's software, with full colour graphics and animation, computers need a colour monitor with a display or graphics card.

Printers - The printer takes the information on your screen and transfers it to paper, or a hard copy. There are many different types of printers with various levels of quality. The three basic types of printer are dot matrix, inkjet, and laser.

• Dot matrix printers work like a typewriter, transferring ink from a ribbon to paper with a series, or 'matrix', of tiny pins.
• Ink jet printers work like dot matrix printers but fire a stream of ink from a cartridge directly onto the paper.
• Laser printers use the same technology as a photocopier, using heat to transfer toner onto paper.

Modem - A modem is used to translate information transferred through telephone lines, cable or line-of-sight wireless. The term stands for modulate/demodulate: the modem changes the signal from digital, which computers use, to analog, which telephone lines use, and then back again. Digital modems transfer digital information directly without changing it to analog. Modems are measured by the speed at which information is transferred, traditionally quoted as the baud rate. Originally modems worked at speeds below 2400 baud, but today analog speeds of 56,000 are standard. Cable, wireless or digital subscriber lines can transfer information much faster, with rates of 300,000 and up. Modems also use error correction, which corrects for transmission errors by constantly checking whether the information was received properly or not, and compression, which allows for faster data transfer rates. Information is transferred in packets; each packet is checked for errors and is re-sent if there is an error. Anyone who has used the Internet has noticed that at times the information travels at different speeds. Depending on the amount of information that is being transferred, the information will arrive at its destination at different times. The amount of information that can travel through a line is limited. This limit is called bandwidth. There are many more variables involved in communication technology using computers, much of which is covered in the section on the Internet.

Scanners - Scanners allow you to transfer pictures and photographs to your computer. A scanner 'scans' the image from top to bottom, one line at a time, and transfers it to the computer as a series of bits, or a bitmap. You can then take that image and use it in a paint program, send it out as a fax or print it. With optional Optical Character Recognition (OCR) software you can convert printed documents such as newspaper articles to text that can be used in your word processor. Most scanners use TWAIN software that makes the scanner accessible to other software applications.
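To get a feel for what those transfer rates mean in practice, here is a rough worked example in Python. It treats the figures above as bits per second and ignores packet overhead and error correction, and the 5 Mb file size is just an illustrative assumption:

    # Rough, idealized transfer-time estimate (ignores packet overhead and errors).
    def transfer_seconds(file_size_bytes, bits_per_second):
        return (file_size_bytes * 8) / bits_per_second

    file_size = 5 * 1024 * 1024          # an example 5 megabyte file
    connections = {
        "56,000 (analog modem)": 56000,
        "300,000 (cable/DSL)":   300000,
    }

    for name, speed in connections.items():
        minutes = transfer_seconds(file_size, speed) / 60
        print(f"{name}: about {minutes:.1f} minutes")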

Digital cameras allow you to take digital photographs. The images are stored on a memory chip or disk that can be transferred to your computer. Some cameras can also capture sound and video.

Case - The case houses the microchips and circuitry that run the computer. Desktop models usually sit under the monitor and tower models beside it. Cases come in many sizes, including desktop, mini, midi, and full tower. There is usually room inside to expand or add components at a later time. By removing the cover of the case you may find plate-covered, empty slots that allow you to add cards. There are various types of slots and connectors, including IDE, ISA, USB, PCI and FireWire. Depending on the type, notebook computers may have room to expand. Most notebooks also have connections or ports that allow expansion or connection to exterior, peripheral devices such as monitors, portable hard drives or other devices.

Cards - Cards are components added to computers to increase their capability. When adding a peripheral device make sure that your computer has a slot of the type needed by the device. Sound cards allow computers to produce sound, like music and voice. The older sound cards were 8 bit, then 16 bit, then 32 bit. Though the human ear can't distinguish the fine difference between sounds produced by the more powerful sound cards, they allow for more complex music and music production. Colour cards allow computers to produce colour (with a colour monitor of course). The first colour cards were 2 bit, which produced 4 colours [CGA]. It was amazing what could be done with those 4 colours. Next came 4 bit, allowing for 16 colours [EGA and VGA]. Then came 16 bit, allowing for 65,536 colours, and then 24 bit, which allows for almost 17 million colours; today's 32 bit and higher cards, paired with deep-colour displays, can show over a billion separate colours. Video cards allow computers to display video and animation. Some video cards allow computers to display television as well as capture frames from video. A video card with a digital video camera allows computer users to produce live video. A high speed connection is required for effective video transmission. Network cards allow computers to connect together to communicate with each other. Network cards have connections for cable, thin wire or wireless networks. For more information see the section on Networks. Cables connect internal components to the motherboard, which is a board with a series of electronic pathways and connections allowing the CPU to communicate with the other components of the computer.

Memory - Memory can be very confusing but is usually one of the easiest pieces of hardware to add to your computer. It is common to confuse chip memory with disk storage. An example of the difference between memory and storage would be the difference between a table where the
actual work is done (memory) and a filing cabinet where the finished product is stored (disk). To add a bit more confusion, the computer's hard disk can be used as temporary memory when a program needs more than the chips can provide. Random Access Memory, or RAM, is the memory that the computer uses to temporarily store information as it is being processed. The more information being processed, the more RAM the computer needs. One of the first home computers, the Commodore 64, used 64 kilobytes of RAM. Today's modern computers need a minimum of 64 Mb (recommended 128 Mb or more) to run Windows or OS 10 with modern software. RAM memory chips come in many different sizes and speeds and can usually be expanded. Older computers came with 512 Kb of memory which could be expanded to a maximum of 640 Kb. In most modern computers the memory can be expanded by adding or replacing the memory chips, depending on the processor you have and the type of memory your computer uses. Memory chips range in size from 1 Mb to 4 Gb. As computer technology changes, the type of memory changes as well, making old memory chips obsolete. Check your computer manual to find out what kind of memory your computer uses before purchasing new memory chips.

Software
The software is the information that the computer uses to get the job done. Software needs to be accessed before it can be used. There are many terms used for the process of accessing software, including running, executing, starting up, opening, and others. Computer programs allow users to complete tasks. A program can also be referred to as an application, and the two words are used interchangeably. Examples of software programs or applications would be the operating system (DOS, Windows, UNIX, MacOS and various others), word processor (typing letters), spreadsheet (financial info), database (inventory control and address book), graphics program, Internet browser, email and many others. As well, any document that you create, graphic you design, sound you compose, file you make, letter you write or email you send - anything that you create on your computer - is referred to as software. All software is stored in files. Software is stored on a disk, card, tape or one of the dozens of other storage devices available. There are millions of different pieces of software available for almost every conceivable need. Software is available commercially through stores and mail order and is also available on the Internet. Software is also available through an Open Source license, which allows anyone to use the Open Source software free of charge as long as the license is
maintained. If you can't find the application that you need, software development companies can custom-design software for you. The largest software companies offer packages of software, or suites, that include many of the programs that the average person or business needs. Software packages or suites contain programs that work together and share information, making it easier to combine that information in versatile ways. For example, when writing a letter you can get the mailing address from an address book, include a letterhead from a graphics program, include a financial chart from a spreadsheet and combine this collection of information in the body of the letter. The three basic types of software are commercial, shareware and open source software. Some software is also released into the public domain without a license. Commercial software comes prepackaged and is available from software stores and through the Internet. Shareware is software developed by individuals and small companies that cannot afford to market their software worldwide, or by a company that wants to release a demonstration version of their commercial product. You will have an evaluation period in which you can decide whether to purchase the product or not. Shareware software is often disabled in some way and has a notice attached to explain the legal requirements for using the product. Open Source software is created by generous programmers and released for public use. There is usually a copyright notice that must remain with the software product. Open Source software is not public domain in that the company or individual that develops the software retains ownership of the program, but the software can be used freely. Many popular Open Source applications are being developed and upgraded regularly by individuals and companies that believe in the Open Source concept.
Operating Systems

All computers need some sort of Operating System (OS). The majority of modern home computers use some form of Microsoft's operating systems. The original Microsoft operating system was called DOS (Disk Operating System), though most computers now use Windows. Windows comes in various versions, beginning with version 3.x, then 95, 98, ME, XP, Vista and currently version 7. A few computers use IBM's OS/2. Apple's Macs use their own operating system, beginning with OS 1 through to OS 10.x. In the past, large companies and institutions would have an operating system designed exclusively for them, but as commercial operating systems become more sophisticated the benefits of this practice are becoming less apparent. Some computer professionals, Internet Service Providers (ISPs) and mainframe computer users use an operating system such as UNIX (or a variant such as Linux), Windows NT or 2000 (Win2k) or one of the other network or server-based operating systems. There are many smaller operating systems out there. The problem is that software is currently being developed only for the main operating systems, and only for the newest versions of these OSes. Many older computers with unique operating systems have lots of software already developed
for them, but there is very little new software being developed for the older computers. Older operating systems are also less likely to have technical support available than more modern operating systems. The operating system controls the input and output, or directs the flow of information to and from the CPU. Much of this is done automatically by the system, but it is possible to modify and control your system if you need to. When you turn your computer on it first needs to load the operating system, sometimes referred to as booting up. Basically the computer starts from scratch every time you turn the power on. It checks all its components and will usually display a message if there is a problem. Loading the system is usually automatic. Once the system is loaded the user can start the application or program that they are going to use. Most computer users will run Microsoft Windows, Mac OS or Linux as their operating system. These OSes provide a Graphical User Interface (GUI), which allows the user to control or run the computer using a mouse and icons. The user simply moves the mouse on a flat surface, rolls the trackball, or moves their hand over the touchpad to control a pointer, and then chooses the option they want by pressing a button or touching the pad. Without a GUI the user controls the computer using the keys on the keyboard. This is referred to as a Command Line Interface (CLI).
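As a small illustration of how programs and operating systems relate, the sketch below (in Python, using only its standard platform module) asks the operating system to identify itself; the exact strings printed depend on the machine it runs on.

    # A minimal sketch: asking the operating system to identify itself.
    # The exact output depends on the computer this runs on.
    import platform

    print("Operating system:", platform.system())   # e.g. 'Windows', 'Darwin' (Mac OS) or 'Linux'
    print("Version:", platform.release())
    print("Processor type:", platform.machine())    # e.g. 'x86_64'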
Disks and Storage

Disks and cards are used to store information. All information on computers is stored in files. The size of a file is measured in bytes. A byte is approximately one character (the letter 'a', the number '1', the symbol '?', etc.). A byte is made up of 8 bits. A bit is simply an on or an off signal which passes through the computer's circuitry. Every piece of software can be broken down into a series of on or off signals, or its binary code.
• About a thousand bytes is a kilobyte (Kb).
• About a million bytes is a megabyte (Mb).
• About a billion bytes is a gigabyte (Gb).
• About a trillion bytes is a terabyte (Tb).

* Editor's Note: I say 'about' because computer storage is counted in powers of 2, so a kilobyte is actually 1,024 bytes rather than an even 1,000. The reason for this goes beyond the scope of an introductory level document, but as it can cause some confusion I thought it should be mentioned.
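To make the arithmetic concrete, here is a short sketch in Python (the character and the exact sizes printed are just examples):

    # A byte is 8 bits; storage units are powers of 2 (1,024), not even thousands.
    KILOBYTE = 1024          # 2 ** 10 bytes
    MEGABYTE = 1024 ** 2     # 1,048,576 bytes
    GIGABYTE = 1024 ** 3

    print(f"1 Kb = {KILOBYTE:,} bytes")
    print(f"1 Mb = {MEGABYTE:,} bytes")
    print(f"1 Gb = {GIGABYTE:,} bytes")

    # The letter 'a' is stored as one byte; here are its 8 bits (binary code).
    print(format(ord('a'), '08b'))   # prints 01100001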

Disks are a common way of transporting information, such as bringing files home from work or sharing files. Floppy disks have become less useful as file sizes increase, and Compact Discs (CDs), flash drives and Digital Video Discs (DVDs) are becoming more popular. Most software is sold on a CD. Internal hard disks are the most common storage device.

Compact discs or CDs can store large amounts of information. One disk will store 650 Mb. One type is the CD-ROM, which stands for Compact Disk - Read Only Memory. Another type is the CD-RW, which stands for Compact Disk - Read/Write. CD drives can copy, or burn, information onto a blank CD. Common read-only CD blanks can only be written to once, though more expensive read/write CDs can be used over and over again.

DVD disks can store 4.7 Gb on a standard disk and 8.5 Gb on a dual layer disk, while a Blu-ray disk can store 25 Gb. Digital recorders allow you to store large files, such as movies, on a single disk.

Hard disks store the majority of information on today's modern computer. Some of the first hard disks stored 10 to 40 Mb. Today the standard hard disk stores 150 Gb or more (this number is constantly increasing). Information can be stored and deleted as necessary. As files get larger, the speed at which hard disks can read and write becomes more important.

Flash drives or thumb drives range in size. Floppy disks or diskettes come in two basic sizes, 5.25 inch and 3.5 inch, and both have low and high density versions. 3.5 inch high density disks are the most common, though many modern computers are now sold without floppy disk drives.
Disk size                 Amount of storage   Approximate printed 8.5 x 11 inch pages
3.5 inch high density     1.44 Mb             720 pages
CD                        650 Mb              a small library
DVD                       4.7 Gb              a feature length movie
DVD dual layer            8.5 Gb              a long feature length movie with extras
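The 'pages' column can be sanity-checked with simple arithmetic. The sketch below (Python) assumes a plain-text page holds roughly 2,000 characters, i.e. about 2,000 bytes; that per-page figure is an assumption for illustration only:

    # Rough page-count estimates; assumes ~2,000 bytes of plain text per page.
    BYTES_PER_PAGE = 2000

    media = {
        "3.5 inch high density floppy": 1.44 * 1024 ** 2,
        "CD":                           650 * 1024 ** 2,
        "DVD":                          4.7 * 1024 ** 3,
    }

    for name, capacity_bytes in media.items():
        pages = capacity_bytes / BYTES_PER_PAGE
        print(f"{name}: about {pages:,.0f} pages of plain text")

For the floppy disk this gives roughly 750 pages, which lines up with the 720 pages listed in the table above.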

There are many other storage devices, including tapes, Panasonic's LS-120 3.5 inch diskettes, Iomega's Zip and Jaz disks, VCR tape and many others. Innovation in storage technology is advancing rapidly, and some technologies quickly become obsolete.

Information is stored in an electromagnetic form much like a cassette or video tape. Note: Keep disks away from strong electric or magnetic fields including x-rays. Be aware of high electromagnetic areas in the room such as televisions, speakers, high tension wires, etc... Use disks only at room temperature and keep them out of direct sunlight. If possible avoid passing electromagnetic storage devices through airport x-rays. In theory information stored on a disk will last indefinitely but the physical storage device will wear out with usage and time so be sure to back up (copy) your important files to a second storage device.

What is a computer?


A computer is a machine that manipulates data according to a list of instructions. A computer can also be defined as an electronic machine that accepts input (data), processes it and gives out results (information).
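That input-process-output definition can be shown in a few lines of code. The sketch below is in Python; the 'processing' step (doubling a number) is just a placeholder example:

    # Input -> process -> output, in the smallest possible form.
    data = input("Enter a number: ")        # input (data)
    result = int(data) * 2                  # processing
    print("Twice that number is", result)   # output (information)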

Computer organization at a glance


A basic computer consists of three major components: the CPU (Central Processing Unit), I/O (Input/Output), and Memory, as illustrated in Figure 1.

[Figure 1: A basic computer - CPU, Input/Output and Memory]

Data comes through Input and the CPU processes the data based on a program which is in Memory. The result is returned to Memory or is presented to the user. The CPU itself consists of the Arithmetic and Logic Unit (ALU), the Control Unit (CU) and registers.
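To make that cycle concrete, here is a toy sketch in Python. It is not any real processor's instruction set - the instruction names are invented for illustration - but it shows a program sitting in memory, a control loop fetching one instruction at a time, the arithmetic being done, and a single register holding the working value:

    # A toy version of Figure 1: program in memory, one register, a fetch-and-execute loop.
    memory = [("LOAD", 5), ("ADD", 3), ("ADD", 2), ("PRINT", None)]  # the program
    register = 0                                                     # a single register

    for opcode, operand in memory:       # control unit: fetch and decode each instruction
        if opcode == "LOAD":
            register = operand
        elif opcode == "ADD":            # arithmetic done by the 'ALU'
            register = register + operand
        elif opcode == "PRINT":          # output presented to the user
            print(register)              # prints 10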


Supercomputer

The Columbia Supercomputer - once one of the fastest.

Supercomputers are fast because they are really many computers working together. Supercomputers were introduced in the 1960s as the world's most advanced computers. These computers were used for complex calculations such as forecasting weather and quantum physics. Today, supercomputers are often one-of-a-kind machines: fast and very advanced. The term supercomputer is always evolving, as today's supercomputers tend to become tomorrow's normal computers. As of November 2008, the fastest supercomputer was the IBM Roadrunner. It has a theoretical processing peak of 1.71 petaflops and has reached a measured peak of 1.456 petaflops.

Mainframe


Mainframe computer

Mainframes are computers where all the processing is done centrally, and the user terminals are called "dumb terminals" since they only handle input and output (and do not process). Mainframes are used mainly by large organizations for critical applications, typically bulk data processing such as a census. Examples: banks, airlines, insurance companies, and colleges.

Workstation


Sun SPARCstation

Workstations are high-end, expensive computers that are made for complex procedures and are intended for one user at a time. Typical complex procedures include science, math and engineering calculations, and workstations are useful for computer-aided design and manufacturing. Workstations are sometimes improperly named for marketing reasons; real workstations are not usually sold at retail. The movie Toy Story was made on a set of Sun (SPARC) workstations.[1] Perhaps the first computer that might qualify as a "workstation" was the IBM 1620.

The Personal Computer or PC


A personal computer (PC)

PC is an abbreviation for Personal Computer; it is also known as a microcomputer. Its physical characteristics and low cost are appealing and useful for its users. The capabilities of a personal computer have changed greatly since the introduction of electronic computers. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single individual. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after about 1975. Early personal computers, generally called microcomputers, were often sold in kit form and in limited volumes, and were of interest mostly to hobbyists and technicians. By the late 1970s, mass-market pre-assembled computers allowed a wider range of people to use computers, focusing more on software applications and less on development of the processor hardware. Throughout the 1970s and 1980s, home computers were developed for household use, offering some personal productivity, programming and games, while somewhat larger and more expensive systems (although still low-cost compared with minicomputers and mainframes) were aimed at office and small business use. Today a personal computer is an all-round device that can be used as a productivity tool, a media server and a gaming machine. The modular construction of the personal computer allows components to be easily swapped out when broken or upgraded.

Microcontroller


A microcontroller

Microcontrollers are miniature computers that store data and carry out simple commands and tasks, often with little or no user interaction with the processor. These single-circuit devices have minimal memory and program length but can be integrated with other processors for more complex functionality. Many such systems are known as embedded systems. Examples of embedded systems include smartphones and car safety systems. Microcontrollers are important; they are used every day in devices such as appliances and automobiles.

Server


Inside of a Rack unit Server

Servers are similar to mainframes in that they serve many users, with the main difference that the users (called clients) usually do their own processing. The server's processes are devoted to sharing files and managing log-on rights. A server is a central computer that contains collections of data and programs. Also called a network server, this system allows all connected users to share and store electronic data and applications. Two important types of servers are file servers and application servers.

Hardware


Hardware refers to the physical elements of a computer, also referred to as the machinery or the equipment of the computer. Examples of hardware in a computer are the keyboard, the monitor, the mouse and the processing unit. However, most of a computer's hardware cannot be seen; in other words, it is not an external element of the computer, but rather an internal one, surrounded by the computer's casing. A computer's hardware is made up of many different parts, but perhaps the most important of these is the motherboard. The motherboard is made up of even more parts that power and control the computer. In contrast to software, hardware is a physical entity, while software is a non-physical entity. Hardware and software are interconnected: without software, the hardware of a computer would have no function. However, without hardware to perform the tasks directed by software via the central processing unit, software would be useless.

Software


Software, commonly known as programs, consists of all the electronic instructions that tell the hardware how to perform a task. These instructions come from a software developer in a form that will be accepted by the operating system they are written for. For example, a program that is designed for the Windows operating system will only work for that operating system. Compatibility of software varies as the design of the software and the operating system differ; software designed for Windows XP may experience compatibility issues when running under Windows 2000 or NT. Software can also be described as a collection of routines, rules and symbolic languages that direct the functioning of the hardware.[1] Software is capable of performing specific tasks, as opposed to hardware, which can only perform the mechanical tasks it is designed for. Practical computer systems divide software into three major classes:
1. System software: helps run the computer hardware and computer system. System software includes operating systems, device drivers, diagnostic tools and more.
2. Programming software: assists a programmer in writing computer programs.
3. Application software: allows users to accomplish one or more specific tasks.

The term "software" is sometimes used in a broader context to describe any electronic media content which embodies expressions of ideas such as film, tapes, records, etc. Software is the electronic instruction that tells the computer to do a task.

Firmware


Firmware is both hardware and software: it is a computer chip that performs only one function, such as the chips found on a video card or sound card. Firmware can be explained as programming instructions that are stored in a read-only memory and can only be used by connecting them with software.[2] It is used so that processing happens more quickly, as in video and sound cards.

Windows XP

Windows XP
Part of the Microsoft Windows family

Developer: Microsoft Corporation
Website: Windows XP Homepage
Release date: RTM August 24, 2001; retail October 25, 2001
Current version: 5.1.2600.5512 Service Pack 3 (x86 SP3), 21 April 2008
Source model: Closed source, shared source[1]
License: Microsoft EULA
Kernel type: Hybrid
Update method: Windows Update
Platform support: IA-32, x86-64, IA-64
Support status: Extended support until 8 April 2014 (only Service Pack 3 x86 and Service Pack 2 x64).[2] Security updates will be provided free of cost; paid support is still available.

Further reading: Windows XP editions; Features new to Windows XP; Development of Windows XP; Criticism of Windows XP

Windows XP is an operating system produced by Microsoft for use on personal computers, including home and business desktops, laptops, and media centers. It was first released in August 2001, and is currently one of the most popular versions of Windows. The name "XP" is short for "eXPerience."[3]

Windows XP is the successor to both Windows 2000 and Windows Me, and is the first consumer-oriented operating system produced by Microsoft to be built on the Windows NT kernel and architecture. Windows XP was released for retail sale on October 25, 2001, and over 400 million copies were in use in January 2006, according to an estimate in that month by an IDC analyst.[4] It was succeeded by Windows Vista, which was released to volume license customers on November 8, 2006, and worldwide to the general public on January 30, 2007.

Direct OEM and retail sales of Windows XP ceased on June 30, 2008. Microsoft continued to sell XP through their System Builders (smaller OEMs who sell assembled computers) program until January 31, 2009.[5][6] XP may continue to be available as these sources run through their inventory or by purchasing Windows 7 Ultimate, Windows 7 Pro, Windows Vista Ultimate or Windows Vista Business, and then downgrading to Windows XP.[7][8]

The most common editions of the operating system are Windows XP Home Edition, which is targeted at home users, and Windows XP Professional, which offers additional features such as support for Windows Server domains and two physical processors, and is targeted at power users, business and enterprise clients. Windows XP Media Center Edition has additional multimedia features enhancing the ability to record and watch TV shows, view DVD movies, and listen to music. Windows XP Tablet PC Edition is designed to run stylus applications built using the Tablet PC platform. Windows XP was eventually released for two additional architectures, Windows XP 64-bit Edition for IA-64 (Itanium) processors and Windows XP Professional x64 Edition for x86-64. There is also Windows XP Embedded, a component version of the Windows XP Professional, and editions for specific markets such as Windows XP Starter Edition. By mid 2009, a manufacturer revealed the first Windows XP powered cellular telephone.[9]

The NT-based versions of Windows, which are programmed in C, C++, and assembly[10], are known for their improved stability and efficiency over the 9x versions of Microsoft Windows.[11][12] Windows XP presents a significantly redesigned graphical user interface, a change Microsoft promoted as more user-friendly than previous versions of Windows. A new software management facility called Side-by-Side Assembly was introduced to ameliorate the "DLL hell" that plagues 9x versions of Windows.[13][14] It is also the first version of Windows to use product activation to combat illegal copying, a restriction that did not sit well with some users[who?] and privacy advocates[who?]. Windows XP has also been criticized by some users for security vulnerabilities, tight integration of applications such as Internet Explorer 6 and Windows Media Player, and for aspects of its default user interface. Later versions with Service Pack 2, Service Pack 3, and Internet Explorer 8 addressed some of these concerns.

During development, the project was codenamed "Whistler", after Whistler, British Columbia, as many Microsoft employees skied at the Whistler-Blackcomb ski resort.[15]

As of the end of July 2010, Windows XP is the most widely used operating system in the world with a 54.6% market share, having peaked at 76.1% in January 2007.[16]

PC Paintbrush

PC Paintbrush (also known simply as Paintbrush) was graphics editing software created by the ZSoft Corporation in 1985 for computers running the MS-DOS operating system. It was originally developed as a response to the first paintbrush program for the IBM PC, PCPaint, which had been released the prior year by Mouse Systems, the company responsible for bringing the mouse to the IBM PC for the first time. In 1984 Mouse Systems had released PCPaint to compete with Apple Paint on the Apple II computer and was already positioned to compete with MacPaint on Apple Computer's new Macintosh platform. Unlike MacPaint, PCPaint enabled users to work in color. When Paintbrush was released the following year, PCPaint had already added 16-color support for the PC's 64-color Enhanced Graphics Adapter (EGA), and Paintbrush matched this EGA support as well. (The EGA supported 64 colors, of which any 16 could be on the screen at a time in normal use.)

A screenshot of PC Paintbrush IV version 1.0

Also following the lead of Mouse Systems and PCPaint, one of the first pieces of software on the PC to use a mouse, the earliest versions of Paintbrush were distributed (by Microsoft) with a mouse included. Both Microsoft and their competitor Mouse Systems bundled their mice with Mouse Systems' PCPaint in 1984. At Christmas 1984, amidst record sales volumes in the home computer market, Microsoft had created a "sidecar"[1] bundle for the PCjr, complete with their mouse, but with their competitor's product PCPaint. With the release of Paintbrush the following year, Microsoft no longer needed to sell the software of their competitor in the PC mouse hardware market in order to have the same market advantage.

Microsoft's mechanical mice outsold Mouse Systems' optical mice after a few years, but PCPaint outsold Paintbrush until the late 1980s. Unlike most other applications before and since, Paintbrush version numbers were recorded with Roman numerals. Along with the release of Paintbrush, ZSoft, following in the footsteps of PCPaint's Pictor PIC format, the first popular image format for the PC, created the PCX image format. The first version of Paintbrush only allowed the use of a limited EGA 16-color palette. By version III, 256 colors and extended SVGA resolutions were supported through the use of hundreds of custom-tailored graphics drivers. The PCX format grew in capability accordingly. By its final version, Paintbrush was able to open and save PCX, TIFF, and GIF files.

A screenshot of PC Paintbrush 5.0+ for DOS

Paintbrush was later adapted to the Windows 3.1 operating system as Publisher's Paintbrush. Publisher's Paintbrush allowed importation of images via TWAIN-based capture devices like hand-held and flatbed scanners. Support for 24-bit color and simple photo retouching tools were also added, as well as the ability to open more than one image at a time. The program also added many simulations of real-world media such as oil paints, watercolors, and colored pencils, and it had a number of new smudge tools that took advantage of the increased color depth. Both PC Paintbrush and Publisher's Paintbrush were supplemented and later replaced with the more budget oriented PhotoFinish.

After ZSoft was sold, resold, and then finally absorbed by The Learning Company, an extremely low priced and simple graphics application was released under the title PC Paintbrush Designer.

In 1969, the US Department of Defense started a project to allow researchers and military personnel to communicate with each other in an emergency. The project was called ARPAnet and it is the foundation of the Internet. Throughout the 1970s, what would later become the Internet was developed. While mostly military personnel and scientists used it in its early days, the advent of the World Wide Web in the early 1990s changed all that. Today, the Internet is not owned or operated by any one entity. This worldwide computer network allows people to communicate and exchange information in new ways.
What is the Internet?

The Internet is the largest computer network in the world, connecting millions of computers. A network is a group of two or more computer systems linked together.
There are two types of computer networks:

• Local Area Network (LAN): A LAN is two or more connected computers sharing certain resources in a relatively small geographic location (the same building, for example).

• Wide Area Network (WAN): A WAN typically consists of two or more LANs. The computers are farther apart and are linked by telephone lines, dedicated telephone lines, or radio waves. The Internet is the largest Wide Area Network (WAN) in existence.

Servers

All computers on the Internet (a wide area network, or WAN) can be lumped into two groups: servers and clients. In a network, clients and servers communicate with one another.
A server is the common source that:

• Provides shared services (for example, network security measures) with other machines, AND

• Manages resources (for example, one printer many people use) in a network.

The term server is often used to describe the hardware (computer), but the term also refers to the software (application) running on the computer. Many servers are dedicated, meaning they only perform specific tasks.
For example,
• An email server is a computer that has software running on it allowing it to "serve" email-related services.

• A web server has software running on it that allows it to "serve" web-related services.

Clients

Remember, all computers on the Internet (a wide area network, or WAN) can be lumped into two groups: servers and clients, which communicate with one another. Independent computers connected to a server are called clients. Most likely, your home or office computer does not provide services to other computers. Therefore, it is a client.

Clients run multiple client software applications that perform specific functions.
For example,
• An email application such as Microsoft Outlook is client software.

• Your web browser (such as Internet Explorer or Netscape) is client software.
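The request-and-reply relationship between client software and a server can be sketched in a few lines of code. The example below uses Python's standard socket library; it is a bare-bones illustration rather than a real email or web server, and the address 127.0.0.1 and port 50007 are arbitrary choices for the demonstration:

    # A bare-bones client/server demonstration using Python's standard library.
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007      # arbitrary local address and port

    # Server side: set up a listening socket, then "serve" one client.
    server_socket = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server_socket.bind((HOST, PORT))
    server_socket.listen(1)

    def serve_one_client():
        connection, _address = server_socket.accept()    # wait for a client
        with connection:
            request = connection.recv(1024)
            connection.sendall(b"Hello, " + request)      # the service provided
        server_socket.close()

    threading.Thread(target=serve_one_client).start()

    # Client side: connect to the server, make a request and print the reply.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as client_socket:
        client_socket.connect((HOST, PORT))
        client_socket.sendall(b"client")
        print(client_socket.recv(1024).decode())          # prints: Hello, client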

Internet

Visualization of the various routes through a portion of the Internet. From 'The Opte Project'

The Internet is a global system of interconnected computer networks that use the standard Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private, public, academic, business, and government networks of local to global scope that are linked by a broad array of electronic and optical networking technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail.

Most traditional communications media, such as telephone and television services, are reshaped or redefined using the technologies of the Internet, giving rise to services such as Voice over Internet Protocol (VoIP) and IPTV. Newspaper publishing has been reshaped into Web sites, blogging, and web feeds. The Internet has enabled or accelerated the creation of new forms of human interactions through instant messaging, Internet forums, and social networking sites.

The origins of the Internet reach back to the 1960s when the United States funded research projects of its military agencies to build robust, fault-tolerant, and distributed computer networks. This research and a period of civilian funding of a new U.S. backbone by the National Science Foundation spawned worldwide participation in the development of new networking technologies, led to the commercialization of an international network in the mid 1990s, and resulted in the subsequent popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population uses the services of the Internet.

The Internet has no centralized governance in either technological implementation or policies for access and usage; each constituent network sets its own standards. Only the overreaching definitions of the two principal name spaces in the Internet, the Internet Protocol address space and the Domain Name System, are directed by a maintainer organization, the Internet Corporation for Assigned Names and Numbers (ICANN). The technical underpinning and standardization of the core protocols (IPv4 and IPv6) is an activity of the Internet Engineering Task Force (IETF), a non-profit organization of loosely affiliated international participants that anyone may associate with by contributing technical expertise.


Terminology
See also: Internet capitalization conventions

The terms Internet and World Wide Web are often used in everyday speech without much distinction. However, the Internet and the World Wide Web are not one and the same. The Internet is a global data communications system. It is a hardware and software infrastructure that provides connectivity between computers. In contrast, the Web is one of the services communicated via the Internet. It is a collection of interconnected documents and other resources, linked by hyperlinks and URLs.[1]

The term the Internet, when referring to the entire global system of IP networks, has traditionally been treated as a proper noun and written with an initial capital letter. In the media and popular culture a trend has developed to regard it as a generic term or common noun and thus write it as "the internet", without capitalization. The Internet is also often simply referred to as the net. In many technical illustrations when the precise location or interrelation of Internet resources is not important, the Internet is often referred to as the cloud, and literally depicted as such.

History
Main article: History of the Internet

The USSR's launch of Sputnik spurred the United States to create the Advanced Research Projects Agency (ARPA or DARPA) in February 1958 to regain a technological lead.[2][3] ARPA
created the Information Processing Technology Office (IPTO) to further the research of the Semi Automatic Ground Environment (SAGE) program, which had networked country-wide radar systems together for the first time. The IPTO's purpose was to find ways to address the US Military's concern about survivability of their communications networks, and as a first step interconnect their computers at the Pentagon, Cheyenne Mountain, and SAC HQ. J. C. R. Licklider, a promoter of universal networking, was selected to head the IPTO. Licklider moved from the Psycho-Acoustic Laboratory at Harvard University to MIT in 1950, after becoming interested in information technology. At MIT, he served on a committee that established Lincoln Laboratory and worked on the SAGE project. In 1957 he became a Vice President at BBN, where he bought the first production PDP-1 computer and conducted the first public demonstration of time-sharing.

Professor Leonard Kleinrock with one of the first ARPANET Interface Message Processors at UCLA

At the IPTO, Licklider's successor Ivan Sutherland in 1965 got Lawrence Roberts to start a project to make a network, and Roberts based the technology on the work of Paul Baran,[4] who had written an exhaustive study for the United States Air Force that recommended packet switching (opposed to circuit switching) to achieve better network robustness and disaster survivability. Roberts had worked at the MIT Lincoln Laboratory originally established to work on the design of the SAGE system. UCLA professor Leonard Kleinrock had provided the theoretical foundations for packet networks in 1962, and later, in the 1970s, for hierarchical routing, concepts which have been the underpinning of the development towards today's Internet. Sutherland's successor Robert Taylor convinced Roberts to build on his early packet switching successes and come and be the IPTO Chief Scientist. Once there, Roberts prepared a report called Resource Sharing Computer Networks which was approved by Taylor in June 1968 and laid the foundation for the launch of the working ARPANET the following year.

After much work, the first two nodes of what would become the ARPANET were interconnected between Kleinrock's Network Measurement Center at the UCLA's School of Engineering and Applied Science and Douglas Engelbart's NLS system at SRI International (SRI) in Menlo Park, California, on October 29, 1969. The third site on the ARPANET was the Culler-Fried Interactive Mathematics centre at the University of California at Santa Barbara, and the fourth was the University of Utah Graphics Department. In an early sign of future growth, there were already fifteen sites connected to the young ARPANET by the end of 1971. The ARPANET was one of the "eve" networks of today's Internet. In an independent development, Donald Davies at the UK National Physical Laboratory also discovered the concept of packet switching in the early 1960s, first giving a talk on the subject in 1965, after which the teams in the new field from two sides of the Atlantic ocean first became acquainted. It was actually Davies' coinage of the wording "packet" and "packet switching" that was adopted as the standard terminology. Davies also built a packet switched network in the UK called the Mark I in 1970. [5] Following the demonstration that packet switching worked on the ARPANET, the British Post Office, Telenet, DATAPAC and TRANSPAC collaborated to create the first international packet-switched network service. In the UK, this was referred to as the International Packet Switched Service (IPSS), in 1978. The collection of X.25-based networks grew from Europe and the US to cover Canada, Hong Kong and Australia by 1981. The X.25 packet switching standard was developed in the CCITT (now called ITU-T) around 1976.

A plaque commemorating the birth of the Internet at Stanford University

X.25 was independent of the TCP/IP protocols that arose from the experimental work of DARPA on the ARPANET, Packet Radio Net and Packet Satellite Net during the same time period. The early ARPANET ran on the Network Control Program (NCP), a standard designed and first implemented in December 1970 by a team called the Network Working Group (NWG) led by Steve Crocker. To respond to the network's rapid growth as more and more locations connected,
Vinton Cerf and Robert Kahn developed the first description of the now widely used TCP protocols during 1973 and published a paper on the subject in May 1974. Use of the term "Internet" to describe a single global TCP/IP network originated in December 1974 with the publication of RFC 675, the first full specification of TCP that was written by Vinton Cerf, Yogen Dalal and Carl Sunshine, then at Stanford University. During the next nine years, work proceeded to refine the protocols and to implement them on a wide range of operating systems. The first TCP/IP-based wide-area network was operational by January 1, 1983 when all hosts on the ARPANET were switched over from the older NCP protocols. In 1985, the United States' National Science Foundation (NSF) commissioned the construction of the NSFNET, a university 56 kilobit/second network backbone using computers called "fuzzballs" by their inventor, David L. Mills. The following year, NSF sponsored the conversion to a higher-speed 1.5 megabit/second network. A key decision to use the DARPA TCP/IP protocols was made by Dennis Jennings, then in charge of the Supercomputer program at NSF. The opening of the network to commercial interests began in 1988. The US Federal Networking Council approved the interconnection of the NSFNET to the commercial MCI Mail system in that year and the link was made in the summer of 1989. Other commercial electronic e-mail services were soon connected, including OnTyme, Telemail and Compuserve. In that same year, three commercial Internet service providers (ISPs) were created: UUNET, PSINet and CERFNET. Important, separate networks that offered gateways into, then later merged with, the Internet include Usenet and BITNET. Various other commercial and educational networks, such as Telenet, Tymnet, Compuserve and JANET were interconnected with the growing Internet. Telenet (later called Sprintnet) was a large privately funded national computer network with free dial-up access in cities throughout the U.S. that had been in operation since the 1970s. This network was eventually interconnected with the others in the 1980s as the TCP/IP protocol became increasingly popular. The ability of TCP/IP to work over virtually any pre-existing communication networks allowed for a great ease of growth, although the rapid growth of the Internet was due primarily to the availability of an array of standardized commercial routers from many companies, the availability of commercial Ethernet equipment for local-area networking, and the widespread implementation and rigorous standardization of TCP/IP on UNIX and virtually every other common operating system.

This NeXT Computer was used by Sir Tim Berners-Lee at CERN and became the world's first Web server.

Although the basic applications and guidelines that make the Internet possible had existed for almost two decades, the network did not gain a public face until the 1990s. On 6 August 1991, CERN, a pan European organization for particle research, publicized the new World Wide Web project. The Web was invented by British scientist Tim Berners-Lee in 1989. An early popular web browser was ViolaWWW, patterned after HyperCard and built using the X Window System. It was eventually replaced in popularity by the Mosaic web browser. In 1993, the National Center for Supercomputing Applications at the University of Illinois released version 1.0 of Mosaic, and by late 1994 there was growing public interest in the previously academic, technical Internet. By 1996 usage of the word Internet had become commonplace, and consequently, so had its use as a synecdoche in reference to the World Wide Web. Meanwhile, over the course of the decade, the Internet successfully accommodated the majority of previously existing public computer networks (although some networks, such as FidoNet, have remained separate). During the 1990s, it was estimated that the Internet grew by 100 percent per year, with a brief period of explosive growth in 1996 and 1997.[6] This growth is often attributed to the lack of central administration, which allows organic growth of the network, as well as the non-proprietary open nature of the Internet protocols, which encourages vendor interoperability and prevents any one company from exerting too much control over the network.[7] The estimated population of Internet users is 1.97 billion as of June 30, 2010.[8]

Technology
Protocols

Main article: Internet Protocol Suite

The complex communications infrastructure of the Internet consists of its hardware components and a system of software layers that control various aspects of the architecture. While the hardware can often be used to support other software systems, it is the design and the rigorous standardization process of the software architecture that characterizes the Internet and provides the foundation for its scalability and success. The responsibility for the architectural design of the Internet software systems has been delegated to the Internet Engineering Task Force (IETF). [9] The IETF conducts standard-setting work groups, open to any individual, about the various aspects of Internet architecture. Resulting discussions and final standards are published in a series of publications, each called a Request for Comments (RFC), freely available on the IETF web site. The principal methods of networking that enable the Internet are contained in specially designated RFCs that constitute the Internet Standards. Other less rigorous documents are simply informative, experimental, or historical, or document the best current practices (BCP) when implementing Internet technologies. The Internet Standards describe a framework known as the Internet Protocol Suite. This is a model architecture that divides methods into a layered system of protocols (RFC 1122, RFC 1123). The layers correspond to the environment or scope in which their services operate. At the top is the Application Layer, the space for the application-specific networking methods used in software applications, e.g., a web browser program. Below this top layer, the Transport Layer connects applications on different hosts via the network (e.g., client–server model) with appropriate data exchange methods. Underlying these layers are the core networking
technologies, consisting of two layers. The Internet Layer enables computers to identify and locate each other via Internet Protocol (IP) addresses, and allows them to connect to one another via intermediate (transit) networks. Lastly, at the bottom of the architecture, is a software layer, the Link Layer, that provides connectivity between hosts on the same local network link, such as a local area network (LAN) or a dial-up connection.

The model, also known as TCP/IP, is designed to be independent of the underlying hardware, which the model therefore does not concern itself with in any detail. Other models have been developed, such as the Open Systems Interconnection (OSI) model, but they are not compatible in the details of description or implementation; many similarities exist, however, and the TCP/IP protocols are usually included in the discussion of OSI networking.

The most prominent component of the Internet model is the Internet Protocol (IP), which provides addressing systems (IP addresses) for computers on the Internet. IP enables internetworking and essentially establishes the Internet itself. IP Version 4 (IPv4) is the initial version used on the first generation of today's Internet and is still in dominant use. It was designed to address up to approximately 4.3 billion (about 10^9) Internet hosts. However, the explosive growth of the Internet has led to IPv4 address exhaustion, which is estimated to enter its final stage in approximately 2011.[10] A new protocol version, IPv6, was developed in the mid 1990s which provides vastly larger addressing capabilities and more efficient routing of Internet traffic. IPv6 is currently in commercial deployment phase around the world and Internet address registries (RIRs) have begun to urge all resource managers to plan rapid adoption and conversion.[11]

IPv6 is not interoperable with IPv4. It essentially establishes a "parallel" version of the Internet not directly accessible with IPv4 software. This means software upgrades or translator facilities are necessary for every networking device that needs to communicate on the IPv6 Internet. Most modern computer operating systems are already converted to operate with both versions of the Internet Protocol. Network infrastructures, however, are still lagging in this development.

Aside from the complex physical connections that make up its infrastructure, the Internet is facilitated by bi- or multi-lateral commercial contracts (e.g., peering agreements), and by technical specifications or protocols that describe how to exchange data over the network. Indeed, the Internet is defined by its interconnections and routing policies.
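As a small, concrete illustration of IP addressing, the sketch below uses Python's standard socket library to ask for the IPv4 and IPv6 addresses behind a host name. It needs a working Internet connection, the host name example.com is just a convenient test name, and the addresses printed will vary:

    # Look up the IP addresses behind a host name (IPv4 and IPv6).
    import socket

    for family, _type, _proto, _canonname, sockaddr in socket.getaddrinfo("example.com", 80):
        if family == socket.AF_INET:
            print("IPv4 address:", sockaddr[0])
        elif family == socket.AF_INET6:
            print("IPv6 address:", sockaddr[0])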
Structure

The Internet structure and its usage characteristics have been studied extensively. It has been determined that both the Internet IP routing structure and hypertext links of the World Wide Web are examples of scale-free networks. Similar to the way the commercial Internet providers connect via Internet exchange points, research networks tend to interconnect into large subnetworks such as GEANT, GLORIAD, Internet2 (successor of the Abilene Network), and the UK's national research and education network JANET. These in turn are built around smaller networks (see also the list of academic computer network organizations). Many computer scientists describe the Internet as a "prime example of a large-scale, highly engineered, yet highly complex system".[12] The Internet is extremely heterogeneous; for instance, data transfer rates and physical characteristics of connections vary widely. The Internet exhibits "emergent phenomena" that depend on its large-scale organization. For example, data
transfer rates exhibit temporal self-similarity. The principles of the routing and addressing methods for traffic in the Internet reach back to their origins in the 1960s, when the eventual scale and popularity of the network could not be anticipated. Thus, the possibility of developing alternative structures is being investigated.[13]
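For readers who want to experiment with the idea of a scale-free network mentioned above, the sketch below generates one with the Barabási–Albert preferential-attachment model. It assumes the third-party networkx package is installed (pip install networkx); it is an illustration of the concept of hubs and long-tailed degree distributions, not a model of the Internet itself.

    # A minimal sketch, assuming networkx is available, of a scale-free graph.
    import networkx as nx

    g = nx.barabasi_albert_graph(n=1000, m=2, seed=42)    # 1000 nodes, 2 edges per new node
    degrees = sorted((d for _, d in g.degree()), reverse=True)
    print("highest degrees:", degrees[:5])                # a few heavily connected "hubs"
    print("median degree:", degrees[len(degrees) // 2])   # most nodes have only a few links

The contrast between the handful of very highly connected nodes and the typical node is the signature of the scale-free structure described in the studies cited above.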

Governance
Main article: Internet governance

ICANN headquarters in Marina Del Rey, California, United States

The Internet is a globally distributed network comprising many voluntarily interconnected autonomous networks. It operates without a central governing body. However, to maintain interoperability, all technical and policy aspects of the underlying core infrastructure and the principal name spaces are administered by the Internet Corporation for Assigned Names and Numbers (ICANN), headquartered in Marina del Rey, California. ICANN is the authority that coordinates the assignment of unique identifiers for use on the Internet, including domain names, Internet Protocol (IP) addresses, application port numbers in the transport protocols, and many other parameters. Globally unified name spaces, in which names and numbers are uniquely assigned, are essential for the global reach of the Internet. ICANN is governed by an international board of directors drawn from across the Internet technical, business, academic, and other non-commercial communities. The US government continues to have the primary role in approving changes to the DNS root zone that lies at the heart of the domain name system. ICANN's role in coordinating the assignment of unique identifiers distinguishes it as perhaps the only central coordinating body on the global Internet. On November 16, 2005, the World Summit on the Information Society, held in Tunis, established the Internet Governance Forum (IGF) to discuss Internet-related issues.

Modern uses
The Internet is allowing greater flexibility in working hours and location, especially with the spread of unmetered high-speed connections and web applications. The Internet can now be accessed almost anywhere by numerous means, especially through mobile Internet devices. Mobile phones, datacards, handheld game consoles and cellular routers
allow users to connect to the Internet from anywhere there is a wireless network supporting that device's technology. Within the limitations imposed by small screens and other limited facilities of such pocket-sized devices, services of the Internet, including email and the web, may be available. Service providers may restrict the services offered, and wireless data transmission charges may be significantly higher than for other access methods.

The Internet has also become a large market for companies; some of the biggest companies today have grown by taking advantage of the efficient nature of low-cost advertising and commerce through the Internet, also known as e-commerce. It is the fastest way to spread information to a vast number of people simultaneously. The Internet has also revolutionized shopping: for example, a person can order a CD online and receive it in the mail within a couple of days, or download it directly in some cases. The Internet has also greatly facilitated personalized marketing, which allows a company to market a product to a specific person or a specific group of people more effectively than any other advertising medium. Examples of personalized marketing include online communities such as MySpace, Friendster, Facebook, Twitter, Orkut and others, which thousands of Internet users join to advertise themselves and make friends online. Many of these users are young teens and adolescents ranging from 13 to 25 years old. In turn, when they advertise themselves they advertise interests and hobbies, which online marketing companies can use to gauge what those users are likely to purchase online, and to advertise their own companies' products to those users.

The low cost and nearly instantaneous sharing of ideas, knowledge, and skills has made collaborative work dramatically easier, with the help of collaborative software. Not only can a group cheaply communicate and share ideas, but the wide reach of the Internet allows such groups to form easily in the first place. An example of this is the free software movement, which has produced, among other programs, Linux, Mozilla Firefox, and OpenOffice.org. Internet "chat", whether in the form of IRC chat rooms or channels or via instant messaging systems, allows colleagues to stay in touch in a very convenient way while working at their computers during the day. Messages can be exchanged even more quickly and conveniently than via e-mail. Extensions to these systems may allow files to be exchanged, "whiteboard" drawings to be shared, or voice and video contact between team members. Version control systems allow collaborating teams to work on shared sets of documents without either accidentally overwriting each other's work or having members wait until they get "sent" documents to be able to make their contributions. Business and project teams can share calendars as well as documents and other information. Such collaboration occurs in a wide variety of areas including scientific research, software development, conference planning, political activism and creative writing. Social and political collaboration is also becoming more widespread as both Internet access and computer literacy grow. From the flash mob 'events' of the early 2000s to the use of social networking in the 2009 Iranian election protests, the Internet allows people to work together more effectively and in many more ways than was possible without it.
The Internet allows computer users to remotely access other computers and information stores easily, wherever they may be across the world. They may do this with or without the use of security, authentication and encryption technologies, depending on the requirements. This is encouraging new ways of working from home, collaboration and information sharing in many
industries. An accountant sitting at home can audit the books of a company based in another country, on a server situated in a third country that is remotely maintained by IT specialists in a fourth. These accounts could have been created by home-working bookkeepers in other remote locations, based on information e-mailed to them from offices all over the world. Some of these things were possible before the widespread use of the Internet, but the cost of private leased lines would have made many of them infeasible in practice. An office worker away from their desk, perhaps on the other side of the world on a business trip or a holiday, can open a remote desktop session into their normal office PC using a secure Virtual Private Network (VPN) connection via the Internet. This gives the worker complete access to all of their normal files and data, including e-mail and other applications, while away from the office. This concept has been referred to among system administrators as the Virtual Private Nightmare,[14] because it extends the secure perimeter of a corporate network into its employees' homes.

Services
Information

Many people use the terms Internet and World Wide Web, or just the Web, interchangeably, but the two terms are not synonymous. The World Wide Web is a global set of documents, images and other resources, logically interrelated by hyperlinks and referenced with Uniform Resource Identifiers (URIs). URIs allow providers to identify services symbolically and allow clients to locate and address the web servers, file servers, and other databases that store documents and provide resources, which are then accessed using the Hypertext Transfer Protocol (HTTP), the primary carrier protocol of the Web. HTTP is only one of the hundreds of communication protocols used on the Internet. Web services may also use HTTP to allow software systems to communicate in order to share and exchange business logic and data. World Wide Web browser software, such as Microsoft's Internet Explorer, Mozilla Firefox, Opera, Apple's Safari, and Google Chrome, lets users navigate from one web page to another via hyperlinks embedded in the documents. These documents may also contain any combination of computer data, including graphics, sounds, text, video, multimedia and interactive content, including games, office applications and scientific demonstrations. Through keyword-driven Internet research using search engines like Yahoo! and Google, users worldwide have easy, instant access to a vast and diverse amount of online information. Compared to printed encyclopedias and traditional libraries, the World Wide Web has enabled the decentralization of information.

The Web has also enabled individuals and organizations to publish ideas and information to a potentially large audience online at greatly reduced expense and time delay. Publishing a web page, a blog, or building a website involves little initial cost, and many cost-free services are available. Publishing and maintaining large, professional web sites with attractive, diverse and up-to-date information is still a difficult and expensive proposition, however. Many individuals and some companies and groups use web logs or blogs, which are largely used as easily updatable online diaries. Some commercial organizations encourage staff to communicate advice in their areas of specialization in the hope that visitors will be impressed by the expert knowledge and free information, and be attracted to the corporation as a result. One example of
this practice is Microsoft, whose product developers publish their personal blogs in order to pique the public's interest in their work. Collections of personal web pages published by large service providers remain popular, and have become increasingly sophisticated. Whereas operations such as Angelfire and GeoCities have existed since the early days of the Web, newer offerings from, for example, Facebook and MySpace currently have large followings. These operations often brand themselves as social network services rather than simply as web page hosts. Advertising on popular web pages can be lucrative, and e-commerce or the sale of products and services directly via the Web continues to grow. In the early days, web pages were usually created as sets of complete and isolated HTML text files stored on a web server. More recently, websites are more often created using content management or wiki software with, initially, very little content. Contributors to these systems, who may be paid staff, members of a club or other organization or members of the public, fill underlying databases with content using editing pages designed for that purpose, while casual visitors view and read this content in its final HTML form. There may or may not be editorial, approval and security systems built into the process of taking newly entered content and making it available to the target visitors.
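To make the role of HTTP, mentioned above as the Web's carrier protocol, a little more concrete, here is a minimal sketch in Python using only the standard library. It performs the same basic request a browser makes for every page it loads; the URL is just an arbitrary example.

    # One HTTP request, the exchange underlying every page load in a browser.
    from urllib.request import urlopen

    with urlopen("http://www.example.com/") as response:
        print(response.status)                    # e.g. 200 (OK)
        print(response.headers["Content-Type"])   # e.g. text/html; charset=UTF-8
        html = response.read().decode("utf-8")    # the document itself, as HTML text

    print(html[:80])                              # first few characters of the page

The browser's extra work lies in interpreting that HTML, fetching the images and scripts it references, and rendering the result; the network exchange itself is no more than shown here.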
Communication

E-mail is an important communications service available on the Internet. The concept of sending electronic text messages between parties in a way analogous to mailing letters or memos predates the creation of the Internet. Today it can be important to distinguish between internet and internal e-mail systems. Internet e-mail may travel and be stored unencrypted on many other networks and machines out of both the sender's and the recipient's control. During this time it is quite possible for the content to be read and even tampered with by third parties, if anyone considers it important enough. Purely internal or intranet mail systems, where the information never leaves the corporate or organization's network, are much more secure, although in any organization there will be IT and other personnel whose job may involve monitoring, and occasionally accessing, the e-mail of other employees not addressed to them. Pictures, documents and other files can be sent as e-mail attachments. E-mails can be cc-ed to multiple e-mail addresses. Internet telephony is another common communications service made possible by the creation of the Internet. VoIP stands for Voice-over-Internet Protocol, referring to the protocol that underlies all Internet communication. The idea began in the early 1990s with walkie-talkie-like voice applications for personal computers. In recent years many VoIP systems have become as easy to use and as convenient as a normal telephone. The benefit is that, as the Internet carries the voice traffic, VoIP can be free or cost much less than a traditional telephone call, especially over long distances and especially for those with always-on Internet connections such as cable or ADSL. VoIP is maturing into a competitive alternative to traditional telephone service. Interoperability between different providers has improved and the ability to call or receive a call from a traditional telephone is available. Simple, inexpensive VoIP network adapters are available that eliminate the need for a personal computer. Voice quality can still vary from call to call but is often equal to and can even exceed that of traditional calls. Remaining problems for VoIP include emergency telephone number dialling
and reliability. Currently, only a few VoIP providers offer an emergency service, and it is not universally available. Traditional phones are line-powered and operate during a power failure; VoIP does not, without a backup power source for the phone equipment and the Internet access devices. VoIP has also become increasingly popular for gaming applications, as a form of communication between players. Popular VoIP clients for gaming include Ventrilo and Teamspeak. The Wii, PlayStation 3, and Xbox 360 also offer VoIP chat features.
Data transfer

File sharing is an example of transferring large amounts of data across the Internet. A computer file can be e-mailed to customers, colleagues and friends as an attachment. It can be uploaded to a website or FTP server for easy download by others. It can be put into a "shared location" or onto a file server for instant use by colleagues. The load of bulk downloads to many users can be eased by the use of "mirror" servers or peer-to-peer networks. In any of these cases, access to the file may be controlled by user authentication, the transit of the file over the Internet may be obscured by encryption, and money may change hands for access to the file. The price can be paid by the remote charging of funds from, for example, a credit card whose details are also passed, usually fully encrypted, across the Internet. The origin and authenticity of the file received may be checked by digital signatures or by MD5 or other message digests. These simple features of the Internet, on a worldwide basis, are changing the production, sale, and distribution of anything that can be reduced to a computer file for transmission. This includes all manner of print publications, software products, news, music, film, video, photography, graphics and the other arts. This in turn has caused seismic shifts in each of the existing industries that previously controlled the production and distribution of these products.

Streaming media refers to the practice whereby many existing radio and television broadcasters provide Internet "feeds" of their live audio and video streams (for example, the BBC). They may also allow time-shift viewing or listening, such as Preview, Classic Clips and Listen Again features. These providers have been joined by a range of pure Internet "broadcasters" who never had on-air licenses. This means that an Internet-connected device, such as a computer or something more specific, can be used to access online media in much the same way as was previously possible only with a television or radio receiver. The range of available types of content is much wider, from specialized technical webcasts to on-demand popular multimedia services. Podcasting is a variation on this theme, where material, usually audio, is downloaded and played back on a computer or shifted to a portable media player to be listened to on the move. These techniques using simple equipment allow anybody, with little censorship or licensing control, to broadcast audio-visual material worldwide.

Webcams can be seen as an even lower-budget extension of this phenomenon. While some webcams can give full-frame-rate video, the picture is usually either small or updates slowly. Internet users can watch animals around an African waterhole, ships in the Panama Canal, traffic at a local roundabout or monitor their own premises, live and in real time. Video chat rooms and video conferencing are also popular, with many uses being found for personal webcams, with and without two-way sound.

YouTube was founded on 15 February 2005 and is now the leading website for free streaming video, with a vast number of users. It uses a Flash-based web player to stream and show video files. Registered users may upload an unlimited amount of video and build their own personal profiles. YouTube claims that its users watch hundreds of millions of videos, and upload hundreds of thousands, every day.[15]
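As a small illustration of the message-digest check mentioned above for verifying a downloaded file, here is a minimal Python sketch using the standard hashlib module. SHA-256 is used instead of MD5 because MD5 is no longer considered collision-resistant; the file name and the expected value are placeholders for whatever a real publisher would provide.

    # Compare a downloaded file's digest with the one the publisher announces.
    import hashlib

    def sha256_of(path, chunk_size=64 * 1024):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):    # read the file in pieces
                digest.update(chunk)
        return digest.hexdigest()

    expected = "replace-with-the-digest-published-by-the-provider"   # placeholder
    actual = sha256_of("downloaded-file.iso")                        # placeholder file name
    print("match" if actual == expected else "MISMATCH - do not trust this file")

A matching digest shows the file arrived intact; proving who published it additionally requires a digital signature, as the text notes.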

Access
See also: Internet access worldwide, List of countries by number of Internet users, English on the Internet, Global Internet usage, and Unicode

Graph of Internet users per 100 inhabitants between 1997 and 2007 by International Telecommunication Union

The prevalent language for communication on the Internet is English. This may be a result of the origin of the Internet, as well as English's role as a lingua franca. It may also be related to the poor capability of early computers, largely originating in the United States, to handle characters other than those in the English variant of the Latin alphabet. After English (28% of Web visitors) the most requested languages on the World Wide Web are Chinese (23%), Spanish (8%), Japanese (5%), Portuguese and German (4% each), Arabic, French and Russian (3% each), and Korean (2%).[16] By region, 42% of the world's Internet users are based in Asia, 24% in Europe, 14% in North America, 10% in Latin America and the Caribbean taken together, 5% in Africa, 3% in the Middle East and 1% in Australia/Oceania.[17] The Internet's technologies have developed enough in recent years, especially in the use of Unicode, that good facilities are available for development and communication in the world's widely used languages. However, some glitches such as mojibake (incorrect display of some languages' characters) still remain.

Common methods of Internet access in homes include dial-up, landline broadband (over coaxial cable, fiber optic or copper wires), Wi-Fi, satellite and 3G cell phone technology. Public places to use the Internet include libraries and Internet cafes, where computers with Internet connections are available. There are also Internet access points in many public places such as airport halls and coffee shops, in some cases just for brief use while standing. Various terms are used, such as "public Internet kiosk", "public access terminal", and "Web payphone". Many hotels now also have public terminals, though these are usually fee-based. These terminals are widely used for purposes such as ticket booking, bank deposits and online payments. Wi-Fi provides wireless access to computer networks, and therefore can do so to the Internet itself.
Hotspots providing such access include Wi-Fi cafes, where would-be users need to bring their own wireless-enabled devices such as a laptop or PDA. These services may be free to all, free to customers only, or fee-based. A hotspot need not be limited to a confined location. A whole campus or park, or even an entire city can be enabled. Grassroots efforts have led to wireless community networks. Commercial Wi-Fi services covering large city areas are in place in London, Vienna, Toronto, San Francisco, Philadelphia, Chicago and Pittsburgh. The Internet can then be accessed from such places as a park bench.[18] Apart from Wi-Fi, there have been experiments with proprietary mobile wireless networks like Ricochet, various high-speed data services over cellular phone networks, and fixed wireless services. High-end mobile phones such as smartphones generally come with Internet access through the phone network. Web browsers such as Opera are available on these advanced handsets, which can also run a wide variety of other Internet software. More mobile phones have Internet access than PCs, though this is not as widely used.[citation needed] An Internet access provider and protocol matrix differentiates the methods used to get online.
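The mojibake mentioned earlier in this section is easy to reproduce: text encoded with one character encoding but decoded with another comes out scrambled. A minimal Python sketch, where the phrase is an arbitrary example:

    # How mojibake arises: encode with one character set, decode with another.
    text = "Grüße från Tromsø"        # non-ASCII Latin characters

    raw = text.encode("utf-8")        # bytes as a server might send them
    print(raw.decode("utf-8"))        # correct: Grüße från Tromsø
    print(raw.decode("cp1252"))       # mojibake: GrÃ¼ÃŸe frÃ¥n TromsÃ¸ (Windows-1252 misread)

Unicode, and in particular the now-dominant UTF-8 encoding, removes most of these problems as long as sender and receiver agree on the encoding in use.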

Social impact
Main article: Sociology of the Internet

The Internet has enabled entirely new forms of social interaction, activities, and organizing, thanks to its basic features such as widespread usability and access. Social networking websites such as Facebook, Twitter and MySpace have created new ways to socialize and interact. Users of these sites are able to add a wide variety of information to pages, to pursue common interests, and to connect with others. It is also possible to find existing acquaintances, to allow communication among existing groups of people. Sites like LinkedIn foster commercial and business connections. YouTube and Flickr specialize in users' videos and photographs. In the first decade of the 21st century, the first generation was raised with widespread availability of Internet connectivity, bringing consequences and concerns in areas such as personal privacy and identity, and the distribution of copyrighted materials. These "digital natives" face a variety of challenges that were not present for prior generations.

The Internet has achieved new relevance as a political tool, leading to Internet censorship by some states. The presidential campaign of Howard Dean in 2004 in the United States was notable for its success in soliciting donations via the Internet. Many political groups use the Internet to achieve a new method of organizing in order to carry out their mission, giving rise to Internet activism. Some governments, such as those of Iran, North Korea, Myanmar, the People's Republic of China, and Saudi Arabia, restrict what people in their countries can access on the Internet, especially political and religious content.[citation needed] This is accomplished through software that filters domains and content so that they may not be easily accessed or obtained without elaborate circumvention.[original research?] In Norway, Denmark, Finland[19] and Sweden, major Internet service providers have voluntarily agreed, possibly to avoid such an arrangement being turned into law, to restrict access to sites listed by authorities. While this list of forbidden URLs is only supposed to contain addresses of known child pornography sites, the content of the list is secret.[citation needed] Many countries, including the United States, have enacted laws against the possession or distribution of certain
material, such as child pornography, via the Internet, but do not mandate filtering software. There are many free and commercially available software programs, called content-control software, with which a user can choose to block offensive websites on individual computers or networks, in order to limit a child's access to pornographic material or depictions of violence.

The Internet has been a major outlet for leisure activity since its inception, with entertaining social experiments such as MUDs and MOOs being conducted on university servers, and humor-related Usenet groups receiving much traffic. Today, many Internet forums have sections devoted to games and funny videos; short cartoons in the form of Flash movies are also popular. Over 6 million people use blogs or message boards as a means of communication and for the sharing of ideas. The pornography and gambling industries have taken advantage of the World Wide Web, and often provide a significant source of advertising revenue for other websites.[citation needed] Although many governments have attempted to restrict both industries' use of the Internet, this has generally failed to stop their widespread popularity.[citation needed]

One main area of leisure activity on the Internet is multiplayer gaming. This form of recreation creates communities, where people of all ages and origins enjoy the fast-paced world of multiplayer games. These range from MMORPGs to first-person shooters, from role-playing games to online gambling. This has revolutionized the way many people interact[citation needed] while spending their free time on the Internet. While online gaming has been around since the 1970s,[citation needed] modern modes of online gaming began with subscription services such as GameSpy and MPlayer. Non-subscribers were limited to certain types of game play or certain games. Many people use the Internet to access and download music, movies and other works for their enjoyment and relaxation. Free and fee-based services exist for all of these activities, using centralized servers and distributed peer-to-peer technologies. Some of these sources exercise more care with respect to the original artists' copyrights than others.

Many people use the World Wide Web to access news, weather and sports reports, to plan and book vacations, and to find out more about their interests. People use chat, messaging and e-mail to make and stay in touch with friends worldwide, sometimes in the same way as some previously had pen pals. The Internet has seen a growing number of Web desktops, where users can access their files and settings via the Internet. Cyberslacking can become a serious drain on corporate resources; the average UK employee spent 57 minutes a day surfing the Web while at work, according to a 2003 study by Peninsula Business Services.[20] Internet addiction disorder is excessive computer use that interferes with daily life. Some psychologists believe that ordinary Internet use has other effects on individuals, for instance interfering with the deep thinking that leads to true creativity.

What do I need to know?
A browser is a program on your computer that enables you to search ("surf") and retrieve information on the World Wide Web (WWW), which is part of the Internet. The Web is simply a large number of computers linked together in a global network that can be accessed using an address (URL, Uniform Resource Locator, e.g. http://www.veths.no for the Oslo Veterinary
School), in the same way that you can phone anyone in the world given their telephone number. URLs are often long and therefore easy to type incorrectly. They all begin with http://, and many (but not all) begin with http://www. In many cases the first part (http://, or even http://www.) can be omitted, and you will still be able to access the page. Try this with http://www.cnn.com. URLs are constructed in a standard fashion. This may be of use to you. Take, for example, the address of this page:
http://oslovet.veths.no/teaching/internet/basics.html

The ".no" indicates that the server is in Norway. The page you have accessed is called basics.html, and it resides in a folder on the server called "internet", which is in the folder called "teaching". If the URL that you type does not work, and you have typed it correctly (no mistakes are allowed!), the reason may be that the host has renamed the web page, or moved it to another folder on the server, or you are not allowed access to that level. Try removing the text of the URL stepwise from the right-hand end in this example, until you reach the main page:
http://www.bbc.co.uk/info/purpose/public_purposes/index.shtml.
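The stepwise trimming described above can also be automated. The minimal Python sketch below, using the standard urllib.parse module, splits the BBC address from the example into its parts and then removes one path segment at a time:

    # Pick a URL apart and step back up through its folder hierarchy.
    from urllib.parse import urlparse

    url = "http://www.bbc.co.uk/info/purpose/public_purposes/index.shtml"
    parts = urlparse(url)
    print(parts.scheme)    # http
    print(parts.netloc)    # www.bbc.co.uk   (the server)
    print(parts.path)      # /info/purpose/public_purposes/index.shtml

    # Remove one path segment at a time, as the text suggests doing by hand.
    segments = parts.path.strip("/").split("/")
    while segments:
        segments.pop()
        print(f"{parts.scheme}://{parts.netloc}/" + "/".join(segments))

Each line printed by the loop is one of the addresses you would otherwise type by hand while working your way back towards the site's main page.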

It is possible, in many cases, to find your way back down through the hierarchy to the page you were interested in. You don't need to know how the telephone network functions to be able to make a phone call. However, you ought to know how to use your telephone and the features (software) it contains. Your computer is the equivalent of the telephone, and a browser is the equivalent of the software that modern telephones contain. (A browser can also be used to handle electronic mail, to create and edit information on the Internet, as I have done here, and to contact discussion groups. This presentation is limited to the use of browsers to surf the World Wide Web.)

Searching the Web
If you don't know the telephone number of the person you wish to ring, you need a telephone directory. The Web provides two methods of searching for pages providing information:
• sites presenting web pages sorted by category and subcategories, e.g. Yahoo (several sites, including http://www.yahoo.com and http://www.yahoo.no)
• sites offering search engines that return lists of web pages containing text that matches a search word or string, e.g. Google (http://www.google.com), AltaVista (http://www.altavista.com) and FAST Search (http://www.alltheweb.com).

Many web sites offer both, or a combination of, these alternatives. Before you conduct a search, it is important to consider, among other things, the following points:

1. Is your choice of search term adequate, too restrictive or too general?
2. Is the search you have planned best suited to a search engine that categorizes web sites, so that you can browse through appropriate subcategories when the first results are returned?
3. Are you more interested in using a search engine that simply returns all the web pages it has found containing the search term?
4. Have you read the Search Help pages that most search sites offer? These will tell you how the search engine conducts the search, and therefore how you ought to plan your search.
5. Bear in mind that engines differ in their coverage of the Internet, their speed, and whether they are largely compiled manually by people or automatically by 'robots' that scan the Internet.

A search strategy must include knowledge of how the search engine you have planned to use handles Boolean logic and other similar search terms, e.g.
• transgenic AND mice will find all pages covering transgenic mice, but not pages that only mention transgenic rats
• transgenic NOT mice will return pages on all species other than mice
• "transgenic mice" will find pages that contain the phrase "transgenic mice", i.e. where the words are adjacent in the text, but will not return a page containing the text "transgenic rodents, including mice", for which transgenic NEAR mice would be necessary
• transgen* will return occurrences of transgene, transgenesis and transgenic (thereby increasing your chances of finding pages you are interested in), but will also return pages featuring the word 'transgender', which is probably not what you were looking for!
(A short worked sketch of how these operators narrow a result set follows at the end of this section.)

N.B. Not all search engines support all these options, some support many more, and all of them have a "default" function (e.g. AND or OR) which you must check before you start. To illustrate the enormous implications that this may have for your search results, try out the following search strings in the AltaVista or Google search engines and note the number of web pages returned for each alternative:
• Karina Smith
• "Karina Smith"
• Karina and Smith
• KARINA and SMITH
• Karin* Smith
• Karin*Smith

Excellent reviews of these processes have been written by information specialist Krys Bottrill. These cover:
• Basic principles when searching the Web
• Choice of search terms and strategies
• Comparison of search engines on the Web
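As promised above, here is a toy illustration in Python of how the Boolean operators narrow a result set. The "pages" are just short invented strings standing in for documents a search engine has indexed; a real engine works over an index of billions of pages, but the logic of AND, NOT, phrase and wildcard matching is the same in spirit.

    # Toy Boolean matching over a handful of invented "pages".
    import re

    pages = [
        "methods for producing transgenic mice",
        "welfare of transgenic rats in the laboratory",
        "transgenic rodents, including mice, as disease models",
        "transgender rights and the law",
    ]

    def words(text):
        return set(re.findall(r"[a-z]+", text.lower()))

    hits_and = [p for p in pages if {"transgenic", "mice"} <= words(p)]       # transgenic AND mice
    hits_not = [p for p in pages
                if "transgenic" in words(p) and "mice" not in words(p)]       # transgenic NOT mice
    hits_phrase = [p for p in pages if "transgenic mice" in p.lower()]        # "transgenic mice"
    hits_prefix = [p for p in pages
                   if any(w.startswith("transgen") for w in words(p))]        # transgen*

    print(len(hits_and), len(hits_not), len(hits_phrase), len(hits_prefix))   # -> 2 1 1 4

Note how the phrase search misses the page containing "transgenic rodents, including mice", and how the wildcard also pulls in the "transgender" page, exactly the behaviours described in the bullet points above.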
