
Kimathi University College Of Technology INTRANET TECHNOLOGY NOTES

TABLE OF CONTENTS

1. INTRODUCTION TO INTRANET
   BENEFITS OF INTRANETS
   EXTRANET
   INTRANET TECHNOLOGY
   INTRANETS
   INTRANET SITE
   THE INTRANET INFRASTRUCTURE
2. PROXY SERVER
   WHAT IS A PROXY SERVER?
   TYPES AND FUNCTIONS
3. FIREWALLS
   FIREWALL BRIEFS
   WHAT ARE THE BASIC TYPES OF FIREWALLS?
4. WEB SECURITY
   WHAT IS COMPUTER SECURITY?
   WHAT IS WEB SECURITY?
   VULNERABILITIES
5. ACCESS CONTROL
6. CLIENT SERVER ARCHITECTURE
   CLIENT SERVER ARCHITECTURE
   N-TIER CLIENT-SERVER ARCHITECTURE
   THREE-TIER ARCHITECTURE
7. INTERNET INFORMATION SERVICES (IIS)
   VERSIONS
   HISTORY
   VERSION 7.0
   VERSION 7.5
   IIS MEDIA PACK
   ADVANTAGES
   DISADVANTAGES
   EXAMPLE
   OVERVIEW
   SUPPORTED BROWSERS
   OTHER USES
   PRODUCT OVERVIEW
   WINDOWS LIVE ID WEB AUTHENTICATION
   WINDOWS LIVE ID SUPPORT FOR WINDOWS CARDSPACE
   WINDOWS LIVE ID SUPPORT FOR OPENID
8. WEB BROWSER
   HISTORY
   CURRENT WEB BROWSERS
   PROTOCOLS AND STANDARDS
9. INTERNET EXPLORER
   HISTORY
   FEATURES
10. INTERNET EXPLORER 8 BETA 2 RUNNING ON WINDOWS VISTA
11. INTERNET EXPLORER 9
12. INTERNET EXPLORER ADMINISTRATION KIT
    IEAK 7
    COMPARE WINDOWS
13. WEB DEVELOPMENT
    WEB DEVELOPMENT AS AN INDUSTRY
    TYPICAL AREAS
    METHOD
14. WEB HOSTING
    HOSTING YOUR OWN WEB SITE
    USING AN INTERNET SERVICE PROVIDER
    THINGS TO CONSIDER WITH AN ISP
    WHAT IS THE WORLD WIDE WEB?
    HOW DOES THE WWW WORK?
    HOW DOES A BROWSER FETCH A WEB PAGE?
    HOW DOES A BROWSER DISPLAY A WEB PAGE?
    WHAT IS A WEB SERVER?
    WHAT IS AN INTERNET SERVICE PROVIDER?
15. THE COMMON GATEWAY INTERFACE (CGI)
    WHAT IS CGI?
16. HYPERTEXT TRANSFER PROTOCOL SECURE (HTTPS)
    MAIN IDEA
17. INTRODUCTION TO SSL
18. HYPERTEXT TRANSFER PROTOCOL

Compiled by Mrs. Wamwati Catherine


INTRODUCTION TO INTRANET

An intranet is a private computer network that uses Internet technologies to securely share any part of an organization's information or operational systems with its employees. Sometimes the term refers only to the organization's internal website, but it is often a more extensive part of the organization's computer infrastructure, in which private websites are an important component and focal point of internal communication and collaboration.

An intranet is built from the same concepts and technologies used for the Internet, such as client-server computing and the Internet Protocol Suite (TCP/IP). Any of the well-known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data.

An intranet can be understood as a private version of the Internet, or as a private extension of the Internet confined to an organization. Intranets differ from extranets in that the former are generally restricted to employees of the organization, while extranets may also be accessed by customers, suppliers, or other approved parties. Extranets extend a private network onto the Internet with special provisions for access, authorization and authentication.

An organization's intranet does not necessarily have to provide access to the Internet. When such access is provided, it is usually through a network gateway with a firewall, shielding the intranet from unauthorized external access. The gateway often also implements user authentication, encryption of messages, and virtual private network (VPN) connectivity for off-site employees to access company information, computing resources and internal communications.
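The gateway-and-firewall arrangement described above ultimately rests on distinguishing internal from external addresses. As a minimal illustration (not part of the notes themselves), Python's standard ipaddress module can test whether a client address falls in the RFC 1918 private ranges that intranets typically use; a real gateway firewall of course enforces this at the network layer.

```python
import ipaddress

def is_intranet_client(addr: str) -> bool:
    """Return True if addr lies in a private (RFC 1918) range,
    i.e. the address space typically used inside an intranet."""
    return ipaddress.ip_address(addr).is_private

# Typical internal addresses are private; a public web client is not.
print(is_intranet_client("10.1.2.3"))       # True
print(is_intranet_client("192.168.0.10"))   # True
print(is_intranet_client("93.184.216.34"))  # False
```

This only illustrates the address-space distinction; real shielding of the intranet also involves packet filtering, authentication, and encryption as the notes describe.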
Increasingly, intranets are being used to deliver tools and applications, e.g., collaboration (to facilitate working in groups and teleconferencing), sophisticated corporate directories, sales and customer relationship management tools, project management, etc., to advance productivity. Intranets are also being used as corporate culture-change platforms. For example, large numbers of employees discussing key issues in an intranet forum application could lead to new ideas in management, productivity, quality, and other corporate issues.

In large intranets, website traffic is often similar to public website traffic and can be better understood by using web metrics software to track overall activity. User surveys also improve intranet website effectiveness. Intranet user-experience, editorial, and technology teams work together to produce in-house sites. Most commonly, intranets are managed by the communications, HR or CIO departments of large organizations, or some combination of these.

Benefits of intranets

1. Workforce productivity: Intranets can help users to locate and view information faster and use applications relevant to their roles and responsibilities. With the help of a web browser interface, users can access data held in any database the organization wants to make available, anytime and - subject to security provisions - from anywhere within the company workstations, increasing employees' ability to perform their jobs faster, more accurately, and with confidence that they have the right information. It also helps to improve the services provided to the users.
2. Time: With intranets, organizations can make more information available to employees on a "pull" basis (i.e., employees can link to relevant information at a time which suits them) rather than being deluged indiscriminately by emails.
3. Communication: Intranets can serve as powerful tools for communication within an organization, vertically and horizontally. From a communications standpoint, intranets are useful to communicate strategic initiatives that have a global reach throughout the organization. The type of information that can easily be conveyed is the purpose of the initiative, what the initiative is aiming to achieve, who is driving the initiative, results achieved to date, and who to speak to for more information. By providing this information on the intranet, staff have the opportunity to keep up-to-date with the strategic focus of the organization.
4. Web publishing: Web publishing allows 'cumbersome' corporate knowledge to be maintained and easily accessed throughout the company using hypermedia and Web technologies. Examples include employee manuals, benefits documents, company policies, business standards, newsfeeds, and even training, all of which can be accessed using common Internet standards (Acrobat files, Flash files, CGI applications). Because each business unit can update the online copy of a document, the most recent version is always available to employees using the intranet.
5. Business operations and management: Intranets are also being used as a platform for developing and deploying applications to support business operations and decisions across the internetworked enterprise.
6. Cost-effective: Users can view information and data via a web browser rather than maintaining physical documents such as procedure manuals, internal phone lists and requisition forms.
7. Promote common corporate culture: Every user is viewing the same information within the Intranet.
8. Enhance collaboration: With information easily accessible by all authorised users, teamwork is enabled.
9. Cross-platform capability: Standards-compliant web browsers are available for Windows, Mac, and UNIX.

EXTRANET

An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization's information or operations with suppliers, vendors, partners, customers or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company (e.g., over the Internet). It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a pre-approved set of other companies, business-to-business (B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) involves known server(s) of one or more companies, communicating with previously unknown consumer users.

Briefly, an extranet can be understood as an intranet mapped onto the public Internet or some other transmission system not accessible to the general public, but managed by more than one company's administrator(s). For example, military networks of different security levels may map onto a common military radio transmission system that never

connects to the Internet. Any private network mapped onto a public one is a virtual private network (VPN). In contrast, an intranet is a VPN under the control of a single company's administrator(s).

INTRANET TECHNOLOGY

An Intranet is a localized, LAN-based communication system that enhances intra-company communications using Internet technology. Such a simple summary does not completely cover all the possibilities for real-time business collaboration that are available. When an organization thinks of building an Intranet, the most common mistake made is overlooking the wealth of features available for optimizing business strategies while keeping within budget. Looking at this technology from the top level, some of the most common functions that ninety percent of companies perform for intra-company communications could easily be made more efficient using Intranet technology.

Internet technologies have enabled us to communicate large amounts of information virtually anywhere in the world almost instantaneously. Intranets enable organizations to alleviate many typical problems associated with information access, authoring, processing and delivery. An Intranet employs the same communications and computer technologies as the Internet, except within the privacy and security of your organization. An Intranet offers near-effortless input and retrieval of corporate information, e-mail integration, and a gateway to the Internet.

Organizations have definitely improved data acquisition with the introduction of the electronically assisted workforce, but with the same stroke have created another problem: the location and retrieval of data. These hindrances could easily be eliminated with a properly suited Intranet.

1. Using Sales Force Automation, a salesperson can update corporate prices on demand
2. Collaboration between departments and individuals can be accomplished in real time
3. Activity of company representatives can be documented for improving efficiency
4. Press releases can be linked for real-time feedback
5. Sales reports may be generated on demand
6. Corporate handbooks and policies may be adjusted and updated with limited distributive effort
7. The company can publish managerial contact information for those that require immediate access
8. Online education and policy review can be updated to suit the situation as your organization grows
9. Company briefings can be distributed for informational purposes
10. Employee time logs may be filled out either on location or remotely

This list is just a brief hint at the possibilities open to you and your organization's best interests.


Planning and creating an intranet

Most organizations devote considerable resources to the planning and implementation of their intranet, as it is of strategic importance to the organization's success. Some of the planning would include topics such as:

• The purpose and goals of the intranet
• Persons or departments responsible for implementation and management
• Implementation schedules and phase-out of existing systems
• Defining and implementing security of the intranet
• How to ensure it is within legal boundaries and other constraints
• Level of interactivity (e.g. wikis, on-line forms) desired
• Is the input of new data and updating of existing data to be centrally controlled or devolved?

These are in addition to the hardware and software decisions (like Content Management Systems), participation issues (like good taste, harassment, confidentiality), and features to be supported. The actual implementation would include steps such as:

1. User involvement to identify users' information needs.
2. Setting up web server(s) with the appropriate hardware and software.
3. Setting up web server access using a TCP/IP network.
4. Installing required user applications on computers.
5. Creation of a document framework for the content to be hosted.
6. User involvement in testing and promoting use of the intranet.
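Steps 2-5 above can be sketched with nothing more than the Python standard library. The host address, port, and document root below are hypothetical, and a production intranet would normally run a full web server product (such as IIS, covered later in these notes) rather than this minimal sketch:

```python
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_intranet_server(host: str, port: int, docroot: str = "."):
    """Bind an HTTP server to an internal address (step 3) and serve
    the document framework placed under `docroot` (step 5)."""
    handler = partial(SimpleHTTPRequestHandler, directory=docroot)
    return HTTPServer((host, port), handler)

if __name__ == "__main__":
    # 10.0.0.5 stands in for an address on the private LAN; any
    # browser on the intranet can then fetch pages over TCP/IP.
    server = make_intranet_server("10.0.0.5", 8080)
    print("Serving intranet content on %s:%d" % server.server_address)
    server.serve_forever()
```

Clients on the LAN would then reach the content with an ordinary browser (step 4), and step 6 amounts to having users exercise and evaluate exactly this kind of setup.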

The Difference between Intranet and Internet Design

Your intranet and your public website on the open Internet are two different information spaces and should have two different user interface designs. It is tempting to try to save design resources by reusing a single design, but it is a bad idea to do so because the two types of site differ along several dimensions:

• Users differ. Intranet users are your own employees who know a lot about the company, its organizational structure, and special terminology and circumstances. Your Internet site is used by customers who will know much less about your company and also care less about it.
• The tasks differ. The intranet is used for everyday work inside the company, including some quite complex applications; the Internet site is mainly used to find out information about your products.
• The type of information differs. The intranet will have many draft reports, project progress reports, human resource information, and other detailed information, whereas the Internet site will have marketing information and customer support information.
• The amount of information differs. Typically, an intranet has between ten and a hundred times as many pages as the same company's public website. The difference is due to the extensive amount of work-in-progress that is documented on the intranet and the fact that many projects and departments never publish anything publicly even though they have many internal documents.
• Bandwidth and cross-platform needs differ. Intranets often run between a hundred and a thousand times faster than most Internet users' Web access, which is stuck at low-band or mid-band, so it is feasible to use rich graphics and even multimedia and other advanced content on intranet pages. Also, it is sometimes possible to control what computers and software versions are supported on an intranet, meaning that designs need to be less cross-platform compatible (again allowing for more advanced page content).

Most basically, your intranet and your website are two different information spaces. They should look different in order to let employees know when they are on the internal net and when they have ventured out to the public site. Different looks will emphasize the sense of place and thus facilitate navigation. Also, making the two information spaces feel different will facilitate an understanding of when an employee is seeing information that can be freely shared with the outside and when the information is internal and confidential.

An intranet design should be much more task-oriented and less promotional than an Internet design. A company should only have a single intranet design, so users only have to learn it once. Therefore it is acceptable to use a much larger number of options and features on an intranet, since users will not feel intimidated and overwhelmed as they would on the open Internet where people move rapidly between sites. (I know of a frighteningly large number of companies with multiple intranet homepages and multiple intranet styles: Step 1 is to get rid of that in favor of a unified intranet.)

An intranet will need a much stronger navigational system than an Internet site because it has to encompass a larger amount of information. In particular, the intranet will need a navigation system to facilitate movement between servers, whereas a public website only needs to support within-site navigation.

Managing the Intranet

There are three ways of managing an intranet:

1. A single, tightly managed server: only approved documents get posted, and the site has a single, well-structured information architecture and navigation system under the control of a single designer. Even though this approach maximizes usability of the information that passes the hurdles and gets posted, this is not the best way to build a corporate information infrastructure, because the central choke point delays the spread of new and useful information. A totalitarian intranet will cause you to miss too many opportunities.
2. A mini-Internet: multiple servers are online but are not coordinated, complete chaos reigns, you have to use "resource discovery" methods like spiders to find out what is on your own intranet, there is no consistent design (everybody does their own pages), and no information architecture. This approach might seem to increase opportunities for communication across the company, but in reality does not do so, since people will be incapable of finding most of the information in an anarchy.
3. Managed diversity: many servers are in use, but pages are designed according to a single set of templates and interface standards; the entire intranet follows a well-planned (and usability-tested) information infrastructure that facilitates navigation. This is my preferred approach. Managed diversity will probably characterize many aspects of the coming network economy, but we have less experience with this approach than with more traditional top-down management.
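The "managed diversity" option can be made concrete with a small sketch: authors on many servers produce only page bodies, while a single shared template supplies the common navigation and layout. The template fields and page content below are invented for illustration:

```python
from string import Template

# One company-wide template: every departmental server renders through
# it, so all intranet pages share the same navigation and identity.
PAGE_TEMPLATE = Template(
    "<html><head><title>$title - Intranet</title></head>"
    "<body><nav>Home | Departments | Search</nav>"
    "<h1>$title</h1>$body</body></html>"
)

def render_page(title: str, body: str) -> str:
    """Render a department's content through the shared template."""
    return PAGE_TEMPLATE.substitute(title=title, body=body)

print(render_page("Travel Policy", "<p>Book through the portal.</p>"))
```

Departments keep authoring autonomy over `$body`, while the interface standard lives in one place; this is the middle ground between the tightly managed server and the mini-Internet.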


Just one example of improved usability from taking advantage of managed diversity: an intranet search engine can take advantage of weighted keywords to increase precision. Weights are impossible on the open Internet, since every site about widgets will claim to have the highest possible relevance weight for the keyword "widget." On an intranet, even a light touch of information management should ensure that authors assign weights reasonably fairly and that they use, say, a controlled vocabulary correctly to classify their pages.

Extranets: Blended Design

An extranet is a special set of pages that are made available to selected business partners such that they can directly access computational resources inside your company. Typical examples include allowing customers to check on the status of their orders (e.g., when will my urgent order ship? did you or did you not receive our payment?) and allowing approved vendors to look at requests for proposals.

The extranet is a blend of the public Internet and the closed intranet and needs to be designed as such. Fundamentally, an extranet is a part of the Internet, since it is accessed by people in many different companies who will be using your public website but will not have access to the truly internal parts of your intranet. Therefore, the visual style and main navigation options of the extranet should be visibly similar to the design of your Internet site: your business partners should feel that the two sites come from the same company. A subtle difference in the two styles (e.g., complementary color tones) will help emphasize the closed and confidential nature of the extranet. It will often be reasonable to have links from extranet pages to pages on the public website, but you should not have links that point to your private intranet, since your business partners will not be able to follow such links.
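Returning to the weighted-keyword search idea above, a minimal sketch of how it might work (page names and weights are invented for illustration): authors assign each page a relevance weight per controlled-vocabulary term, and the intranet search engine ranks matches by those weights.

```python
# Hypothetical author-assigned weights (0-10) per controlled-vocabulary term.
PAGES = {
    "widget-handbook": {"widget": 9, "assembly": 4},
    "widget-faq":      {"widget": 6},
    "travel-policy":   {"travel": 8, "expenses": 7},
}

def search(keyword: str) -> list:
    """Return names of matching pages, highest author-assigned weight first."""
    hits = [(weights[keyword], name)
            for name, weights in PAGES.items() if keyword in weights]
    return [name for _, name in sorted(hits, reverse=True)]

print(search("widget"))  # ['widget-handbook', 'widget-faq']
```

This only works because a light editorial hand keeps the weights honest; on the open Internet, as noted above, every site would simply claim the maximum weight.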
Actual use of the extranet shares many properties with intranet use: the users will be using the extranet as a major part of their everyday job, so it will be possible to use specialized language and relatively complex interactions. It may even be reasonable to assume some amount of training on the part of the users, since they will be motivated to improve the efficiency of their own business by making better use of your extranet.

The training needs and the complexity of your extranet cannot be too demanding, however, since you normally cannot assume that extranet users are dedicated to the use of your particular design and nothing else. A typical extranet user may be a corporate purchasing agent who may need to deal with your extranet as well as the extranets of, say, 50 other companies where he or she has placed orders. Your extranet must be fairly easy to use if this purchasing agent is to remember its features and options from one visit to the next.

Is there a difference between designing a Web site for the Internet and the Intranet?

These are two different environments, and it is important to consider the environment when designing a Web site. What is the difference in environment between the two? The Internet is characterized by the following:


• Slow access speeds (e.g. 56Kbps dial-up connectivity)
• Different types of web browsers are used to view the website (e.g. Netscape, IE, Opera)
• Different types of operating systems are used to view the website (e.g. Windows, Mac)
• Global audience (e.g. multilingual, different cultures)

Intranets are characterized by the following:

• Faster access speeds (e.g. 10Mbps LAN connectivity)
• Standardized type of browser, with minimal or no compatibility issues
• Standardized type of operating systems
• Primarily local audience

How do I design a Web site for the Internet? Designing a Web site for the Internet is more difficult than designing one for an Intranet environment. In general, a Web site designed for the Internet will work well in an Intranet environment. Considerations for designing an Internet Web site are:
• Small file size for fast downloading
• Designed to display and function correctly on a wide range of browsers
• Avoid use of frames; search engines have difficulty indexing frames
• Web site content must consider the global audience

How do I design a Web site for the Intranet? Designing a Web site for an Intranet is much easier than designing for the Internet. The technologies used in an Intranet setting are more standardized and controlled, unlike the Internet, where Web sites are accessed by many different types of technology. Considerations for designing an Intranet Web site are:
• The file size can be bigger due to faster access speeds
• You can design for a specific type of browser
• Use of frames is acceptable

Note that in global companies, Intranet Web sites are connected via slow WAN connections. If your Intranet will be viewed by your local and global offices, it would be a good idea to design the Web site following the Internet criteria.

• Internet - A global network of computers.
• Intranet - A network of computers limited to a company or organization.

INTRANETS

An Intranet is a communication infrastructure. It is based on the communication standards of the Internet and the content standards of the World-Wide Web. Therefore, the tools used to create an Intranet are identical to those used for Internet and Web applications. The distinguishing feature of an Intranet is that access to information published on the Intranet is restricted to clients in the Intranet group. Historically this has been accomplished through the use of LANs protected by Firewalls.

Three Sources of Information

At least three sources of content quickly emerge on enterprise Intranets: formal, project/group, and informal.



The formal information is the officially sanctioned and commissioned information of the enterprise. It usually has been reviewed for accuracy, currency, confidentiality, liability and commitment. This is the information with which the formal management infrastructure is most concerned.

Project/group information is intended for use within a specific group. It may be used to communicate and share ideas, coordinate activities or manage the development and approval of content that eventually will become formal. Project/group information generally is not listed in the enterprise-wide directories and may be protected by passwords or other restrictions if general access might create problems.

Informal information begins to appear on the Intranet when authors and users discover how easy it is to publish within the existing infrastructure. Informal information is not necessarily the same thing as personal home pages. A personal folder or directory on an Intranet server can serve as a repository for white papers, notes and concepts that may be shared with others in the enterprise to further common interests, for the solicitation of comments or for some other reason. Instead of making copies, the URL can be given to the interested parties, and the latest version can be read and tracked as it changes. This type of informal information can become a powerful stimulus for the collaborative development of new concepts and ideas.









Two Types of Pages

There are two basic types of pages: content pages and broker pages. Content pages contain the information of value required by a user. Broker pages help users find the content pages appropriate for their current requirements.





Content pages can take many forms. They may be static pages, like the ones you are reading here, or they may be active pages where the page content is generated "on the fly" from a database or other repository of information. Content pages generally are owned by an individual. Over time, expect the "form and sense" of content pages to change as more experience is gained in the areas of non-linear documents (hyper-linking), multimedia, modular content and integration of content and logic using applets.

Broker pages also come in more than one form, but all have the same function: to help users find relevant information. Good broker pages serve an explicitly defined audience or function. Many of the pages with which we already are familiar are broker pages. A hyperlink broker page contains links to other pages, in context. It also may have a short description of the content to which it is pointing, to help the user evaluate the possibilities. On the other hand, a search-oriented broker page is not restricted to the author's scope, but it also does not provide the same level of context to help the user formulate the appropriate question.

Combination search and hyperlink broker pages are common today. Search engines return the "hits" as a hyperlink broker page with weightings and first lines for context, and hyperlink broker pages sometimes end in a specific category that is refined by searching that defined space. It is unlikely that hyperlink broker pages ever will be generated entirely by search engines and agents, because the context that an expert broker provides often contains subjective or expert value in its own right. After all, not all content is of equal quality or value for specific purposes, and even context sensitive word searches cannot provide these qualitative assessments. As the amount of raw
content increases, we will continue to need reviewers to screen which competing content is most useful, or the official source, for workers in our enterprise.

A special use of broker pages is for assisting with the management of web content. There are several specific instances of these management pages. We call one instance the "Enterprise Map" because collectively these broker pages form a hyperlinked map of all the formal content in the organization. Other sets are used for project management, functional management and to support content review cycles. The use of broker pages for each of these management functions is discussed in more detail in the next section.

INTRANET SITE

In the IT domain of activity, we can define a company intranet site as a location in a network – within an organization – that uses Internet technologies, where any information that the company wishes to make available to its employees and, sometimes, to other persons can be found, accessed and shared (with or without restrictions). In some large companies, intranet sites are used for e-learning and as a way for employees to access and "discover" company news. Because intranets use Internet technologies, one can find Internet protocols such as TCP/IP and HTTP used to transfer data. Intranet sites are not limited to a particular location and, as a consequence, any office on any continent can be connected to the same intranet. Very often, intranet sites are provided with links to Internet sites, and may use public networks to transfer data. Intranet sites are implemented in all kinds of organizations in order to conserve time and money.

Intranet site construction

The company intranet site is a powerful communication instrument because it offers all departments the opportunity to publish online the information needed to successfully run the business.
The company intranet is a two-way application: information can be delivered to employees and employees can send comments, order forms, and feedback through it – fig.1.

Fig.1. Order form within an intranet


The construction of an intranet site is a challenge which can be "condensed" as follows:
• access to information is constrained to a limited number of electronic forms
• the intranet site has to integrate data/information from different sources
• the intranet site is to facilitate rapid access to the latest news and/or to processes which need immediate solutions
• the intranet site is to provide services permanently (24 hours a day)
• the intranet site is to provide different levels of security for different categories of users
• the intranet site is to concentrate the data/information in a single place, easy to access, upgrade and correct in any situation

The construction of an intranet site is to be taken into consideration only when there are proper answers to the following questions:
• How are intranet sites used in other companies?
• What are the objectives in using the intranet site?
• What courses are needed before and after the implementation?
• What methods are to be used in order to measure the positive/negative effects of intranet site implementation?
• What attitude will the employees have regarding this new internal communication technology?
• How will the communication process between departments be affected?
• In case of implementation, how are departments to be (re)organized?
• In case of implementation, what is the cost/performance ratio?

When referring to the technological aspect, we have the following possible questions:
• What is the intranet site structure?
• How is the content of each section and of each single page organized?
• What changes will occur in the company computer network(s)?
• How will the information applications currently used be integrated?
• What are the policies to limit access for different users?
• How can existing resources be integrated/reused?
• What HR problems does the usage of an intranet site imply?



Trends for intranet development

According to an Internet website, ten trends for intranet development have been identified:
1. Customers Are Becoming the Focus of the System
2. Delivering Information Where It's Needed
3. The Intranet Is Becoming a Utility
4. Integration into All Business Processes
5. More Interesting Applications
6. More Support for Collaboration
7. More Sophisticated Development Models
8. Less Is More
9. Creeping Knowledge Management
10. New Business Opportunities


The usage of an intranet site is to be considered because of both the expansion of the Internet and the development of Information Technology. To successfully run a business, a high volume of information is necessary, and this is not possible anymore without computers. As a consequence of these two major considerations, I found it absolutely necessary to construct (for didactic purposes) an intranet site – fig.2.

Fig.2. Company intranet tutorial – the main page as seen in the Faculty of Business' intranet

The tutorial is written in Romanian for Romanian users, and its purpose is to show students the main sections and the possible content of each page, with examples from real firms. To access this application (Web-based tutorial), one can connect to the Faculty of Business' intranet.

THE INTRANET INFRASTRUCTURE

Management Roles

The Intranet Infrastructure relies on four distinct roles for managing the formal content: the Web Administrator, publishers, editors and authors.



The Web Administrator is responsible for facilitating cooperative opportunities among the various organizations in the enterprise and administering the enterprise content management infrastructure. By contrast, the Webmaster is responsible for the technical infrastructure. The same person may serve in both roles, but to do so requires that she have both of the distinctly different skill sets and enough time to carry out both sets of responsibilities. The Web Administrator chairs the Enterprise Web Council.

Publishers determine what kinds of formal information will be created and maintained by their organization. They represent their organization on the Enterprise Web Council and may create and chair an Editorial Board within their own organization. The publishers own the processes and policies that both the enterprise and their organization require officially sanctioned information to follow. In larger organizations, they may delegate the monitoring and implementation to editors, but the responsibility remains with the publisher.






Editors are found in organizations that have multiple product lines or service areas. For example, Human Resources might have editors for Benefits, Compensation, Equal Opportunity and Staffing. In a line of business, the editor often is the primary marketing person for each product line. The editor determines what official information will be created for specific activities and manages the information creation and update process, including the formal review cycles.

Authors create the content.



The Enterprise Map

A structured set of broker pages can be very useful for managing the life cycle of published content. We call this the Enterprise Map, and while the primary audience for this set of broker pages is management, we have discovered that end users frequently use the Enterprise Map for browsing or to find content when their broker pages have failed them.

With the exception of the content pages at the bottom of the map, the Enterprise Map pages consist only of links. Each page corresponds to an organization committed to the creation and quality of a set of content pages. In today's organizations, commitments tend to aggregate into a hierarchical pyramid, but the mapping technique also could be applied to most any organizational model. The Enterprise Map also does not have to be based on organization. It could be a logical map where the top level is the mission, the next level the major focuses required to accomplish the mission, and so on, down to the content level. Since most large organizations are starting from a pyramidal accountability structure, that is the form of the example that follows.

Using the terminology above, the Enterprise Map begins with a top page, owned by the Web Administrator (representing the CIO and/or CEO). This page consists of a link to the Map Page of each line of business and major support organization in the enterprise; each of these pages is owned by the publisher for that organization. The Publisher Pages, in turn, consist of links to each of their Editors' Pages. The Editors' Pages may have additional pages or structure below them, created and maintained by the editor, that help organize the content, but ultimately these pages point to the actual content pages.

This model can scale to governments or large diversified companies. In a government organization, the Administrator's Page would point to all the Agencies, and the map would follow each agency structure to the content level.
Since each agency may be a large organization, each may have its own Administrator and Web Council. A major advantage of this mapping architecture is its flexibility. It can originate from the top down or the bottom up. If several government agencies developed their Intranets independently with this type of Enterprise Mapping structure, they can be linked together at any time in the future by creating the next-level map page. None of the existing Maps need to be changed. This flexibility is a result of the distributed decision-making, central coordination model on which the architecture is built.
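The link-only structure of the Map pages described above can be sketched as a tiny page generator. The organization names and URLs are invented for illustration; a real map would be maintained by the Web Administrator, publishers and editors.

```python
# Minimal sketch of generating Enterprise Map broker pages as HTML.
# Organization names and URLs below are hypothetical.

def broker_page(title, links):
    """Render one map page: a heading plus a list of hyperlinks."""
    items = "\n".join(f'<li><a href="{url}">{text}</a></li>' for text, url in links)
    return f"<h1>{title}</h1>\n<ul>\n{items}\n</ul>"

# Top page owned by the Web Administrator: one link per publisher page.
top = broker_page("Enterprise Map", [
    ("Human Resources", "/hr/map.html"),   # publisher page
    ("Manufacturing", "/mfg/map.html"),    # publisher page
])
print(top)
```

Linking independently developed maps later only requires generating one new next-level page of links; none of the existing pages change, which is exactly the flexibility claimed above.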


The Enterprise Map

The Map provides a commitment (or accountability) view of all the formal content in the enterprise. Management can start at their point in the map and follow the links to all the content which supports the functions for which they are responsible.

An Enterprise Map has several interesting characteristics. Once it is in place, authors and editors can self-publish, and the information automatically shows up in a logical structure. Also, content categories and even editor-level functions generally are not affected by reorganizations, because major product lines and service areas generally are not added or deleted. Most reorganizations shift responsibilities at higher levels in the Map. This means that when reorganization does occur, the Map can be adjusted quickly, by the new managers, by changing one or a few links. Content does not need to be moved around. The result is a very low maintenance path to all the formal enterprise content, without forcing publishing through a central authority that can quickly become a bottleneck.

Access to Database Information

Discrete, structured information still is managed best by a database management system. However, the quest for a universal user interface has led to the requirement for access to existing database information through a web browser. Three models of access can be identified:
• Automatic tailoring of page content
• User-specified database requests
• User-initiated database updates

From a technical standpoint, there are a number of ways these interfaces can be created. What is important is that access be provided to the content providers (knowledge workers) in a way that supports the distributed decision-making, enabling model rather than the centralized expertise model. This means that relatively naive users need to be able to incorporate database-managed data into their pages. A number of tools are beginning to emerge that move in this direction. One set combines a library of CGI scripts or Java scripts residing on the hosting web server with templates, wizards and "bots" incorporated into WYSIWYG authoring packages (e.g. Microsoft's FrontPage). The other set, coming from the database side, automatically converts
database schemas into hyperlinked web pages that allow users to browse and access the data from their web browser (e.g. Netscheme). When applications that merge these two functional approaches begin to appear, very powerful packages will be available to content providers who need to incorporate database information into their pages.

Creating an effective Intranet requires attention to the management infrastructure, the technical infrastructure and the content creation process. The focus of this paper has been on architecting a management infrastructure that supports content creation, maintenance and use in a distributed decision-making environment. The architecture and models outlined above describe a process rather than specific tools. Since we first described and implemented this architecture, Intranet tools have evolved at an unprecedented rate. Even so, today's tools have not made the need for a management architecture obsolete, because tools provide support, not the purpose or goals that define an organization. The evolution of Intranet tools will continue to make implementation and operation of many aspects of the architecture easier into the foreseeable future.

Intranets are rapidly becoming the primary information infrastructure for enterprises. To effectively utilize this infrastructure, we must become as proficient at managing content and coordinating our actions on our Intranets as we are at managing content and coordinating our actions using paper today. The architecture and models above were put forth to provide the first few steps in this direction.

PROXY SERVER
What is a proxy server?

Definition 1:
A proxy server, also known as a "proxy" or "application level gateway", is a computer that acts as a gateway between a local network (e.g., all the computers at one company or in one building) and a larger-scale network such as the Internet. Proxy servers provide increased performance and security. In some cases, they monitor employees' use of outside resources.

A proxy server works by intercepting connections between sender and receiver. All incoming data enters through one port and is forwarded to the rest of the network via another port. By blocking direct access between two networks, proxy servers make it much more difficult for hackers to get internal addresses and details of a private network. Some proxy servers are a group of applications or servers that block common Internet services. For example, an HTTP proxy intercepts web access, and an SMTP proxy intercepts email. A proxy server uses a network addressing scheme to present one organization-wide IP address to the Internet. The server funnels all user requests to the Internet and returns responses to the appropriate users. In addition to restricting access from outside, this mechanism can prevent inside users from reaching specific Internet resources (e.g., certain web sites). A proxy server can also be one of the components of a firewall.
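As a sketch of how a client is pointed at such an organization-wide proxy, Python's standard library can install a proxy handler so that all subsequent requests are funneled through it. The proxy host name and port below are placeholders, not a real server.

```python
import urllib.request

# Hypothetical organization-wide proxy address; replace with your real proxy.
PROXY = "http://proxy.example.internal:8080"

handler = urllib.request.ProxyHandler({"http": PROXY, "https": PROXY})
opener = urllib.request.build_opener(handler)
urllib.request.install_opener(opener)  # later urlopen() calls route via the proxy

# urllib.request.urlopen("http://example.com/")  # would now go through the proxy
print(handler.proxies["http"])  # → http://proxy.example.internal:8080
```

From the outside, every request then appears to originate from the proxy's single organization-wide address, as described above.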

Proxies may also cache web pages. Each time an internal user requests a URL from outside, a temporary copy is stored locally. The next time an internal user requests the same URL, the proxy can serve the local copy instead of retrieving the original across the network, improving performance.

Definition 2:



In computer networks, a proxy server is a server (a computer system or an application program) that services the requests of its clients by forwarding requests to other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource, available from a different server. The proxy server provides the resource by connecting to the specified server and requesting the service on behalf of the client. A proxy server may optionally alter the client's request or the server's response, and sometimes it may serve the request without contacting the specified server. In this case, it would 'cache' the first request to the remote server, so it could save the information for later, and make everything as fast as possible.
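The caching behaviour described in both definitions can be sketched as a lookup that revalidates a stored copy with the origin server. The `origin` function here is a toy stand-in for a real upstream fetch, not an actual network call.

```python
# Sketch of a caching proxy's core logic: serve the stored copy when the
# origin reports the resource unchanged. `origin` below is a toy stand-in.

cache = {}  # url -> (validator, body)

def fetch(url, origin):
    """Return the body for `url`, revalidating any cached copy with the origin."""
    if url in cache:
        validator, body = cache[url]
        status, new_validator, new_body = origin(url, if_modified_since=validator)
        if status == 304:          # Not Modified: cached copy is still fresh
            return body
    else:
        status, new_validator, new_body = origin(url, if_modified_since=None)
    cache[url] = (new_validator, new_body)
    return new_body

def origin(url, if_modified_since):
    """Toy origin server whose content never changes after the first fetch."""
    if if_modified_since == "v1":
        return 304, "v1", None     # nothing new; client may reuse its copy
    return 200, "v1", "<html>widget catalog</html>"

print(fetch("http://intranet/widgets", origin))  # fetched from the origin
print(fetch("http://intranet/widgets", origin))  # served from the cache
```

The second call never transfers the body again; only the small revalidation exchange crosses the network, which is where the bandwidth savings come from.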

NB: A proxy server that passes all requests and replies unmodified is usually called a gateway or sometimes a tunneling proxy. A proxy server can be placed on the user's local computer or at various points between the user and the destination servers or the Internet.

Types and functions

Proxy servers implement one or more of the following functions:

Caching proxy server

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and cost, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. These machines are built to deliver superb file system performance (often with RAID and journaling) and also contain hot-rodded versions of TCP. Caching proxies were the first kind of proxy server.

The HTTP 1.0 and later protocols contain many types of headers for declaring static (cacheable) content and verifying content freshness with an original server, e.g. ETag (validation tags), If-Modified-Since (date-based validation), Expiry (timeout-based invalidation), etc. Other protocols, such as DNS, support expiry only and contain no support for validation. Some poorly implemented caching proxies have had downsides (e.g., an inability to use user authentication). Some problems are described as HTTP Proxy/Caching Problems.

Another important use of the proxy server is to reduce hardware cost. In an organization there may be many systems working on the same network or under the control of one server; in this situation we cannot have an individual Internet connection for every system.
We can simply connect those systems to one proxy server, and the proxy server to the main server.

Web proxy

A proxy that focuses on WWW traffic is called a "web proxy". The most common use of a web proxy is to serve as a web cache. Most proxy programs (e.g. Squid) provide a means to deny access to certain URLs in a blacklist, thus providing content filtering. This is usually used in a corporate environment, though with the increasing use of Linux in small businesses and homes, this function is no longer confined to large corporations. Some web proxies reformat web pages for a specific purpose or audience (e.g., cell phones and PDAs).

AOL dialup customers used to have their requests routed through an extensible proxy that 'thinned' or reduced the detail in JPEG pictures. This sped up performance but caused trouble, either when more resolution was needed or when the thinning program produced incorrect results. This is why in the early days of the web many web pages would contain a link saying "AOL Users Click Here" to bypass the web proxy and avoid the bugs in the thinning software.

Content-filtering web proxy

A content-filtering web proxy server provides administrative control over the content that may be relayed through the proxy. It is commonly used in commercial and noncommercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy. Some common methods used for content filtering include: URL or DNS blacklists, URL regex filtering, MIME filtering, or content keyword filtering. Some products have been known to employ content analysis techniques to look for traits commonly used by certain types of content providers. A content-filtering proxy will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users or to monitor bandwidth usage statistics.
It may also communicate to daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.

Anonymizing proxy server

An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. These can easily be overridden by site administrators, and thus rendered useless in some cases. There are different varieties of anonymizers. One of the more common variations is the open proxy. Because they are typically difficult to track, open proxies are especially useful to those seeking online anonymity, from political dissidents to computer criminals.

Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals.
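Two of the filtering methods listed above, DNS blacklists and URL regex filtering, can be sketched as a simple policy check. The blacklist entries and pattern below are illustrative only, not a real policy.

```python
import re

# Illustrative acceptable-use policy data for a content-filtering proxy.
DNS_BLACKLIST = {"games.example.com", "streaming.example.net"}
URL_PATTERNS = [re.compile(r"/downloads?/.*\.exe$", re.IGNORECASE)]

def allowed(host, path):
    """Return False when a request violates the acceptable-use policy."""
    if host in DNS_BLACKLIST:                # DNS blacklist check
        return False
    return not any(p.search(path) for p in URL_PATTERNS)  # URL regex check

print(allowed("intranet.example.com", "/hr/benefits.html"))  # → True
print(allowed("games.example.com", "/index.html"))           # → False
print(allowed("files.example.com", "/download/setup.exe"))   # → False
```

A production filter such as Squid layers many more checks (MIME types, keywords, per-user rules) on the same basic allow/deny decision, and logs each outcome.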


Hostile proxy

Proxies can also be installed in order to eavesdrop upon the data flow between client machines and the web. All accessed pages, as well as all forms submitted, can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL.

Intercepting proxy server

An intercepting proxy (also known as a "transparent proxy") combines a proxy server with a gateway. Connections made by client browsers through the gateway are redirected through the proxy without client-side configuration (or often knowledge). Intercepting proxies are commonly used in businesses to prevent avoidance of acceptable use policy and to ease administrative burden, since no client browser configuration is required. It is often possible to detect the use of an intercepting proxy server by comparing the external IP address to the address seen by an external web server, or by examining the HTTP headers on the server side.

Transparent and non-transparent proxy server

The term "transparent proxy" is most often used incorrectly to mean "intercepting proxy" (because the client does not need to configure a proxy and cannot directly detect that its requests are being proxied). Transparent proxies can be implemented using Cisco's WCCP (Web Cache Control Protocol). This proprietary protocol resides on the router and is configured from the cache, allowing the cache to determine what ports and traffic are sent to it via transparent redirection from the router. This redirection can occur in one of two ways: GRE tunneling (OSI Layer 3) or MAC rewrites (OSI Layer 2). However, the HTTP/1.1 specification (Hypertext Transfer Protocol) has different definitions: "A 'transparent proxy' is a proxy that does not modify the request or response beyond what is required for proxy authentication and identification".
"A 'non-transparent proxy' is a proxy that modifies the request or response in order to provide some added service to the user agent, such as group annotation services, media type transformation, protocol reduction, or anonymity filtering". Forced proxy The term "forced proxy" is ambiguous. It means both "intercepting proxy" (because it filters all traffic on the only available gateway to the Internet) and its exact opposite, "non-intercepting proxy" (because the user is forced to configure a proxy in order to access the Internet). Forced proxy operation is sometimes necessary due to issues with the interception of TCP connections and HTTP. For instance interception of HTTP requests can affect the usability of a proxy cache, and can greatly affect certain authentication mechanisms. This is primarily because the client thinks it is talking to a server, and so request headers required by a proxy are unable to be distinguished from headers that may be required by
an upstream server (especially authorization headers). Also, the HTTP specification prohibits caching of responses where the request contained an authorization header.

Open proxy server

Because proxies might be used for abuse, system administrators have developed a number of ways to refuse service to open proxies. Many IRC networks automatically test client systems for known types of open proxy. Likewise, an email server may be configured to automatically test e-mail senders for open proxies. Groups of IRC and electronic mail operators run DNSBLs publishing lists of the IP addresses of known open proxies, such as AHBL, CBL, NJABL, and SORBS.

The ethics of automatically testing clients for open proxies are controversial. Some experts, such as Vernon Schryver, consider such testing to be equivalent to an attacker port-scanning the client host. Others consider the client to have solicited the scan by connecting to a server whose terms of service include testing.

Reverse proxy server

A reverse proxy is a proxy server that is installed in the neighborhood of one or more web servers. All traffic coming from the Internet with a destination of one of the web servers goes through the proxy server. There are several reasons for installing reverse proxy servers:




• Encryption / SSL acceleration: when secure web sites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware (see Secure Sockets Layer). Furthermore, a hoster can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts, removing the need for a separate SSL server certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections.
• Load balancing: the reverse proxy can distribute the load to several web servers, each web server serving its own application area. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
• Serve/cache static content: a reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
• Compression: the proxy server can optimize and compress the content to speed up the load time.
• Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
• Security: the proxy server is an additional layer of defense and can protect against some OS and web server specific attacks. However, it does not provide any protection from attacks against the web application or service itself, which is generally considered the larger threat.
• Extranet publishing: a reverse proxy server facing the Internet can be used to communicate to a firewalled server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of your
20 Compiled by Mrs. Wamwati Catherine

Kimathi University College Of Technology INTRANET TECHNOLOGY NOTES

infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.

What is a firewall proxy server?
Most modern firewalls distinguish between packet filtering and proxy server services. A firewall proxy server is an application that acts as an intermediary between two end systems. Firewall proxy servers operate at the application layer of the firewall, where both ends of a connection are forced to conduct the session through the proxy. They do this by creating and running a process on the firewall that mirrors a service as if it were running on the end host.



A firewall proxy server essentially turns a two-party session into a four-party session, with the middle process emulating the two real hosts. Because they operate at the application layer, proxy servers are also referred to as application layer firewalls. A proxy service must be run for each type of Internet application the firewall will support -- a Simple Mail Transfer Protocol (SMTP) proxy for e-mail, an HTTP proxy for Web services, and so on. Proxy servers are almost always one-way arrangements running from the internal network to the outside network. In other words, if an internal user wants to access a Web site on the Internet, the packets making up that request are processed through the HTTP proxy before being forwarded to the Web site. Packets returned from the Web site are in turn processed through the HTTP proxy before being forwarded back to the internal user's host.
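The four-party idea can be sketched as a tiny TCP relay: the proxy accepts a client connection, opens its own connection to the real server, and copies bytes in both directions, so each side only ever talks to the proxy. This is an illustrative sketch, not production code, and the function names are our own:

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket, bufsize: int = 4096) -> int:
    """Copy bytes from src to dst until src closes; return bytes relayed."""
    total = 0
    while True:
        data = src.recv(bufsize)
        if not data:                 # peer closed its sending side
            break
        dst.sendall(data)
        total += len(data)
    return total

def proxy_session(client: socket.socket, upstream: socket.socket) -> None:
    """Emulate both real hosts: relay client->server and server->client."""
    t = threading.Thread(target=relay, args=(upstream, client))
    t.start()
    relay(client, upstream)          # client -> server in this thread
    t.join()
```

In a real proxy, `client` would come from an `accept()` on the inside interface and `upstream` from a `connect()` to the destination; the application-layer inspection described above would happen inside `relay()` before each `sendall()`.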

Because firewall proxy servers centralize all activity for an application into a single server, they present the ideal opportunity to perform a variety of useful functions. Having the application running right on the firewall presents the opportunity to inspect packets for much more than just source/destination addresses and port numbers. This is why nearly all modern firewalls incorporate some form of proxy-server architecture. For example, inbound packets headed to a server set up strictly to dispense information (say, an FTP server) can be inspected to see if they contain any write commands (such as the PUT command). In this way, the proxy server could allow only connections containing read commands.
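The read-only FTP policy described above can be sketched as a small allow-list check on each control-channel command. The command set below is our own illustrative choice (note that in standard FTP the upload command is STOR, though many clients expose it as "put"):

```python
# Commands a read-only FTP proxy might permit (illustrative allow-list).
READ_ONLY_COMMANDS = {
    "USER", "PASS", "QUIT",          # session management
    "TYPE", "PASV", "PORT",          # transfer setup
    "CWD", "PWD", "LIST", "NLST",    # browsing
    "RETR",                          # download (read)
}

def allow_ftp_command(line: bytes) -> bool:
    """Return True if the FTP control-channel line is a read-only command."""
    text = line.decode("ascii", errors="replace").strip()
    command = text.split(" ", 1)[0].upper()
    return command in READ_ONLY_COMMANDS
```

A proxy built this way simply drops (or rejects with an error reply) any command line for which `allow_ftp_command` returns False, so uploads and deletions never reach the protected server.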


FIREWALLS

What Is a Firewall?
A firewall is a secure and trusted machine that sits between a private network and a public network. The firewall machine is configured with a set of rules that determine which network traffic will be allowed to pass and which will be blocked or refused. In some large organizations, you may even find a firewall located inside the corporate network to segregate sensitive areas of the organization from other employees. Many cases of computer crime originate from within an organization, not just from outside.

Firewalls can be constructed in quite a variety of ways. The most sophisticated arrangement involves a number of separate machines and is known as a perimeter network. Two machines act as "filters" called chokes, allowing only certain types of network traffic to pass; between these chokes reside network servers such as a mail gateway or a World Wide Web proxy server. This configuration can be very safe and allows a great range of control over who can connect, both from the inside to the outside and from the outside to the inside. This sort of configuration might be used by large organizations.

Typically, though, firewalls are single machines that serve all of these functions. These are a little less secure, because if there is some weakness in the firewall machine itself that allows people to gain access to it, the whole network's security has been breached. Nevertheless, these types of firewalls are cheaper and easier to manage than the more sophisticated arrangement just described. The figure below illustrates the two most common firewall configurations.

The two major classes of firewall design: Packet-filter and Application layer firewall


Firewall Briefs:
1. What is a firewall?
A firewall protects networked computers from intentional hostile intrusion that could compromise confidentiality or result in data corruption or denial of service. It may be a hardware device (see Figure 1) or a software program (see Figure 2) running on a secure host computer. In either case, it must have at least two network interfaces: one for the network it is intended to protect, and one for the network it is exposed to. A firewall sits at the junction point or gateway between the two networks, usually a private network and a public network such as the Internet. The earliest firewalls were simply routers. The term firewall comes from the fact that, by segmenting a network into different physical subnetworks, they limited the damage that could spread from one subnet to another, just like fire doors or firewalls in a building.

Figure 1: Hardware Firewall
Hardware firewall providing protection to a Local Network

Figure 2: Computer with Firewall Software
Computer running firewall software to provide protection


2. What does a firewall do?
A firewall examines all traffic routed between the two networks to see if it meets certain criteria. If it does, it is routed between the networks; otherwise it is stopped. A firewall filters both inbound and outbound traffic. It can also manage public access to private networked resources such as host applications. It can be used to log all attempts to enter the private network and trigger alarms when hostile or unauthorized entry is attempted. Firewalls can filter packets based on their source and destination addresses and port numbers. This is known as address filtering. Firewalls can also filter specific types of network traffic. This is also known as protocol filtering because the decision to forward or reject traffic is dependent upon the protocol used, for example HTTP, FTP or telnet. Firewalls can also filter traffic by packet attribute or state.
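Address and protocol filtering amounts to matching each packet against an ordered rule list, with a default action when nothing matches. A minimal sketch follows; the rule fields and names are our own, the first matching rule wins, and the default is deny:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Rule:
    action: str                 # "allow" or "deny"
    src: Optional[str] = None   # None means "match any"
    dst: Optional[str] = None
    dport: Optional[int] = None
    proto: Optional[str] = None

def filter_packet(rules: List[Rule], src: str, dst: str,
                  dport: int, proto: str) -> str:
    """Return the action of the first matching rule, or deny by default."""
    for rule in rules:
        if ((rule.src is None or rule.src == src) and
                (rule.dst is None or rule.dst == dst) and
                (rule.dport is None or rule.dport == dport) and
                (rule.proto is None or rule.proto == proto)):
            return rule.action
    return "deny"               # deny-all default
```

Because rules are evaluated in order, a specific deny (say, for one hostile address) must appear before a broad allow (say, for all web traffic), just as in real router access lists.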

3. What can't a firewall do?
A firewall cannot prevent individual users with modems from dialling into or out of the network, bypassing the firewall altogether. Employee misconduct or carelessness cannot be controlled by firewalls. Policies involving the use and misuse of passwords and user accounts must be strictly enforced. These are management issues that should be raised during the planning of any security policy but that cannot be solved with firewalls alone. The arrest of the Phone-masters cracker ring brought these security issues to light. Although they were accused of breaking into information systems run by AT&T Corp., British Telecommunications Inc., GTE Corp., MCI WorldCom, Southwestern Bell, and Sprint Corp, the group did not use any high tech methods such as IP spoofing (see question 10). They used a combination of social engineering and dumpster diving. Social engineering involves skills not unlike those of a confidence trickster. People are tricked into revealing sensitive information. Dumpster diving or garbology, as the name suggests, is just plain old looking through company trash. Firewalls cannot be effective against either of these techniques.

4. Who needs a firewall?
Anyone who is responsible for a private network that is connected to a public network needs firewall protection. Furthermore, anyone who connects so much as a single computer to the Internet via modem should have personal firewall software. Many dial-up Internet users believe that anonymity will protect them. They feel that no malicious intruder would be motivated to break into their computer. Dial up users who have been victims of malicious attacks and who have lost entire days of work, perhaps having to reinstall their operating system, know that this is not true. Irresponsible pranksters can use automated robots to scan random IP addresses and attack whenever the opportunity presents itself.

5. How does a firewall work?
There are two access denial methodologies used by firewalls. A firewall may allow all traffic through unless it meets certain criteria, or it may deny all traffic unless it meets certain criteria (see figure 3). The type of criteria used to determine whether traffic should be allowed through varies from one type of firewall to another. Firewalls may be concerned with the type of traffic, or with source or

destination addresses and ports. They may also use complex rule bases that analyse the application data to determine if the traffic should be allowed through. How a firewall determines what traffic to let through depends on which network layer it operates at. A discussion on network layers and architecture follows.

Figure 3: Basic Firewall Operation

6. What are the OSI and TCP/IP Network models?
To understand how firewalls work it helps to understand how the different layers of a network interact. Network architecture is designed around a seven-layer model. Each layer has its own set of responsibilities, and handles them in a well-defined manner. This enables networks to mix and match network protocols and physical supports. In a given network, a single protocol can travel over more than one physical support (layer one) because the physical layer has been dissociated from the protocol layers (layers three to seven). Similarly, a single physical cable can carry more than one protocol. The TCP/IP model is older than the OSI industry-standard model, which is why it does not comply with it in every respect. The first four layers are so closely analogous to OSI layers, however, that interoperability is a day-to-day reality.

Firewalls operate at different layers and use different criteria to restrict traffic. The lowest layer at which a firewall can work is layer three. In the OSI model this is the network layer. In TCP/IP it is the Internet Protocol layer. This layer is concerned with routing packets to their destination. At this layer a firewall can determine whether a packet is from a trusted source, but cannot be concerned with what it contains or what other packets it is associated with. Firewalls that operate at the transport layer know a little more about a packet, and are able to grant or deny access depending on more sophisticated criteria. At the application level, firewalls know a great deal about what is going on and can be very selective in granting access.


Figure 4: The OSI and TCP/IP models

It would appear, then, that firewalls functioning at a higher level in the stack must be superior in every respect. This is not necessarily the case. The lower in the stack the packet is intercepted, the more secure the firewall. If the intruder cannot get past layer three, it is impossible to gain control of the operating system.

Figure 5: Professional Firewalls Have Their Own IP Layer

Professional firewall products catch each network packet before the operating system does; thus, there is no direct path from the Internet to the operating system's TCP/IP stack. It is therefore very difficult for an intruder to gain control of the firewall host computer and then "open the doors" from the inside. According to Byte Magazine, traditional firewall technology is susceptible to misconfiguration on non-hardened OSes. More recently, however, "...firewalls have moved down the protocol stack so far that the OS doesn't have to do much more than act as a bootstrap loader, file system and GUI". The author goes on to state that newer firewall code bypasses the operating system's IP layer altogether, never permitting "potentially hostile traffic to make its way up the protocol stack to applications running on the system".

7. What different types of firewalls are there?
Firewalls fall into four broad categories:
o Packet filters
o Circuit level gateways
o Application level gateways
o Stateful multilayer inspection firewalls
Packet filtering firewalls work at the network level of the OSI model, or the IP layer of TCP/IP. They are usually part of a router. A router is a device that receives packets from one network and forwards them to another network. In a packet filtering firewall each packet is compared to a set of criteria before it is forwarded. Depending on the packet and the criteria, the firewall can drop the packet, forward it, or send a message to the originator. Rules can include source and destination IP address, source and destination port number, and protocol used. The advantage of packet filtering firewalls is their low cost and low impact on network performance. Most routers support packet filtering. Even if other firewalls are used, implementing packet filtering at the router level affords an initial degree of security at a low network layer. This type of firewall only works at the network layer, however, and does not support sophisticated rule-based models (see Figure 6). Network Address Translation (NAT) routers offer the advantages of packet filtering firewalls but can also hide the IP addresses of computers behind the firewall, and offer a level of circuit-based filtering.

Figure 6: Packet Filtering Firewall
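The address-hiding behaviour of a NAT router can be sketched with a translation table: outbound packets are rewritten to the router's public address and a per-connection public port, and inbound packets are deliverable only if a matching mapping already exists. The class and method names below are our own illustrative choices:

```python
class NatRouter:
    """Toy port-based NAT: maps (private ip, port) pairs to public ports."""

    def __init__(self, public_ip: str):
        self.public_ip = public_ip
        self.next_port = 49152            # start of the dynamic port range
        self.by_public_port = {}          # public port -> (priv ip, priv port)
        self.by_private = {}              # (priv ip, priv port) -> public port

    def outbound(self, priv_ip: str, priv_port: int):
        """Rewrite an outbound packet's source; create a mapping if needed."""
        key = (priv_ip, priv_port)
        if key not in self.by_private:
            self.by_private[key] = self.next_port
            self.by_public_port[self.next_port] = key
            self.next_port += 1
        return (self.public_ip, self.by_private[key])

    def inbound(self, public_port: int):
        """Return the private destination, or None (drop) if unmapped."""
        return self.by_public_port.get(public_port)
```

An inbound packet addressed to a public port with no mapping is simply dropped, which is why unsolicited traffic from the Internet cannot reach the private hosts.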

Circuit level gateways work at the session layer of the OSI model, or the TCP layer of TCP/IP. They monitor TCP handshaking between packets to determine whether a requested session is legitimate. Information passed to a remote computer through a circuit level gateway appears to have originated from the gateway. This is useful for hiding information about protected networks. Circuit level gateways are relatively inexpensive and have the advantage of hiding information about the private network they protect. On the other hand, they do not filter individual packets.

Figure 7: Circuit level Gateway

Application level gateways, also called proxies, are similar to circuit level gateways except that they are application specific. They can filter packets at the application layer of the OSI model. Incoming or outgoing packets cannot access services for which there is no proxy. In plain terms, an application level gateway that is configured to be a web proxy will not allow any FTP, gopher, telnet or other traffic through. Because they examine packets at the application layer, they can filter application-specific commands such as HTTP POST and GET. This cannot be accomplished with either packet filtering firewalls or circuit level gateways, neither of which knows anything about the application level information. Application level gateways can also be used to log user activity and logins. They offer a high level of security, but have a significant impact on network performance. This is because of context switches that slow down network access dramatically. They are not transparent to end users and require manual configuration of each client computer. (See Figure 8)

Figure 8: Application level Gateway
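Filtering application-specific commands such as GET and POST can be sketched as a check on the HTTP request line; the read-only policy below is our own illustrative choice:

```python
# Methods a read-only web gateway might permit (illustrative policy).
ALLOWED_METHODS = {"GET", "HEAD"}

def allow_http_request(request_line: str) -> bool:
    """Accept a request line like 'GET /index.html HTTP/1.1' only if the
    method is on the allow-list and the line is well formed."""
    parts = request_line.strip().split()
    return len(parts) == 3 and parts[0] in ALLOWED_METHODS
```

Because the gateway sees the full request line, it can make this decision per command, which a packet filter working only on addresses and ports cannot do.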


Stateful multilayer inspection firewalls combine the aspects of the other three types of firewall. They filter packets at the network layer, determine whether session packets are legitimate, and evaluate the contents of packets at the application layer. They allow direct connection between client and host, alleviating the problem caused by the lack of transparency of application level gateways. They rely on algorithms to recognize and process application layer data instead of running application-specific proxies. Stateful multilayer inspection firewalls offer a high level of security, good performance and transparency to end users. They are expensive, however, and due to their complexity are potentially less secure than simpler types of firewalls if not administered by highly competent personnel. (See Figure 9)

Figure 9: Stateful Multilayer Inspection Firewall
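The core of stateful inspection is a connection table: outbound packets register a flow, and inbound packets are admitted only when they belong to a flow already seen. A minimal sketch (names are our own; real products also track TCP flags, timeouts, and application data):

```python
class StatefulFirewall:
    """Allow all outbound flows; allow inbound only for established flows."""

    def __init__(self):
        self.established = set()   # flows stored as (src, sport, dst, dport)

    def outbound(self, src: str, sport: int, dst: str, dport: int) -> str:
        # Remember the flow so the reply direction can be matched later.
        self.established.add((dst, dport, src, sport))
        return "allow"

    def inbound(self, src: str, sport: int, dst: str, dport: int) -> str:
        if (src, sport, dst, dport) in self.established:
            return "allow"         # reply to a connection we initiated
        return "deny"              # unsolicited inbound traffic
```

This is why a stateful firewall can remain transparent to internal users while still rejecting unsolicited traffic from outside.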

8. How do I implement a firewall?
We suggest you approach the task of implementing a firewall by going through the following steps:

a. Determine the access denial methodology to use. It is recommended you begin with the methodology that denies all access by default. In other words, start with a gateway that routes no traffic and is effectively a brick wall with no doors in it.

b. Determine inbound access policy. If all of your Internet traffic originates on the LAN this may be quite simple. A straightforward NAT router will block all inbound traffic that is not in response to requests originating from within the LAN. As previously mentioned, the true IP addresses of hosts behind the firewall are never revealed to the outside world, making intrusion extremely difficult. Indeed, local host IP addresses in this type of configuration are usually non-public addresses, making it impossible to route traffic to them from the Internet. Packets coming in from the Internet in response to requests from local hosts are addressed to dynamically allocated port numbers on the public side of the NAT router. These change rapidly making it difficult or

impossible for an intruder to make assumptions about which port numbers to use. If your requirements involve secure access to LAN-based services from Internet-based hosts, then you will need to determine the criteria to be used in deciding when a packet originating from the Internet may be allowed into the LAN. The stricter the criteria, the more secure your network will be. Ideally you will know which public IP addresses on the Internet may originate inbound traffic. By limiting inbound traffic to packets originating from these hosts, you decrease the likelihood of hostile intrusion. You may also want to limit inbound traffic to certain protocol sets such as FTP or HTTP. All of these techniques can be achieved with packet filtering on a NAT router. If you cannot know the IP addresses that may originate inbound traffic, and you cannot use protocol filtering, then you will need a more complex rule-based model, and this will involve a stateful multilayer inspection firewall.

c. Determine outbound access policy. If your users only need access to the web, a proxy server may give a high level of security with access granted selectively to appropriate users. As mentioned, however, this type of firewall requires manual configuration of each web browser on each machine. Outbound protocol filtering can also be transparently achieved with packet filtering and no sacrifice in security. If you are using a NAT router with no inbound mapping of traffic originating from the Internet, then you may allow LAN users to freely access all services on the Internet with no security compromise. Naturally, the risk of employees behaving irresponsibly with email or with external hosts is a management issue and must be dealt with as such.

d. Determine if dial-in or dial-out access is required. Dial-in requires a secure remote access PPP server that should be placed outside the firewall.
If dial-out access is required by certain users, individual dial-out computers must be made secure in such a way that hostile access to the LAN through the dial-out connection becomes impossible. The surest way to do this is to physically isolate the computer from the LAN. Alternatively, personal firewall software may be used to isolate the LAN network interface from the remote access interface.

e. Decide whether to buy a complete firewall product, have one implemented by a systems integrator, or implement one yourself. Once the above questions have been answered, it may be decided whether to buy a complete firewall product or to configure one from multipurpose routing or proxy software. This decision will depend as much on the availability of in-house expertise as on the complexity of the need. A satisfactory firewall may be built with little expertise if the requirements are straightforward. However, complex requirements will not necessarily entail recourse to external resources if the system administrator has sufficient grasp of the elements. Indeed, as the complexity of the security model increases, so does the need for in-house expertise and autonomy.

9. Is a firewall sufficient to secure my network or do I need anything else?
The firewall is an integral part of any security program, but it is not a security program in and of itself. Security involves data integrity (has it been modified?), service or application integrity (is the service available, and is it performing to spec?), data confidentiality (has anyone seen it?) and authentication (are they really who they say they are?). Firewalls only address the issues of data integrity, confidentiality and authentication of data that is behind the firewall. Any data that transits outside the firewall is subject to factors out of the control of the firewall. It is therefore necessary for an organization to have a well planned and strictly implemented security program that includes but is not limited to firewall protection.

10. What is IP spoofing?
Many firewalls examine the source IP addresses of packets to determine if they are legitimate. A firewall may be instructed to allow traffic through if it comes from a specific trusted host. A malicious cracker would then try to gain entry by "spoofing" the source IP address of packets sent to the firewall. If the firewall thought that the packets originated from a trusted host, it may let them through unless other criteria failed to be met. Of course the cracker would need to know a good deal about the firewall's rule base to exploit this kind of weakness. This reinforces the principle that technology alone will not solve all security problems. Responsible management of information is essential. One of Courtney's laws sums it up: "There are management solutions to technical problems, but no technical solutions to management problems". An effective measure against IP spoofing is the use of a Virtual Private Network (VPN) protocol such as IPSec. This methodology involves encryption of the data in the packet as well as the source address. The VPN software or firmware decrypts the packet and the source address and performs a checksum. If either the data or the source address has been tampered with, the packet will be dropped. Without access to the encryption keys, a potential intruder would be unable to penetrate the firewall.
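The tamper check described above can be illustrated with a keyed message authentication code computed over the source address and payload; without the shared key, a spoofer cannot produce a valid tag. This is a simplified sketch of the idea only (IPsec itself uses the AH/ESP protocols, not this exact scheme):

```python
import hashlib
import hmac

def seal(key: bytes, src_addr: str, payload: bytes) -> bytes:
    """Compute an authentication tag binding the source address to the data."""
    message = src_addr.encode("ascii") + b"|" + payload
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(key: bytes, src_addr: str, payload: bytes, tag: bytes) -> bool:
    """Accept the packet only if neither address nor data was tampered with."""
    return hmac.compare_digest(seal(key, src_addr, payload), tag)
```

A packet whose source address has been spoofed, or whose data has been modified in transit, fails verification and is dropped, exactly the behaviour the text describes for a VPN gateway.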

11. Firewall related problems
Firewalls introduce problems of their own. Information security involves constraints, and users don't like this. It reminds them that Bad Things can and do happen. Firewalls restrict access to certain services. The vendors of information technology are constantly telling us "anything, anywhere, any time", and we believe them naively. Of course they forget to tell us we need to log in and out, to memorize our 27 different passwords, not to write them down on a sticky note on our computer screen and so on. Firewalls can also constitute a traffic bottleneck. They concentrate security in one spot, aggravating the single point of failure phenomenon. The alternatives however are either no Internet access, or no security, neither of which are acceptable in most organizations.

12. Benefits of a firewall


Firewalls protect private local area networks from hostile intrusion from the Internet. Consequently, many LANs are now connected to the Internet where Internet connectivity would otherwise have been too great a risk. Firewalls allow network administrators to offer access to specific types of Internet services to selected LAN users. This selectivity is an essential part of any information management program, and involves not only protecting private information assets, but also knowing who has access to what. Privileges can be granted according to job description and need rather than on an all-or-nothing basis.

What are the basic types of firewalls?
Conceptually, there are two types of firewalls:
1. Network layer
2. Application layer
They are not as different as you might think, and the latest technologies are blurring the distinction to the point where it is no longer clear if either one is "better" or "worse". As always, you need to be careful to pick the type that meets your needs.

The International Organization for Standardization (ISO) Open Systems Interconnection (OSI) model for networking defines seven layers, where each layer provides services that "higher-level" layers depend on. In order from the bottom, these layers are physical, data link, network, transport, session, presentation and application. The important thing to recognize is that the lower-level the forwarding mechanism, the less examination the firewall can perform. Generally speaking, lower-level firewalls are faster, but are easier to fool into doing the wrong thing.

Network layer firewalls
These generally make their decisions based on the source and destination addresses and ports in individual IP packets. A simple router is the "traditional" network layer firewall, since it is not able to make particularly sophisticated decisions about what a packet is actually talking to or where it actually came from.
Modern network layer firewalls have become increasingly sophisticated, and now maintain internal information about the state of connections passing through them, the contents of some of the data streams, and so on. One important distinction about many network layer firewalls is that they route traffic directly through them, so to use one you either need to have a validly assigned IP address block or to use a "private internet" address block. Network layer firewalls tend to be very fast and very transparent to users.

Figure 1: Screened Host Firewall


In Figure 1, a network layer firewall called a "screened host firewall" is represented. In a screened host firewall, access to and from a single host is controlled by means of a router operating at the network layer. The single host is a bastion host: a highly defended and secured strong point that (hopefully) can resist attack.

Figure 2: Screened Subnet Firewall

Example network layer firewall: In Figure 2, a network layer firewall called a "screened subnet firewall" is represented. In a screened subnet firewall, access to and from a whole network is controlled by means of a router operating at the network layer. It is similar to a screened host, except that it is, effectively, a network of screened hosts.

Application layer firewalls
These generally are hosts running proxy servers, which permit no traffic directly between networks, and which perform elaborate logging and auditing of traffic passing through them. Since the proxy applications are software components running on the

firewall, it is a good place to do lots of logging and access control. Application layer firewalls can be used as network address translators, since traffic goes in one "side" and out the other, after having passed through an application that effectively masks the origin of the initiating connection. Having an application in the way may in some cases impact performance and may make the firewall less transparent. Early application layer firewalls, such as those built using the TIS firewall toolkit, are not particularly transparent to end users and may require some training. Modern application layer firewalls are often fully transparent. Application layer firewalls tend to provide more detailed audit reports and tend to enforce more conservative security models than network layer firewalls.

Example application layer firewall: In Figure 3, an application layer firewall called a "dual homed gateway" is represented. A dual homed gateway is a highly secured host that runs proxy software. It has two network interfaces, one on each network, and blocks all traffic passing directly through it.

The future of firewalls lies somewhere between network layer firewalls and application layer firewalls. It is likely that network layer firewalls will become increasingly "aware" of the information going through them, and application layer firewalls will become increasingly "low level" and transparent. The end result will be a fast packet-screening system that logs and audits data as it passes through. Increasingly, firewalls (network and application layer) incorporate encryption so that they may protect traffic passing between them over the Internet. Firewalls with end-to-end encryption can be used by organizations with multiple points of Internet connectivity to use the Internet as a "private backbone" without worrying about their data or passwords being sniffed.


Figure 3: Dual Homed Gateway


WEB SECURITY
Web security, like computer security in general, is a career in itself. These notes merely skim the surface of the topic, pointing out the issues that are of particular importance to someone attempting to secure a web application.

What is Computer Security?


"A computer is secure if you can depend on it and its software to behave as you expect."

Unfortunately, this terribly vague definition is the only one that covers all of the aspects of computer security. The scope of the problem is very large, and the solutions are not well understood.

Consider the different types of acts that computer security is intended to protect against:
• data and program deletion or corruption;
• theft of identity, intellectual property, information assets, physical assets, and money;
• denial of service;
• illicit use of computer resources;
• using a compromised computer to launch further attacks; and
• opening of security "holes" to enable future exploits.

While much of the effort in computer security is focused on countering the threat from malicious humans, a truly secure computer must also contend with the threats posed by legitimate, though careless, humans as well as acts of nature.


What is Web Security?


The web poses some additional security troubles because:
• very many different computers are involved in any networked environment;
• the fundamental protocols of the Internet were not designed with security in mind; and
• the physical infrastructure of the Internet is not owned or controlled by any

one organization, and no guarantees can be made concerning the integrity and security of any part of the Internet.

Unfortunately, a web-based system is often advertised as "secure" merely because the web server uses SSL encryption to protect portions of the site. As we'll soon see, there is a great deal more to the story than that.

Hardware










Physical access to computer hardware gives even a slightly skilled person total control of that hardware. Without physical security to protect hardware (i.e. doors that lock) nothing else about a computer system can be called secure. There are many ways in which malicious humans can attack hardware:
• using operating system installation floppies and CDs to circumvent normal OS access control to devices and hard disk contents;
• physical removal or destruction of the hardware;
• electromagnetic interference, including nuclear EMP munitions and e-bombs;
• direct eavesdropping technologies such as keyboard loggers and network sniffers; and
• indirect eavesdropping technologies such as van Eck phreaking (reconstituting the display of a computer monitor in a remote location by gathering the radiation emitted from that monitor).

Hardware is also most susceptible to natural occurrences:
• water and humidity;
• smoke and dust;
• heat and fire;
• lightning and other electrical phenomena;
• radiation, particularly alpha particles, which can flip memory bits;
• flora and fauna, especially circuit board-eating molds and insects; and
• weather and geological effects such as tornadoes, hurricanes, and earthquakes.

Securing hardware is usually a matter of installing locking doors and electromagnetic shielding, deploying redundant hardware in remote locations, installing temperature/moisture/air quality controls and filters, performing and checking file system backups, and so forth. Networking hardware is susceptible to all of the above problems, but often must be exposed (i.e. cables), which makes it a fine target for attack. Some simple things greatly improve the security of LANs: for example, installing switches instead of hubs limits Ethernet's chatty broadcasts, making it much harder to jack in and eavesdrop with a "promiscuous" NIC.

Operating System

As the software charged with controlling access to the hardware, the filesystem, and the network, weaknesses in an operating system are the most valued amongst crackers.
• When we speak here of an operating system, we really mean just the kernel, filesystem(s), network software/stack, and authentication (who are you?) and authorization (what can you do?) mechanisms.
• Most OS authentication is handled through user names and passwords. Biometric (e.g. voice, face, retina, iris, fingerprint) and physical token-based (swipe cards, PIN-generating cards) authentication are sometimes used to augment simple passwords, but the costs and accuracy of the technology limit their adoption.
• Once authenticated, the OS is responsible for enforcing authorization rules for a user's account. The guiding thought here is the Principle of Least Privilege:


36 Compiled by Mrs. Wamwati Catherine

Kimathi University College Of Technology INTRANET TECHNOLOGY NOTES

disallow every permission that isn't explicitly required.
• Protecting an operating system from attack is a cat-and-mouse game that requires constant vigilance. Obviously, code patches must be applied (if the benefit of the patch is deemed to outweigh the risk of changing a functioning system), but system logs must also be gathered and studied on a regular basis to identify suspicious activity.
• A number of tools can be used to strengthen and monitor the security of an OS:
• filesystem rights (sometimes access control lists) and partition mount permissions limit non-superuser accounts to only the files they require;
• disk quotas prevent users from intentionally or accidentally filling a disk, thereby denying other users' access to the partition;
• change-detection software (e.g. Tripwire) reports modifications to system-critical files and directories;
• firewalls (i.e. packet filters, proxy servers, Network Address Translation, and Virtual Private Networks) help to block out spurious network traffic, but don't stop attacks on the layers that follow;
• intrusion-detection software (e.g. Snort) identifies network-based attacks based on a library of attack profiles; and
• anti-virus software removes, disables, or warns about dangerous viruses, worms, or trojan horses.
• For server computers, the most important rule is to install and run only those software packages that are absolutely required. The more programs that are running, the greater the opportunity for someone to find a hole in the defenses.

Vulnerabilities
The following examples illustrate:
• Attempted break-in: Someone attempting to break into a system might generate an abnormally high rate of password failures with respect to a single account or the system as a whole.
• Masquerading or successful break-in: Someone logging into a system through an unauthorized account and password might have a different login time, location, or connection type from that of the account's legitimate user.
In addition, the penetrator's behavior may differ considerably from that of the legitimate user; in particular, he might spend most of his time browsing through directories and executing system status commands, whereas the legitimate user might concentrate on editing or compiling and linking programs. Many break-ins have been discovered by security officers or other users on the system who have noticed the alleged user behaving strangely.
• Penetration by legitimate user: A user attempting to penetrate the security mechanisms in the operating system might execute different programs or trigger more protection violations from attempts to access unauthorized files or programs. If his attempt succeeds, he will have access to commands and files not normally permitted to him.
• Leakage by legitimate user: A user trying to leak sensitive documents might log into the system at unusual times or route data to remote printers not normally used.
• Inference by legitimate user: A user attempting to obtain unauthorized data from a database through aggregation and inference might retrieve more records than usual.
• Trojan horse: The behavior of a Trojan horse planted in or substituted for a program may differ from the legitimate program in terms of its CPU time or I/O activity.
• Virus: A virus planted in a system might cause an increase in the frequency of
executable files rewritten, storage used by executable files, or a particular program being executed as the virus spreads.
• Denial-of-Service: An intruder able to monopolize a resource (e.g., network) might have abnormally high activity with respect to the resource, while activity for all other users is abnormally low.
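Several of the anomalies above (attempted break-ins, inference, denial of service) come down to flagging activity whose rate exceeds a baseline. The sketch below shows the simplest form of this idea for password failures; the account names, event format, and threshold value are illustrative assumptions, not from the notes.

```python
from collections import Counter

# Hypothetical audit-log entries as (account, event) pairs; a real system
# would parse these from the OS authentication log.
FAILURE_THRESHOLD = 5  # failures per audit window considered abnormal (assumed)

def flag_suspect_accounts(events, threshold=FAILURE_THRESHOLD):
    """Return accounts whose password-failure count exceeds the threshold."""
    failures = Counter(acct for acct, event in events if event == "login_failed")
    return sorted(acct for acct, n in failures.items() if n > threshold)

events = ([("alice", "login_ok")]
          + [("mallory", "login_failed")] * 8
          + [("bob", "login_failed")] * 2)
print(flag_suspect_accounts(events))  # → ['mallory']
```

The same counting-and-threshold pattern applies to the other profiles in the list, e.g. records retrieved per query for inference, or requests per second for denial of service.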

Intrusion detection and incident response often go together; in fact, they work well together even if you are doing things manually. The following table illustrates this:
Intrusion Detection and Incident Response Guidelines

Virus: Look for longer file lengths in system and application files. Immediately discontinue use of any infected computer and put a quarantine sign on it. Isolate and make a copy of the virus. Eradicate it on all desktops and floppies.

Macro virus: Look for virus warnings when opening documents that use an application's own macro programming language. Isolate and make a copy of the virus. Disinfect all documents.

Worms: Look for unfamiliar processes running (usually with an unusual name) that consume system processing capacity; worms also write unusual messages to users. Try to find and save a copy of the worm code. Reconfigure the firewall and disinfect the system.

Trojan horse: Train users to avoid downloading freeware or installing software of unknown source, as this is the most common entry of Trojans. Impossible to detect beforehand; once discovered, discontinue use of affected machines. Eradicate by doing a complete uninstall of the software program or the Trojan part of the program.

Hacking utilities: Look for programs planted in the system that elevate privileges, obtain passwords, or disguise their presence, by running a checksum utility. Identify the hacker and lock them out while killing processes they've created, or set up a fishbowl to obtain more information. Save copies of the utilities. Reconfigure passwords, directory and file systems.

DoS attack: Look for system slowdown or crash. Reconfigure the router to minimize the effect of the flooding. Establishing the identity of the attacker may not be a worthwhile investment of time.

Web defacement: Look for altered web pages. Investigate while the site stays online with a partial fix, then restore it to its original status.

Theft/Unauthorized use: Look at logs and records of activity. Interview the attacker if possible. Perform forensic duplication. Decide whether to prosecute.
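The "checksum utility" detection technique mentioned for hacking utilities (and the Tripwire-style change detection mentioned earlier) can be sketched in a few lines: record a baseline of file digests, then compare a fresh scan against it. The file paths and contents below are illustrative assumptions.

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of file contents, used as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

def changed_files(baseline: dict, current: dict) -> list:
    """Compare a stored {path: digest} baseline against a fresh scan;
    any path whose digest differs (or is new) is reported."""
    return sorted(path for path, digest in current.items()
                  if baseline.get(path) != digest)

# Hypothetical scan: the binary's contents have been replaced.
baseline = {"/bin/ls": checksum(b"original program")}
current  = {"/bin/ls": checksum(b"trojaned program")}
print(changed_files(baseline, current))  # → ['/bin/ls']
```

A real deployment would store the baseline on read-only media, since an intruder who can rewrite the baseline defeats the check.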

ACCESS CONTROL Access to protected information must be restricted to people who are authorized to access the information. The computer programs, and in many cases the computers that process the information, must also be authorized. This requires that mechanisms be in place to control the access to protected information. The sophistication of the access
control mechanisms should be in parity with the value of the information being protected: the more sensitive or valuable the information, the stronger the control mechanisms need to be. The foundation on which access control mechanisms are built starts with identification and authentication.

Identification is an assertion of who someone is or what something is. If a person makes the statement "Hello, my name is John Doe," they are making a claim of who they are. However, their claim may or may not be true. Before John Doe can be granted access to protected information it will be necessary to verify that the person claiming to be John Doe really is John Doe.

Authentication is the act of verifying a claim of identity. When John Doe goes into a bank to make a withdrawal, he tells the bank teller he is John Doe (a claim of identity). The bank teller asks to see a photo ID, so he hands the teller his driver's license. The bank teller checks the license to make sure it has John Doe printed on it and compares the photograph on the license against the person claiming to be John Doe. If the photo and name match the person, then the teller has authenticated that John Doe is who he claimed to be.

There are three different types of information that can be used for authentication: something you know, something you have, or something you are. Examples of something you know include such things as a PIN, a password, or your mother's maiden name. Examples of something you have include a driver's license or a magnetic swipe card. Something you are refers to biometrics; examples include palm prints, fingerprints, voice prints and retina (eye) scans. Strong authentication requires providing information from two of the three different types of authentication information, for example something you know plus something you have. This is called two-factor authentication.
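The "something you know" factor is typically verified without ever storing the secret itself: the system stores a salted hash of the password and re-derives it at login. A minimal sketch using Python's standard library (the password and iteration count are illustrative assumptions):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Derive a salted hash; only the (salt, digest) pair is stored,
    never the password itself."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Authenticate a claimed identity by re-deriving and comparing.
    compare_digest avoids leaking information through timing."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("s3cret")          # at enrollment
print(verify("s3cret", salt, digest))           # → True  (claim verified)
print(verify("guess", salt, digest))            # → False (claim rejected)
```

Adding a second factor (a token code or biometric) would be a separate check layered on top of this one; passing any single factor alone would not authenticate the user.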
On computer systems in use today, the username is the most common form of identification and the password is the most common form of authentication. Usernames and passwords have served their purpose, but in our modern world they are no longer adequate and are slowly being replaced with more sophisticated authentication mechanisms.

After a person, program or computer has successfully been identified and authenticated, it must be determined what informational resources they are permitted to access and what actions they will be allowed to perform (run, view, create, delete, or change). This is called authorization. Authorization to access information and other computing services begins with administrative policies and procedures. The policies prescribe what information and computing services can be accessed, by whom, and under what conditions. The access control mechanisms are then configured to enforce these policies.

Different computing systems are equipped with different kinds of access control mechanisms; some may even offer a choice of several. The access control mechanism a system offers will be based upon one of three approaches to access control, or it may be derived from a combination of the three. The non-discretionary approach consolidates all access control under a centralized administration; access to information and other resources is usually based on the individual's function (role) in the organization or the tasks the individual must perform.
The discretionary approach gives the creator or owner of the information resource the ability to control access to those resources. In the mandatory access control approach, access is granted or denied based upon the security classification assigned to the information resource.

Examples of common access control mechanisms in use today include role-based access control, available in many advanced database management systems; simple file permissions provided in the UNIX and Windows operating systems; Group Policy Objects provided in Windows network systems; Kerberos; RADIUS; TACACS; and the simple access lists used in many firewalls and routers. To be effective, policies and other security controls must be enforceable and upheld. Effective policies ensure that people are held accountable for their actions. All failed and successful authentication attempts must be logged, and all access to information must leave some type of audit trail.

In computer security, an access control list (ACL) is a list of permissions attached to an object. The list specifies who or what is allowed to access the object and what operations are allowed to be performed on the object. In a typical ACL, each entry in the list specifies a subject and an operation: for example, the entry (Alice, delete) on the ACL for file WXY gives Alice permission to delete file WXY.
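The (Alice, delete) example above can be turned into a tiny working sketch. The object names and entries are the hypothetical ones from the text; note the default-deny behavior, which matches the Principle of Least Privilege.

```python
# Minimal ACL: each object maps to a set of (subject, operation) entries,
# mirroring the (Alice, delete) entry on file WXY from the text.
acl = {
    "WXY": {("Alice", "delete"), ("Bob", "read")},
}

def is_allowed(obj: str, subject: str, operation: str) -> bool:
    """Default-deny check: the permission must appear explicitly on
    the object's list; anything not listed is refused."""
    return (subject, operation) in acl.get(obj, set())

print(is_allowed("WXY", "Alice", "delete"))  # → True  (entry exists)
print(is_allowed("WXY", "Alice", "read"))    # → False (no such entry)
print(is_allowed("ZZZ", "Alice", "delete"))  # → False (object has no ACL)
```

Real ACL implementations (filesystem ACLs, firewall access lists) add ordering, wildcards, and group entries, but the lookup shown here is the core idea.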

CLIENT SERVER ARCHITECTURE
Client/server architecture. As a result of the limitations of file-sharing architectures, the client/server architecture emerged. This approach introduced a database server to replace the file server. Using a relational database management system (DBMS), user queries could be answered directly. The client/server architecture reduced network traffic by providing a query response rather than a total file transfer, and it improved multi-user updating through a GUI front end to a shared database. In client/server architectures, Remote Procedure Calls (RPCs) or Structured Query Language (SQL) statements are typically used to communicate between the client and server.

Client/server is a network application architecture which separates the client (usually the graphical user interface) from the server. Each instance of the client software connects to a server or application server. Client/server is a scalable architecture whereby each computer or process on the network is either a client or a server. Server software generally, but not always, runs on powerful computers dedicated exclusively to running the business application. Client software, on the other hand, generally runs on common PCs or workstations. Clients get all or most of their information from, and rely on, the application server for things such as configuration files, stock quotes and business application programs, or offload compute-intensive application tasks back to the server to keep the client computer (and client computer user) free to perform other tasks. A popular client in widespread use today is the web browser, which communicates with web servers over the Internet to fetch and display web page content.

Another type of client in the client/server architecture is known as a thin client, which is a minimal client. Thin clients utilize as few resources on the host PC as possible; a thin client's job is generally just to graphically display information from the application server. This allows a company the ease of managing its business logic for all applications at a central location. Application servers usually store data on a third machine, known as the database server. This is called three-tier architecture, whereas generic client/server architecture is two-tier. In general, n-tier or multi-tier architecture may deploy any number of distinct services, including transitive relations between application servers implementing different functions of business logic, each of which may or may not employ a distinct or shared database system. Another type of network architecture is known as peer-to-peer, because each node or instance of the program is both a client and a server, and each has equivalent responsibilities. Both client/server and peer-to-peer architectures are in wide use, and each has advantages and disadvantages.



The client-server software architecture model distinguishes client systems from server systems, which communicate over a computer network. A client-server application is a distributed system comprising both client and server software. A client software process may initiate a communication session, while the server waits for requests from any client.

Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access and database access, are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you. The client-server model has become one of the central ideas of network computing. Most business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, DNS, etc. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing. Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.
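The request/response exchange just described can be sketched with Python's standard socket module: the server waits for a request, the client initiates the session, and the server processes and replies. The address, message format, and single-request server are illustrative assumptions kept deliberately minimal.

```python
import socket
import threading

# Server side: bind and listen before any client connects.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen()
port = srv.getsockname()[1]

def serve_one():
    """The server waits for a request from any client, 'processes' it,
    and returns a response (here it just echoes with an OK prefix)."""
    conn, _ = srv.accept()
    with conn:
        request = conn.recv(1024)        # e.g. b"BALANCE alice"
        conn.sendall(b"OK " + request)

threading.Thread(target=serve_one, daemon=True).start()

# Client side: initiate the communication session with a service request.
with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"BALANCE alice")
    reply = cli.recv(1024)
srv.close()
print(reply)  # → b'OK BALANCE alice'
```

The bank example in the text is this same pattern chained: the web server is itself a client of the database server, each hop being one request/response exchange like the one above.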
The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. Two-tier architecture means that the client acts as one tier and the application in combination with the server acts as another tier. These days, clients are most often web browsers, although that has not always been the case. Servers typically include web servers, database servers and mail servers. Online gaming is usually client-server too. In the specific case of MMORPGs, the servers are typically operated by the company selling the game; for other games, one of the players will act as the host by setting his game in server mode. The interaction between client and server is often described using sequence diagrams, which are standardized in the Unified Modeling Language. When both the client and server software are running on the same computer, this is called a single-seat setup.

• Specific types of clients include web browsers, email clients, and online chat clients.
• Specific types of servers include web servers, FTP servers, application servers, database servers, mail servers, file servers, print servers, and terminal servers. Most web services are also types of servers.

Client Server Architecture: Client-server is a computing architecture which separates a client from a server. Each client or server connected to a network can also be referred to as a node. The most basic type of client-server architecture employs only two types of nodes: clients and servers. This type of architecture is sometimes referred to as two-tier. Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same. These days, clients are most often web browsers. Servers typically include web servers, database servers and mail servers. Online gaming is usually client-server too.
Characteristics of a client:
• Known as the request sender
• Initiates requests
• Waits for and receives replies
• Usually connects to a small number of servers at one time
• Typically interacts directly with end-users using a graphical user interface
Characteristics of a server:
• Receives requests sent by clients
• Upon receipt of requests, processes them and then serves replies
• Usually accepts connections from a large number of clients
• Typically does not interact directly with end-users
The following are examples of client/server architectures:
1) Two tier architectures
In two tier client/server architectures, the user interface is placed at the user's desktop environment and the database management system services are usually in a server, a more powerful machine that provides services to the many clients. Information processing is split between the user system interface environment and the database management server environment. The database management server supports stored procedures and triggers. Software vendors provide tools to simplify development of applications for the two tier client/server architecture.
2) Multi-tiered architecture
Some designs are more sophisticated and consist of three different kinds of nodes: clients, application servers which process data for the clients, and database servers which store data for the application servers. This configuration is called three-tier architecture, and is the most commonly used type of client-server architecture. Designs that contain more than two tiers are referred to as multi-tiered or n-tiered.

The advantage of n-tiered architectures is that they are far more scalable, since they balance and distribute the processing load among multiple, often redundant, specialized server nodes. This in turn improves overall system performance and reliability, since more of the processing load can be accommodated simultaneously.



The disadvantages of n-tiered architectures include more load on the network itself, due to a greater amount of network traffic, and they are more difficult to program and test than two-tier architectures, because more devices have to communicate in order to complete a client's request.
Advantages of Client-Server Architecture:-







• In most cases, client-server architecture enables the roles and responsibilities of a computing system to be distributed among several independent computers that are known to each other only through a network. This creates an additional advantage of greater ease of maintenance: for example, it is possible to replace, repair, upgrade, or even relocate a server while its clients remain both unaware of and unaffected by that change. This independence from change is also referred to as encapsulation.
• All the data is stored on the servers, which generally have far greater security controls than most clients. Servers can better control access and resources, to guarantee that only those clients with the appropriate permissions may access and change data.
• Since data storage is centralized, updates to that data are far easier to administer.
• It functions with multiple different clients of different capabilities.
Disadvantages of Client-Server Architecture:-





• Traffic congestion on the network has been an issue since the inception of the client-server paradigm: as the number of simultaneous client requests to a given server increases, the server can become severely overloaded.
• The client-server paradigm lacks robustness: should a critical server fail, clients' requests cannot be fulfilled.
N-TIER CLIENT-SERVER ARCHITECTURE
Description of 1-Tier and 2-Tier Web Applications
This article discusses the various architectures of client-server environments. Perhaps the most influential client-server environment is the Internet and its global users. With the increasing use of web applications, an examination of the best architecture to support web applications is timely. The architectural component of this discussion focuses on the underlying structures and schematics that best build web applications. Specifically, we will be discussing tier architecture, which is the breaking down of an application into logical chunks called tiers. Tiers can exist on the same computer and be connected virtually or logically, or on different machines. The simplest examples of tier architecture are enumerated as 1-tier, 2-tier, and 3-tier.
1-Tier Architecture is the simplest: a single tier for a single user, the equivalent of running an application on a personal computer. All the required components to run the application are located within it; user interface, business logic, and data storage are all on the same machine. 1-tier applications are the easiest to design, but the least scalable, and because they are not part of a network, they are useless for designing web applications.
2-Tier Architectures supply a basic network between a client and a server. For example, the basic web model is a 2-tier architecture: a web browser makes a request from a web server, which then processes the request and returns the desired response, in this case web pages. This approach improves scalability and divides the user interface from the data layers. However, it does not divide application layers so they can be utilized separately, which makes them difficult to update and not specialized: the entire application must be updated because layers aren't separated.
3-Tier Architecture is most commonly used to build web applications.
In this model, the browser acts as a client, middleware or an application server contains the business logic, and database servers handle data functions. This approach separates business logic from display and data, but it does not specialize functional layers. It's fine for prototypical or very simple web applications, but it doesn't measure up to the complexity demanded of modern web applications: the application server is still too broad, with too many functions grouped together, which reduces flexibility and scalability. N-Tier Architectures provide finer granularity, which provides more modules to choose from as the application is separated into smaller functions.
N-Tier and Example
Usually N-Tier Architecture begins as a 3-tier model and is expanded. It provides finer granularity. Granularity is the ability of a system, in this case an application, to be broken down into smaller components or granules: the finer the granularity, the greater the flexibility of a system. It can also be referred to as a system's modularity. Therefore, it refers to the pulling apart of an application into separate layers or finer grains. One of the best examples of N-Tier Architecture in web applications is the popular shopping-cart web application. The client tier interacts with the user through GUIs (Graphical User Interfaces) and with the application and the application server. In web applications, this client tier is a web browser. In addition to initiating the request, the web browser also receives and displays code in dynamic HTML (Hypertext Markup Language), the primary language of the World Wide Web. In a shopping cart web application, the presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the
network. This layer calls custom tags throughout the network and to other networks. It also calls database stored procedures and web services, all with the goal of providing a more sophisticated response. This layer glues the whole application together and allows different nodes to communicate with each other and be displayed to the user through the browser. It is located in the application server.

THREE-TIER ARCHITECTURE
Visual overview of a three-tiered application
'Three-tier' is a client-server architecture in which the user interface, functional process logic ("business rules"), computer data storage and data access are developed and maintained as independent modules, most often on separate platforms. The three-tier model is considered to be a software architecture (the software architecture of a program or computing system is the structure or structures of the system, which comprise software components, the externally visible properties of those components, and the relationships between them; the term also refers to documentation of a system's software architecture, which facilitates communication between stakeholders, documents early decisions about high-level design, and allows reuse of design components and patterns between projects) and a software design pattern.
Apart from the usual advantages of modular software with well-defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code. Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface; functional process logic may consist of one or more separate modules running on a workstation or application server; and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").
The 3-tier architecture has the following three tiers:
• Presentation Tier
This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents.
It communicates with other tiers by outputting results to the browser/client tier and all other tiers in the network.
• Application Tier (Business Logic/Logic Tier)
The logic tier is pulled out from the presentation tier and, as its own layer, controls an application's functionality by performing detailed processing.
• Data Tier
This tier consists of database servers, where information is stored and retrieved. This tier keeps data neutral and independent from application servers or business logic. Giving data its own tier also improves scalability and performance.
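The separation into three tiers can be sketched in miniature using the shopping-cart example: a data tier (an in-memory SQLite database standing in for the database server), a logic tier, and a presentation tier. The table layout, user names, and prices are illustrative assumptions; in a real system each tier would typically run on a separate machine.

```python
import sqlite3

# Data tier: stores and retrieves information, neutral of any business logic.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE cart (user TEXT, item TEXT, price REAL)")
db.execute("INSERT INTO cart VALUES ('alice', 'book', 12.5), ('alice', 'pen', 1.5)")

# Logic tier: performs the detailed processing, pulled out of presentation.
def cart_total(user: str) -> float:
    row = db.execute("SELECT SUM(price) FROM cart WHERE user = ?",
                     (user,)).fetchone()
    return row[0] or 0.0          # SUM is NULL for an empty cart

# Presentation tier: formats the result for the browser/client.
def render_cart(user: str) -> str:
    return f"Cart total for {user}: {cart_total(user):.2f}"

print(render_cart("alice"))  # → Cart total for alice: 14.00
```

Because the tiers only talk through their interfaces (SQL for the data tier, a function call for the logic tier), any one of them can be replaced, for example swapping SQLite for a networked RDBMS, without touching the others.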

INTERNET INFORMATION SERVICES (IIS)
Internet Information Services (IIS), formerly called Internet Information Server, is a set of Internet-based services for servers created by Microsoft for use with Microsoft Windows. It is the world's second most popular web server in terms of overall websites, behind the industry leader Apache HTTP Server; as of November 2008 it served 34.49% of all websites according to Netcraft. The services currently include FTP, SMTP, NNTP, and HTTP/HTTPS.
Versions
• IIS 1.0, Windows NT 3.51, available as a free add-on
• IIS 2.0, Windows NT 4.0
• IIS 3.0, Windows NT 4.0 Service Pack 3
• IIS 4.0, Windows NT 4.0 Option Pack
• IIS 5.0, Windows 2000
• IIS 5.1, Windows XP Professional and Windows MCE
• IIS 6.0, Windows Server 2003 and Windows XP Professional x64 Edition
• IIS 7.0, Windows Server 2008 and Windows Vista
• IIS 7.5, Windows Server 2008 R2 (Beta) and Windows 7 (Beta)

History
The first Microsoft web server was a research project by the European Microsoft Windows NT Academic Centre (EMWAC), part of the University of Edinburgh in Scotland, and was distributed as freeware. However, since the EMWAC server was unable to scale sufficiently to handle the volume of traffic going to microsoft.com, Microsoft was forced to develop its own web server, IIS.
IIS was initially released as an additional set of Internet-based services for Windows NT 3.51. IIS 2.0 followed, adding support for the Windows NT 4.0 operating system, and IIS 3.0 introduced the Active Server Pages dynamic scripting environment. IIS 4.0 dropped support for the Gopher protocol and was bundled with Windows NT as a separate "Option Pack" CD-ROM.
The current shipping version of IIS is 7.0 for Windows Vista and Windows Server 2008, 6.0 for Windows Server 2003 and Windows XP Professional x64 Edition, and 5.1 for Windows XP Professional. Windows XP has a restricted version of IIS 5.1 that supports only 10 simultaneous connections and a single web site. IIS 6.0 added support for IPv6. A FastCGI module is also available for IIS 5.1, IIS 6 and IIS 7.
IIS 7.0 is not installed by Windows Vista by default, but it can be selected from the list of optional components. It is available in all editions of Windows Vista, including Home Basic. IIS 7 on Vista does not limit the number of allowed connections as IIS on XP did, but limits concurrent requests to 10 (Windows Vista Ultimate, Business, and Enterprise Editions) or 3 (Vista Home Premium). Additional requests are queued, which hampers performance, but they are not rejected as with XP, which resulted in the 'server too busy' error message.
Version 7.0
47 Compiled by Mrs. Wamwati Catherine

Kimathi University College Of Technology INTRANET TECHNOLOGY NOTES

Debuting with Windows Vista, and included in Windows Server 2008, IIS 7.0 features a modular architecture. Instead of a monolithic server in which all services are always present, IIS 7 provides a core web server engine to which modules offering specific functionality can be added. The advantage of this architecture is that only the features that are required need be enabled, and functionality can be extended with custom modules. Modules, also called extensions, can be added or removed individually so that only the modules required for specific functionality have to be installed. IIS 7 includes native modules as part of the full installation. These modules are individual features that the server uses to process requests, and include the following:


• HTTP modules – Used to perform tasks specific to HTTP in the request-processing pipeline, such as responding to information and inquiries sent in client headers, returning HTTP errors, and redirecting requests.
• Security modules – Used to perform tasks related to security in the request-processing pipeline, such as specifying authentication schemes, performing URL authorization, and filtering requests.
• Content modules – Used to perform tasks related to content in the request-processing pipeline, such as processing requests for static files, returning a default page when a client does not specify a resource in a request, and listing the contents of a directory.
• Compression modules – Used to perform tasks related to compression in the request-processing pipeline, such as compressing responses, applying Gzip compression transfer coding to responses, and performing pre-compression of static content.
• Caching modules – Used to perform tasks related to caching in the request-processing pipeline, such as storing processed information in memory on the server and using cached content in subsequent requests for the same resource.
• Logging and Diagnostics modules – Used to perform tasks related to logging and diagnostics in the request-processing pipeline, such as passing information and processing status to HTTP.sys for logging, reporting events, and tracking requests currently executing in worker processes.
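Module composition is ultimately driven by configuration: a per-site web.config can trim or extend the pipeline. A minimal illustrative sketch (the custom module name and type are hypothetical; DirectoryListingModule is one of IIS's built-in native modules):

```xml
<!-- Illustrative per-site web.config: trim and extend the module pipeline -->
<configuration>
  <system.webServer>
    <modules>
      <!-- remove a built-in native module this site does not need -->
      <remove name="DirectoryListingModule" />
      <!-- add a hypothetical custom managed module -->
      <add name="MyAuditModule" type="Contoso.Web.MyAuditModule" />
    </modules>
  </system.webServer>
</configuration>
```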
Writing extensions to IIS 7 using ISAPI has been deprecated in favor of the module API, which allows modules to be plugged in anywhere within the request-processing pipeline. Much of IIS's own functionality is built on this API, and developers therefore have far more control over request processing than was possible in prior versions. Modules can be written in C++, or by implementing the HTTP module interface from a .NET Framework language. Modules can be loaded globally, where the services they provide affect all sites, or on a per-site basis.

IIS 7 has an integrated-mode application pool in which .NET modules are loaded into the pipeline using the module API rather than ISAPI. As a result, ASP.NET code can be used with all requests to the server.[14] For applications requiring strict IIS 6.0 compatibility, the Classic application pool mode loads ASP.NET as an ISAPI extension.

A significant change from previous versions of IIS is that all Web server configuration information is stored solely in XML configuration files, instead of in the metabase. The
server has a global configuration file that provides defaults, and each virtual web's document root (and any subdirectory thereof) may contain a web.config whose settings augment or override the defaults. Changes to these files take effect immediately.

This marks a significant departure from previous versions, in which web interfaces or machine-administrator access were required to change even simple settings such as the default document, active modules, and security/authentication. It also eliminates the need to perform metabase synchronization between multiple servers in a web farm.

IIS 7 also features a completely rewritten administration interface that takes advantage of modern MMC features such as task panes and asynchronous operation. Configuration of ASP.NET is more fully integrated into the administrative interface. Other changes:
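As a small illustration of this distributed configuration model, a web.config placed in a virtual web's document root might override the server-wide default document like this (the document name is an example only):

```xml
<!-- Example web.config in a site's document root: overrides the global default document -->
<configuration>
  <system.webServer>
    <defaultDocument enabled="true">
      <files>
        <clear />                  <!-- discard the inherited defaults -->
        <add value="home.aspx" />  <!-- serve home.aspx when no resource is named -->
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```

The change takes effect as soon as the file is saved; no service restart or metabase synchronization is required.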
• PICS content ratings, support for Microsoft Passport, and server-side image maps are no longer included.
• Executing commands via server-side includes is no longer permitted.
• IISRESET -reboot has been removed.
• The CONVLOG tool, which converts IIS log files into NCSA format, has been removed.
• Support for enabling a folder for "Web Sharing" via the Windows Explorer interface has been removed.
• IIS Media Pack (see below) allows IIS to be used as a bare-bones media server, without using Windows Media Services.
• A new FTP module integrates with the new configuration store, as well as the new management environment.

Version 7.5

IIS 7.5 is the latest update to the IIS 7.0 server. This release comes with Windows Server 2008 R2 and Windows 7, and it integrates many formerly separate downloads from Microsoft into the release.

Features

Windows Server 2008 R2, with Internet Information Services 7.5 (IIS 7.5), provides a security-enhanced, easy-to-manage platform for developing and reliably hosting Web applications and services. More than just a Web server, IIS 7.5 is a major enhancement to the Windows Web platform and plays a central role in unifying Microsoft Web platform technologies: ASP.NET, Windows Communication Foundation Web services, and Windows SharePoint Services. The sections below describe the specific scenarios, enhancements, and features.

More Control


Centralized Web Management
Configure and manage your Web infrastructure from one place through a wide selection of administration tools. The new administration utility, IIS 7.5 Manager, is an enhanced tool for managing the Web server. Its updated GUI makes server administration easy with a logically organized, task-based interface, allowing administrators to perform familiar tasks easily while also accessing significant new capabilities.

Database Manager allows you to easily manage your databases within IIS 7.5 Manager, both locally and through remote administration. Database Manager reads the list of connection strings stored in the configuration system for a given object and preloads them in the Database Manager Connections tree view, where they can be expanded and managed.

The Windows PowerShell Provider for IIS 7.5 expands the functionality of Windows PowerShell, allowing IT professionals and hosters to easily automate complex IIS 7.5 administration tasks, increasing the productivity of administrators. Regular tasks such as creating Web sites, enabling request tracing, or performing other routine operations become trivial with this provider.

The consolidated AppCmd utility provides efficient access to IIS 7.5 configuration through the command line, allowing automated server-management tasks such as creating sites or virtual directories without having to write complex code. AppCmd is a powerful management tool for IIS deployments on all versions of Windows Server 2008 R2, and it can be especially useful for managing Server Core installations.

With support for shared configuration, IIS 7.5 allows administrators to create one configuration file that sets the configuration for multiple servers from a single network share. When the configuration file changes, each IIS 7.5 instance recycles only the affected parts of the system.
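For instance, the routine tasks mentioned above can be scripted with AppCmd directly from the command line. A sketch under example values (the site name, paths, and host binding below are placeholders):

```
rem Create a new site listening on port 80 (example values)
appcmd add site /name:ContosoSite /physicalPath:C:\inetpub\contoso /bindings:http/*:80:www.contoso.com

rem Create a virtual directory under the new site
appcmd add vdir /app.name:"ContosoSite/" /path:/images /physicalPath:C:\inetpub\images

rem List all configured sites
appcmd list sites
```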

Administrators can easily move customer sites from one server to another, implement convenient backup plans, and reduce overall downtime.

The ASP.NET API has also been expanded to allow more control over request processing than was previously possible. The ability to plug directly into the server pipeline allows ASP.NET modules to replace, run before, or run after any IIS functionality.

IIS 7.5 includes a new WMI provider that allows WMI developers to take advantage of the new IIS 7.5 features. The WMI provider allows management tasks to be automated with scripts written in .NET or VBScript.



Delegated Remote Management
Delegate site configuration management and publishing to remote users.

Distributed configuration in IIS 7.5 enables those who host or administer Web sites or Windows Communication Foundation (WCF) services to delegate varying levels of administrative control to developers or content owners, helping to reduce the cost of ownership and the administration workload. For example, administrative control of a Web site might be delegated so that the application developer can configure and maintain the default document or other properties used for that Web site.

Administrators can also lock specific configuration settings so that they cannot be changed by anyone else. Locking configuration settings can ensure that a security policy, such as one that prevents script execution, is not overridden by a content developer who has been delegated administrative access to the Web site. The delegation can be very specific, allowing an administrator to decide exactly which functions to delegate, on a case-by-case basis.

When a system administrator delegates a feature, all site or application-level administrators with delegated permissions are able to configure that feature for their site or application. In addition, features can be delegated, but set to Read-only. This setting allows a site or application administrator to view the setting for a feature, but does not allow them to make changes to the setting for that feature.
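Locking of this kind is expressed in the central applicationHost.config file. A hedged sketch (the site name is an example; overrideMode="Deny" is the mechanism that prevents delegated users' web.config files from changing a section):

```xml
<!-- Fragment of applicationHost.config: lock the handlers section for one site -->
<location path="ContosoSite" overrideMode="Deny">
  <system.webServer>
    <handlers>
      <!-- delegated site administrators cannot override these mappings -->
    </handlers>
  </system.webServer>
</location>
```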

IIS 7.5 Manager for Remote Administration provides Web developers and administrators with a firewall-friendly way to remotely manage IIS 7.5 Web servers over a secure Internet connection from Windows Vista, Windows XP, or Windows Server 2003. It offers the same improved interface available on Windows Server 2008 R2 for managing and configuring the Web server, so server administrators can perform almost any task remotely as if they were sitting in front of the server.

Server administrators can use the remote administration feature of IIS 7.5 Manager to add user accounts and to allow site owners and Web application developers to connect to, modify, and view the settings of any sites or applications for which they have been delegated permission. For users of shared hosting services, changes affect only their own site, not other sites or the entire server.



Easy Application Deployment
Archive, package, migrate, and deploy complete applications and Web servers more easily.

The Web Deployment Tool simplifies the migration, management, and deployment of Web applications, sites, and servers. It can be used to synchronize content between IIS 6.0 and IIS 7.5 servers, or to migrate from IIS 6.0 to IIS 7.5. In addition, it can automatically package a Web site, including its content, configuration, certificates, and databases. These packages can be used for versioning, backup, or further deployments.

The Web Deployment Tool allows you to efficiently synchronize sites, applications, or servers across your IIS 7.5 server farm by transferring only those changes that need synchronization. The tool simplifies the process by automatically determining the configuration, content, databases, and certificates to be synchronized for a specific site. In addition to the default behavior, you can specify additional providers for the synchronization, including COM, GAC, and registry settings.

The Web Deployment Tool enables you to package the configuration and content of your installed Web applications, including SQL databases, and use the packages for storage or redeployment. These packages, which may include certificates, can be deployed through the IIS Manager interface without requiring administrative privileges. The tool also integrates with Visual Studio 2010, which, in combination with the powerful packaging capabilities and dependency checking, helps developers streamline the development and deployment of Web applications.

Simplify the planning of your IIS 6.0 to IIS 7.5 migrations by determining incompatibilities and previewing the proposed changes before starting the process.
Learning about any potential issues in advance gives you the chance to take corrective
measures, considerably improving your chances of having a smooth execution of your migration.

More Choice

Modular Web Server
Deploy a streamlined, modular, and extensible Web server.

IIS 7.5 has been redesigned from the ground up around a modular architecture that enables administrators to customize their Web servers by selectively installing or removing modules, so Web servers remain flexible as business needs change. The modular architecture makes it easy to add, remove, and replace any built-in module or third-party module by using the new IIS Manager interface. Developers can customize or extend the IIS 7.5 Web server to introduce new features using native (C/C++) or managed (C#/Visual Basic .NET) code.

Administrators can choose to install a minimal environment with the Server Core installation option of Windows Server 2008 R2. Server Core omits graphical services and most libraries, in favor of a streamlined, command-line driven system. Server Core can be administered locally via the IIS command-line utility AppCmd, or remotely by using WMI. A Server Core installation installs the minimal files needed to provide the required functionality, so less disk space will be used on the server. With a smaller Server Core installation, there are fewer installed components that will need to be updated or patched, and the number of required restarts will be reduced, saving both WAN bandwidth usage by servers and administration time for the IT staff.

Existing ASP, ASP.NET 1.1 and ASP.NET 2.0 applications are expected to run on IIS 7.5 without code changes, using the compatible ISAPI support. This allows IT Professionals to run existing ISAPI tools, and to leverage existing investments. ISAPI filters that rely on
READ RAW DATA notification are not supported in IIS 7.5 but most typically used ISAPI extensions and filters will operate as expected.

Other IIS 7.5 features that give you more choice:

IIS Media Pack
The IIS Media Pack[18] is a set of free add-on modules for delivering digital audio and video files from an Internet Information Services 7.0 (IIS 7) Web server. Delivery from a Web server to media player software is often a progressive download, which allows the end user's media player to start rendering the media file quickly, even while the download is still in progress. Examples of media player software that will work with the IIS Media Pack include Adobe Flash Player, Apple QuickTime Player, RealNetworks RealPlayer, Microsoft Windows Media Player, and Microsoft Silverlight. The IIS Media Pack brings some of the cost-savings and content-control benefits of streaming media servers to Web server delivery of media files.

The first module, Bit Rate Throttling, was released to the general public on March 14, 2008[19]. For media files, Bit Rate Throttling downloads the first few seconds of the file as fast as possible, allowing playback to begin very quickly, and then automatically detects the encoded bit rate of the file and meters out the rest of the download at that bit rate. If an end user stops playback before the end of the file, the server has downloaded only a few more seconds of the file than were actually consumed, reducing bandwidth costs compared to traditional send-and-forget HTTP downloads. Metering the delivery of media files also reduces overall bandwidth and CPU usage on the IIS server, freeing resources to serve a higher number of concurrent users.

The following eleven media file formats are supported by default in the Bit Rate Throttling module: ASF, AVI, FLV, M4V, MOV, MP3, MP4, RM, RMVB, WMA, and WMV. Additional media file formats can be added using the IIS configuration system. Non-media files may also be throttled at a server-administrator-specified delivery rate.
The second module is called Web Playlists, and is now in its second Customer Technology Preview (CTP) release[20]. This feature allows an IIS server administrator to specify a sequenced playback order for a set of media files without exposing the source URLs. Playback order and the ability to limit whether an end user can seek within or skip a file are controlled on the IIS server. The Web Playlists feature can also be used to dynamically generate personalized playlists for users.



Intelligent Media Serving
Optimize bandwidth and set content delivery options through intelligent media serving.

• Bit Rate Throttling is an IIS 7.5 Extension that provides IT professionals and hosters a highly configurable tool to control the speed of delivery for any file based on its type. In particular, it offers an optimal experience for media files as they are requested, while intelligently allocating bandwidth as the content is progressively downloaded. As a result, users can maintain a high-quality experience viewing media while the server administrator controls bandwidth usage based on the user's consumption of the media, rather than on bandwidth availability.
"Users can maintain a high quality experience viewing media while the server administrator controls the bandwidth usage"  Bit Rate Throttling enables you to save bandwidth costs by dynamically adjusting the download rate based on the type of the content being delivered through easily configurable server-side rules. A Fast Start experience for media is guaranteed by sending a few seconds of the content at the highest possible data rate before throttling down the delivery. Bit Rate Throttling allows you to manage the bandwidth allocation on all your concurrent downloads by implementing features that dynamically adjust the bandwidth according to the characteristics of the media files being downloaded. Bit Rate Throttling allows you to add support for other media file formats through its extensible architecture.







Web Playlists is an IIS 7.5 Extension that provides developers and hosters unprecedented control of how media content is delivered to users. Powerful customization features make it possible to monetize media delivery scenarios by inserting advertisement media and dynamically determining the content to be downloaded based on the session history and server-side configurable rules.

"Powerful customization features make it possible to monetize media delivery scenarios by inserting advertisement media"







• Web Playlists allows you to create sets comprising any type of digital media file that can be downloaded from a Web server, using a playlist format based on the W3C Synchronized Multimedia Integration Language (SMIL).
• Web Playlists allows you to control the order and playback of advertising content, preventing end users from skipping or seeking within commercial content, by dynamically determining and controlling the media delivered to the user.
• Web Playlists gives you great control over the content delivered to your end users by offering the ability to dynamically create client-side playlists through custom providers and by leveraging existing ASP, PHP, and other Web applications. In addition to controlling playlist creation, the output of the client-side playlist can be transformed to any XML format by using Extensible Stylesheet Language (XSL).
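A server-side playlist in this SMIL-based format might look roughly like the following sketch (the file names are examples, and the exact attribute names used by the Web Playlists extension may differ from this simplified rendering):

```xml
<!-- Illustrative SMIL-style server-side playlist: an unskippable ad followed by content -->
<smil>
  <body>
    <seq>
      <media src="advertisement.wmv" canSeek="false" canSkip="false" />
      <media src="feature-episode-1.wmv" />
    </seq>
  </body>
</smil>
```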

ASP.NET and PHP Support
Develop and deploy ASP.NET and PHP applications together on a flexible Web platform.

Integrated pipeline
In previous IIS versions, ASP.NET integrated with IIS via an ISAPI extension, exposing its own application and request-processing model. This effectively created two separate server pipelines: one for native ISAPI filters and extension components, and another for managed application components. ASP.NET components would execute entirely inside
the ASP.NET ISAPI extension, and only for requests mapped to ASP.NET in the IIS script-map configuration.

IIS 7.5 introduced a fundamental change from IIS 6.0 in the way managed requests are handled. In IIS 7.5, both native and managed code can be processed by default through a single request pipeline called the integrated pipeline. The integrated pipeline allows different application frameworks to run within a single Web server request pipeline, with built-in ASP.NET extensibility for all applications. Existing ASP.NET features, such as forms-based authentication and URL authorization, can therefore be used for all types of Web content requests in the integrated pipeline.

IIS 7.5 provides an open platform for hosting PHP and ASP.NET applications from a single server, using a common set of administrative tools for management. The Web Platform Installer (Web PI) provides the foundation of the Microsoft Web platform by installing and configuring Microsoft's entire Web platform, including IIS 7.5, Visual Web Developer 2008 Express Edition, SQL Server 2008 Express Edition, and the .NET Framework. Using the Web PI's simple user interface you can select specific components or install the entire Microsoft Web platform onto your computer. To help you stay up to date with product releases, the Web PI always contains the most current versions of, and new additions to, the Microsoft Web platform, including IIS 7.5 RC and RTW extensions for management, request handling, publishing, and media.

Once you have the Microsoft Web platform installed, you can leverage key functionality like IIS's FastCGI support to run FastCGI-compliant languages, like PHP, reliably and with greatly improved performance. The FastCGI protocol enables PHP applications to be hosted on the IIS web server in a high-performance and reliable way. FastCGI provides a high-performance alternative to the Common Gateway Interface (CGI), a standard way of interfacing external applications with Web servers that has been supported as part of the IIS feature set since the very first release.

CGI programs are executables launched by the web server for each request in order to process the request and generate dynamic responses that are sent back to the client.
Because many of these frameworks do not support multi-threaded execution, CGI enables them to execute reliably on IIS by executing exactly one request per process. Unfortunately, it provides poor performance due to the high cost of starting and shutting down a process for each request. FastCGI addresses the performance issues inherent to CGI by providing a mechanism to reuse a single process over and over again for many requests. Additionally, FastCGI maintains compatibility with non-thread-safe libraries by providing a pool of reusable processes and ensuring that each process handles only one request at a time.

To make it easier to administer FastCGI settings, download the IIS 7.5 Administration Pack, which integrates tasks such as adding a PHP file handler into IIS Manager.

The Web Application Installer (Web AI) is designed to help get you up and running with the most widely used Web applications freely available for your Windows server. Web AI provides support for popular ASP.NET and PHP Web applications including Graffiti, DotNetNuke, WordPress, Drupal, osCommerce, and more. With a few simple clicks, Web AI checks your machine for the necessary prerequisites, downloads these applications from their source location in the community, walks you through basic configuration items, and then installs the free community applications on your computer. In addition, Web AI automatically configures FastCGI on IIS 7.5 for use with the community applications installed.
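Registering PHP with the FastCGI module boils down to a single handler mapping in configuration. A minimal sketch, assuming PHP is installed at C:\PHP (the handler name and path are examples):

```xml
<!-- Illustrative web.config handler mapping: route *.php requests through FastCGI -->
<configuration>
  <system.webServer>
    <handlers>
      <add name="PHP-FastCGI" path="*.php" verb="*"
           modules="FastCgiModule"
           scriptProcessor="C:\PHP\php-cgi.exe"
           resourceType="Either" />
    </handlers>
  </system.webServer>
</configuration>
```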

More Reliable


Scalable Web Infrastructure
Implement a scalable Web infrastructure with HTTP-based load balancing and intelligent request handling and routing.

Application Request Routing (ARR) enables hosting providers and Web server administrators to expand application and server availability through powerful rules controlling the routing of incoming HTTP requests. ARR automatically determines the best content server to service each request based on HTTP response header information, server variables, and sophisticated load-balancing algorithms. With ARR, administrators can optimize resource utilization for application servers, reducing management costs for Web farms and shared hosting environments.

ARR lets administrators create, manage, and apply load-balancing rules to server farms in IIS 7.5 Manager. Administrators can then easily add or remove servers from a server farm to increase or decrease available capacity to match demand, without impacting the application's availability.

ARR also includes real-traffic and URL-test monitoring capabilities to determine the health of individual servers and configuration settings. Administrators can view aggregated runtime statistics in IIS 7.5 Manager.

Using URL Rewriter, ARR gives administrators the ability to create powerful routing rules based on HTTP headers and server variables to determine the most appropriate content server for each request.

URL Rewriter is an IIS 7.5 Extension that gives IIS administrators the ability to create powerful rules to implement easy-to-remember URLs for Web site pages, improve search results by making site URLs search-engine-friendly, map static URLs, and enforce a consistent host name for a site. Using rule templates, rewrite maps, and other functionality integrated into IIS Manager, administrators can easily set up rules to define URL rewriting behavior based on HTTP headers and server variables. URL Rewriter integrates seamlessly into IIS 7.5 Manager and supports both user-mode and kernel-mode caching for faster performance, as well as Failed Request Tracing to troubleshoot application logic execution.
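For example, enforcing a consistent host name is typically done with a single rewrite rule in web.config (the host names below are placeholders):

```xml
<!-- Illustrative URL Rewriter rule: permanently redirect example.com to www.example.com -->
<configuration>
  <system.webServer>
    <rewrite>
      <rules>
        <rule name="Enforce canonical host name" stopProcessing="true">
          <match url="(.*)" />
          <conditions>
            <add input="{HTTP_HOST}" pattern="^example\.com$" />
          </conditions>
          <action type="Redirect" url="http://www.example.com/{R:1}"
                  redirectType="Permanent" />
        </rule>
      </rules>
    </rewrite>
  </system.webServer>
</configuration>
```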


Dynamic Caching and Compression
Improve performance by enabling high-speed dynamic caching and compression.
IIS 7.5 includes performance improvements such as powerful compression for dynamic and static content, output caching, and SSL and Windows authentication in kernel mode, including kernel-mode and user-mode caching support for all types of dynamic content. For browsers that are compression-enabled, static compression allows a compressed copy of a static file to be sent to the client, and the improved static compression in IIS 7.5 reduces the processor cost and memory requirements of doing so.

Kernel caching with the HTTP.sys response cache can be one of the most effective means of scaling and improving Web server performance. Cached responses are served from the kernel, which greatly improves response times and increases the number of requests per second that IIS can serve, because requests for cached content never enter IIS user mode.

HTTP compression allows faster transmission of pages between the Web server and compression-enabled clients, makes the best use of available bandwidth, and can significantly increase site performance. When the CPU of your server is not heavily loaded, the simplest compression strategy is to enable static and dynamic compression for all of the sites and site elements (directories and files) on the server; this is known as global HTTP compression. However, when the CPU load of your server is high, you might not want to enable compression for all of the sites and site elements on the server.

If your Web sites use large amounts of bandwidth, or if you want to use bandwidth more effectively, consider enabling HTTP compression, which provides faster transmission times between IIS and compression-enabled browsers regardless of whether your content is served from local storage or a UNC resource. If your network bandwidth is restricted, HTTP compression can be beneficial unless your processor usage is already very high.

You can change the level of compression for static or dynamic files. Higher compression levels produce smaller compressed files but use more CPU and memory; lower compression levels produce slightly larger compressed files with less impact on CPU and memory usage.
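The compression strategy described above is controlled in configuration. A hedged sketch of the relevant sections (the compression levels shown are example values to be tuned against your own CPU budget):

```xml
<!-- Illustrative server-level compression settings -->
<configuration>
  <system.webServer>
    <!-- turn on static and dynamic compression globally -->
    <urlCompression doStaticCompression="true" doDynamicCompression="true" />
    <httpCompression>
      <!-- higher levels: smaller responses, more CPU; lower levels: the reverse -->
      <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll"
              staticCompressionLevel="7" dynamicCompressionLevel="4" />
    </httpCompression>
  </system.webServer>
</configuration>
```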


Powerful Diagnostic Tools

Find and fix issues quickly and easily with powerful diagnostic tools.

IIS 7.5 simplifies troubleshooting by providing detailed and actionable error messages to server administrators. The new custom errors module in IIS 7.5 allows detailed error information to be sent back to a browser running on the local host server. Instead of seeing a bare error code, administrators can now see detailed information about the request, the potential issues that may have caused the error, and suggestions about
how to fix the error. The custom error information is only displayed to the server administrator; users of the Web site see the standard Web application error pages without the detailed information for server administrators.

IIS 7.5 makes it possible to troubleshoot failures without having to manually reproduce them. The Failed Request Tracing feature enables server administrators to define the error conditions that they wish to monitor, so trace logs are captured automatically for a pre-configured failure condition while avoiding the performance penalty of saving logs for all requests. With Failed Request Tracing, administrators can capture valuable tracing information when errors occur, even if they are intermittent or hard to reproduce. If this feature is configured and IIS 7.5 detects an error condition, it automatically logs detailed trace events for everything that led to the error.

In addition, developers can instrument their application code with ASP.NET trace events, and Failed Request Tracing will include that trace event information in the Failed Request Trace reports for a centralized troubleshooting experience. Failed Request Tracing helps any Web administrator, including Web hosters who manage many sites: a hoster can use Failed Request Tracing for a single site or for multiple sites to monitor for errors.
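A failure condition of this kind is declared per site in web.config. As a minimal sketch (the path, status code, and trace areas are example values):

```xml
<!-- Illustrative Failed Request Tracing rule: trace *.aspx requests that return 500 -->
<configuration>
  <system.webServer>
    <tracing>
      <traceFailedRequests>
        <add path="*.aspx">
          <traceAreas>
            <add provider="ASPNET" areas="Infrastructure,Module,Page,AppServices"
                 verbosity="Verbose" />
          </traceAreas>
          <failureDefinitions statusCodes="500" />
        </add>
      </traceFailedRequests>
    </tracing>
  </system.webServer>
</configuration>
```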

The underlying tracing infrastructure is exposed to IIS modules using the server extensibility model, allowing all IIS Extensions, whether they ship with IIS or are developed by third parties, to relay detailed tracing information during request processing. This allows developers and system administrators to create custom managed modules to take advantage of the unified tracing model. Developers can now write tracing modules that provide new ways to process and output traces, like a module to save IIS tracing information to a Microsoft SQL Server or to a text file.

More Secure


Improved Server Protection

Maximize Web site security through reduced server footprint and automatic application isolation.

Administrators can depend on IIS 7.5 for more secure hosting of Web applications. IIS 7.5 has been redesigned from the ground up to incorporate a modular architecture that enables administrators to customize their Web servers by selectively installing or removing modules. Administrators can install only the options that address the needs of the business while eliminating the server performance reductions and security risks that come with running unused server functionality. The modular architecture allows administrators to install only the smallest set of components needed. Administrators can easily minimize the attack and servicing surface, as well as shrink the process memory footprint. Only the modules required to run IIS as a static image server are installed by default in IIS 7.5. The default installation allows the IT administrator to start from the most secure base, adding on modules only as needed by the applications and services hosted on the Web server.

To further limit security exposure, administrators can choose to install a minimal environment with the Server Core installation option of Windows Server 2008 R2. Server Core omits graphical services and most libraries in favor of a stripped-down, command-line-driven system. Server Core can be administered locally via the IIS command-line utility AppCmd, or remotely by using WMI. Because Server Core has a select number of roles, it can improve security and reduce the footprint of the operating system. With fewer files installed and running on the server, there are fewer attack vectors exposed to the network; therefore, there is less of an attack surface. Administrators can install just the specific services needed for a given server, keeping the exposure risk to an absolute minimum.

IIS 7.5 offers greater application isolation by giving worker processes a completely unique identity and sandboxed configuration by default, further reducing security risks. IIS 7.5 includes automatic application pool isolation and can sandbox thousands of Web sites on a single server. This allows each Web site to run in its own memory space with its own credentials, which helps to ensure that applications are not affected by the failures or security breaches of other applications running on the same server. This capability enables organizations to consolidate more Web sites onto fewer servers, and increases security and reliability for all Web sites running on a shared host.



Secure Content Publishing

Publish Web content more securely using standards-based protocols.

The FTP Publishing Service for IIS 7.5 allows Web content creators to publish content more easily and securely to IIS 7.5 Web servers using modern Internet publishing standards. New features like membership-based authentication and enhanced logging give administrators a rich management and diagnostic experience for FTP sites. Built as an extension for IIS 7.5, the new FTP service offers Web administrators and hosters an integrated management and configuration experience for FTP and Web sites through IIS Manager. The deep integration allows administrators to use IIS configuration management scripting tools such as AppCmd and the IIS PowerShell Provider to manage FTP configuration.

FTP for IIS 7.5 integrates seamlessly with IIS 7.5 Manager to enable secure publishing of content using FTP over SSL (FTPS), with support for Internet standards such as UTF-8 and IPv6. FTP for IIS 7.5 allows users to enable FTP for an existing Web site, instead of creating separate FTP and Web sites to host the same content. FTP for IIS 7.5 also allows hosting multiple FTP sites on the same IP address through virtual host name support. FTP for IIS 7.5 removes the need to create Windows user accounts on the server to enable FTP publishing by allowing authentication using IIS Manager user accounts and .NET Membership. FTP for IIS 7.5 also provides enhanced logging that records all FTP traffic to help track FTP activity and diagnose potential issues.

The WebDAV Extension for IIS 7.5 is a new module written specifically for Windows Server 2008 R2 that enables Web authors to publish content more easily and securely than before, and offers Web administrators and hosters better integration, configuration and authorization features.

WebDAV for IIS 7.5 integrates seamlessly with the new IIS 7.5 Manager console and allows more secure publishing of content using HTTP over SSL. WebDAV for IIS 7.5 can be enabled at the site level, unlike in IIS 6.0, which enabled WebDAV at the server-level through a Web Service Extension. WebDAV for IIS 7.5 supports per-URL authoring rules,
allowing administrators to specify custom WebDAV security settings on a per-URL basis with one set of security settings for normal HTTP requests and a separate set of security settings for WebDAV authoring. WebDAV conforms to the HTTP Extensions for Distributed Authoring standard.
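Conceptually, a per-URL authoring rule is just the most specific rule that matches a request path. The sketch below illustrates that lookup in Python; the rule table and setting names are invented for the example, and IIS stores the real settings in its own configuration, not like this:

```python
# Hypothetical rule table: per-URL settings, most specific prefix wins.
authoring_rules = {
    "/":           {"webdav_write": False},   # read-only by default
    "/marketing/": {"webdav_write": True},    # authors may publish here
}

def settings_for(url):
    # Pick the longest rule prefix that matches the request URL.
    match = max((p for p in authoring_rules if url.startswith(p)), key=len)
    return authoring_rules[match]

print(settings_for("/marketing/brochure.docx"))  # write allowed for authors
print(settings_for("/index.html"))               # falls back to the "/" rule
```

The point of the separation described above is that the rule consulted for a WebDAV authoring request can differ from the one consulted for a normal HTTP GET of the same URL.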


Unauthorized Access Protection

IIS 5.0 and higher support the following authentication mechanisms:

Basic access authentication

In the context of an HTTP transaction, basic access authentication is a method that allows a web browser, or other client program, to provide credentials – in the form of a user name and password – when making a request.

Before transmission, the user name is appended with a colon and concatenated with the password. The resulting string is encoded with the Base64 algorithm. For example, given the user name Aladdin and the password open sesame, the string Aladdin:open sesame is Base64-encoded, resulting in QWxhZGRpbjpvcGVuIHNlc2FtZQ==. The Base64-encoded string is transmitted and decoded by the receiver, yielding the colon-separated user name and password. While Base64 encoding typically makes the credentials unreadable to the naked eye, they are as easily decoded as they are encoded; security is not the intent of the encoding step. Rather, the encoding maps characters in the user name or password that are not HTTP-compatible into characters that are.

Basic access authentication was originally defined by RFC 1945 (Hypertext Transfer Protocol – HTTP/1.0); further information regarding security issues may be found in RFC 2616 (Hypertext Transfer Protocol – HTTP/1.1) and RFC 2617 (HTTP Authentication: Basic and Digest Access Authentication).

Advantages

One advantage of basic access authentication is that it is supported by all popular web browsers. It is rarely used on publicly accessible Internet web sites but may sometimes be used by small, private systems. A later mechanism, digest access authentication, was developed to replace basic access authentication and enable credentials to be passed in a relatively secure manner over an otherwise insecure channel.
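The Base64 step described above can be reproduced with Python's standard library; this is purely illustrative:

```python
import base64

# Reproduce the example above: user name "Aladdin", password "open sesame".
credentials = "Aladdin:open sesame"
token = base64.b64encode(credentials.encode("utf-8")).decode("ascii")
print(token)      # QWxhZGRpbjpvcGVuIHNlc2FtZQ==

# The header a client would send:
header = "Authorization: Basic " + token

# Decoding is just as easy, which is why Basic auth needs SSL/TLS:
decoded = base64.b64decode(token).decode("utf-8")
print(decoded)    # Aladdin:open sesame
```

Note how the round trip recovers the credentials exactly – Base64 is an encoding, not encryption.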
Programmers and system administrators sometimes use basic access authentication, in a trusted network environment, to manually test web servers using Telnet or other plaintext network tools. This is a cumbersome process, but the network traffic is human-readable for diagnostic purposes.

Disadvantages

Although the scheme is easily implemented, it relies on the assumption that the connection between the client and server computers is secure and can be trusted. Specifically, if SSL/TLS is not used, the credentials are passed as plaintext and could
be intercepted easily. The scheme also provides no protection for the information passed back from the server. Existing browsers retain authentication information until the tab or browser is closed or the user clears the history. HTTP does not provide a method for a server to direct clients to discard these cached credentials, which means there is no effective way for a server to "log out" the user without closing the browser. This is a significant defect that requires browser manufacturers to support a 'logout' user interface element or API available to JavaScript, further extensions to HTTP, or use of existing alternative techniques such as retrieving the page over SSL/TLS with an unguessable string in the URL.

Example

Here is a typical transaction between an HTTP client and an HTTP server running on the local machine (localhost). It comprises the following steps:

• The client asks for a page that requires authentication but does not provide a user name and password, typically because the user simply entered the address or followed a link to the page.
• The server responds with the 401 response code and provides the authentication realm. At this point, the client will present the authentication realm (typically a description of the computer or system being accessed) to the user and prompt for a user name and password. The user may decide to cancel at this point.
• Once a user name and password have been supplied, the client adds an authentication header (with value base64encode(username + ":" + password)) to the original request and re-sends it.
• In this example, the server accepts the authentication and the page is returned. If the user name is invalid or the password incorrect, the server might return the 401 response code and the client would prompt the user again.

Note: A client may pre-emptively send the authentication header in its first request, with no user interaction required.

Digest access authentication

HTTP digest access authentication is one of the agreed methods a web server can use to negotiate credentials with a web user (using the HTTP protocol). Digest authentication is intended to supersede unencrypted use of basic access authentication, allowing user identity to be established securely without having to send a password in plaintext over the network. Digest authentication is basically an application of MD5 cryptographic hashing with usage of nonce values to prevent cryptanalysis.
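The hashing can be sketched in Python using the worked example values from RFC 2617, section 3.5 (the helper name md5_hex is ours, not from the RFC):

```python
import hashlib

def md5_hex(s):
    """Hex MD5 digest of a string."""
    return hashlib.md5(s.encode("utf-8")).hexdigest()

# Worked example values from RFC 2617, section 3.5.
username, realm, password = "Mufasa", "testrealm@host.com", "Circle Of Life"
method, uri = "GET", "/dir/index.html"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"   # supplied by the server
cnonce, nc, qop = "0a4f113b", "00000001", "auth"

ha1 = md5_hex(f"{username}:{realm}:{password}")
ha2 = md5_hex(f"{method}:{uri}")
# The password itself never travels; only this hash does.
response = md5_hex(f"{ha1}:{nonce}:{nc}:{cnonce}:{qop}:{ha2}")
print(response)   # 6629fae49393a05397450978507c4ef1
```

Because the server-chosen nonce is folded into the hash, a captured response cannot simply be replayed later, which is the cryptanalysis protection the text refers to.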

Integrated Windows Authentication

Integrated Windows Authentication (IWA) is a term associated with Microsoft products that refers to the SPNEGO, Kerberos, and NTLMSSP authentication protocols with respect to SSPI functionality introduced with
Microsoft Windows 2000 and included with later Windows NT-based operating systems. The term is used more commonly for the automatically authenticated connections between Microsoft Internet Information Services, Internet Explorer, and other Active Directory-aware applications. IWA is also known by several names, such as HTTP Negotiate authentication, NT Authentication, NTLM Authentication, Domain authentication, Windows Integrated Authentication, Windows NT Challenge/Response authentication, or simply Windows Authentication.

Overview

Integrated Windows Authentication uses the security features of Windows clients and servers. Unlike Basic or Digest authentication, it does not initially prompt users for a user name and password. The current Windows user information on the client computer is supplied by the browser through a cryptographic exchange involving hashing with the Web server. If the authentication exchange initially fails to identify the user, the browser will prompt the user for a Windows user account user name and password.

Integrated Windows Authentication itself is not a standard or an authentication protocol. When IWA is selected as an option of a program (e.g. within the Directory Security tab of the IIS site properties dialog), this implies that underlying security mechanisms should be used in a preferential order. If the Kerberos provider is functional, a Kerberos ticket can be obtained for the target, and any associated settings permit Kerberos authentication to occur (e.g. Intranet sites settings in Internet Explorer), the Kerberos 5 protocol will be attempted. Otherwise, NTLMSSP authentication is attempted. Similarly, if Kerberos authentication is attempted yet fails, NTLMSSP is attempted. IWA uses SPNEGO to allow initiators and acceptors to negotiate either Kerberos or NTLMSSP. Third-party utilities have extended the Integrated Windows Authentication paradigm to UNIX, Linux and Mac systems.
For technical information regarding the protocols behind IWA, see the articles for SPNEGO, Kerberos, NTLMSSP, NTLM, SSPI, and GSSAPI.

Supported browsers

Integrated Windows Authentication works with most modern browsers, but does not work over HTTP proxy servers. Therefore, it is best for use in intranets where all the clients are within a single domain. It may work with other Web browsers if they have been configured to pass the user's logon credentials to the server that is requesting authentication. In Mozilla Firefox on Windows operating systems, the names of the domains/websites to which the authentication is to be passed can be entered (comma-delimited for multiple domains) in the "network.negotiate-auth.trusted-uris" (for Kerberos) or "network.automatic-ntlm-auth.trusted-uris" (for NTLM) preference on the about:config page. On Macintosh operating systems this no longer works as of version 3 of Firefox. Some websites may also require configuring "network.negotiate-auth.delegation-uris". Opera 9.01 and later versions can use NTLM/Negotiate, but will use Basic or Digest authentication if that is offered by the server. Chrome asks the user to enter credentials, even for Kerberos.
Safari works once you have a Kerberos ticket.

Other uses

Windows Authentication is not limited to web technology but is commonly used between all software running on Windows, such as service programs and Microsoft SQL Server. File sharing permissions can also use Windows Authentication when integrated with Microsoft Active Directory: this way a user only needs to give login credentials once on a PC and has access to shared files over the network with suitable permissions.

.NET Passport Authentication (not supported in Windows Server 2008 and above)

Windows Live ID (originally Microsoft Wallet, then Microsoft Passport, .NET Passport, and briefly Microsoft Passport Network) is a single sign-on service developed and provided by Microsoft that allows users to log in to many websites using one account. The service is commonly referred to as "MSN", because many services incorporating the Passport/Live ID are or were previously branded with the MSN brand.

Product overview

Most of the web sites and applications that use Windows Live ID are Microsoft sites, services, and properties such as Hotmail, MSNBC, MSN, Xbox 360's Xbox Live, the .NET Messenger Service, Zune or MSN subscriptions, but several other companies affiliated with Microsoft also use it, such as Hoyts. Users of Hotmail or MSN automatically have a Windows Live ID that corresponds to their accounts. More recently, user login data has started to allow demographic targeting by advertisers using Microsoft adCenter. Microsoft's Windows XP has an option to link a Windows user account with a Windows Live ID (appearing with its former names), logging users into Windows Live ID whenever they log into Windows.

Windows Live ID Web Authentication

On August 15, 2007, Microsoft released the Windows Live ID Web Authentication SDK, enabling web developers to integrate Windows Live ID into their websites running on a broad range of web server platforms, including ASP.NET (C#), Java, Perl, PHP, Python and Ruby.

Windows Live ID support for Windows CardSpace

The Windows Live ID login page presents users with the existing option to sign in with the usual Windows Live ID username/password credentials, or the alternative: to sign in using Windows CardSpace. Windows Live ID account owners can enable integration with Windows CardSpace (a component of the .NET Framework versions 3.0 and 3.5) by selecting an Information Card from the Windows CardSpace selector UI to link to their Windows Live ID. This CardSpace identity then becomes the alternate login credentials for that account, replacing the need for a password.

Windows Live ID support for OpenID

On October 27, 2008, Microsoft announced that it was publicly committed to supporting the OpenID framework, with Windows Live ID becoming an OpenID provider. This allows users to use their Windows Live ID to sign in to any website that supports OpenID authentication.

IIS 7.5 includes the following additional security features:

• Client Certificate Mapping
• IP Security
• Request Filtering
• URL Authorization

Authentication changed slightly between IIS 6.0 and IIS 7, most notably in that the anonymous user, which was named "IUSR_{machine-name}", is now a built-in account named "IUSR" in Vista and later operating systems. Notably, in IIS 7, each authentication mechanism is isolated into its own module and can be installed or uninstalled.

IIS Extensions

IIS releases new feature modules between major version releases to add new functionality. The following extensions are available for IIS 7:


• FTP Publishing Service – Lets Web content creators publish content securely to IIS 7 Web servers with SSL-based authentication and data transfer.
• Administration Pack – Adds administration UI support for management features in IIS 7, including ASP.NET authorization, custom errors, FastCGI configuration, and request filtering.
• Application Request Routing – Provides a proxy-based routing module that forwards HTTP requests to content servers based on HTTP headers, server variables, and load balance algorithms.
• Database Manager – Allows easy management of local and remote databases from within IIS Manager.
• Media Services – Integrates a media delivery platform with IIS to manage and administer delivery of rich media and other Web content.
• URL Rewrite Module – Provides a rule-based rewriting mechanism for changing request URLs before they are processed by the Web server.
• WebDAV – Lets Web authors publish content securely to IIS 7 Web servers, and lets Web administrators and hosters manage WebDAV settings using IIS 7 management and configuration tools.
• Web Deployment Tool – Synchronizes IIS 6.0 and IIS 7 servers, migrates an IIS 6.0 server to IIS 7, and deploys Web applications to an IIS 7 server.
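The rule-based rewriting idea behind the URL Rewrite Module can be illustrated in a few lines of Python. The patterns and replacement URLs below are invented examples, and the real module expresses its rules in IIS configuration rather than code:

```python
import re

# Hypothetical rewrite rules: each is a (pattern, replacement) pair
# applied to the request URL before the server handles it.
rules = [
    (re.compile(r"^/article/(\d+)$"), r"/article.aspx?id=\1"),
    (re.compile(r"^/about$"), r"/about.html"),
]

def rewrite(url):
    for pattern, replacement in rules:
        new_url, count = pattern.subn(replacement, url)
        if count:
            return new_url   # first matching rule wins
    return url               # no rule matched; pass through unchanged

print(rewrite("/article/42"))   # /article.aspx?id=42
print(rewrite("/contact"))      # /contact
```

The benefit the module provides is the same as in this sketch: visitors see friendly URLs while the server maps them onto the URLs the application actually handles.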

WEB BROWSER
A Web browser is a software application which enables a user to display and interact with text, images, videos, music, games and other information typically located on a Web page at a Web site on the World Wide Web or a local area network. Text and images on a Web page can contain hyperlinks to other Web pages at the same or a different Web site. Web browsers allow a user to quickly and easily access information provided on many Web pages at many Web sites by traversing these links. Web browsers format HTML information for display, so the appearance of a Web page may differ between browsers.

Web browsers are the most commonly used type of HTTP user agent. Although browsers are typically used to access the World Wide Web, they can also be used to access information provided by Web servers in private networks or content in file systems.

History

The history of the Web browser dates back to the late 1980s, when a variety of technologies laid the foundation for the first Web browser, WorldWideWeb, created by Tim Berners-Lee in 1991. That browser brought together a variety of existing and new software and hardware technologies. Over the following years, Web browsers were introduced by companies like Netscape, Microsoft, Apple, Mozilla, and Opera. More recently, Google entered the browser market.

Current Web browsers

Some of the Web browsers currently available for personal computers include Internet Explorer, Mozilla Firefox, Safari, Opera, Avant Browser, Konqueror, Lynx, Google Chrome, Maxthon, Flock, Arachne, Epiphany, K-Meleon and AOL Explorer.

Protocols and standards

Web browsers communicate with Web servers primarily using Hypertext Transfer Protocol (HTTP) to fetch Web pages. HTTP allows Web browsers to submit information to Web servers as well as fetch Web pages from them. The most commonly used version of HTTP is HTTP/1.1, which is fully defined in RFC 2616.
HTTP/1.1 has its own required standards that Internet Explorer does not fully support, but most other current-generation Web browsers do. Pages are located by means of a URL (Uniform Resource Locator, RFC 1738), which is treated as an address, beginning with http: for HTTP transmission. Many browsers also support a variety of other URL types and their corresponding protocols, such as gopher: for Gopher (a hierarchical hyperlinking protocol), ftp: for File Transfer Protocol (FTP), rtsp: for Real-time Streaming Protocol (RTSP), and https: for HTTPS (HTTP Secure, which is HTTP augmented by Secure Sockets Layer or Transport Layer Security).

The file format for a Web page is usually HTML (Hypertext Markup Language) and is identified in the HTTP protocol using a MIME content type. Most browsers natively support a variety of formats in addition to HTML, such as the JPEG, PNG and GIF image formats, and can be extended to support more through the use of plug-ins. The combination of HTTP content type and URL protocol specification allows Web-page
designers to embed images, animations, video, sound, and streaming media into a Web page, or to make them accessible through the Web page.

Early Web browsers supported only a very simple version of HTML. The rapid development of proprietary Web browsers led to the development of non-standard dialects of HTML, leading to problems with Web interoperability. Modern Web browsers support a combination of standards-based and de facto HTML and XHTML, which should be rendered in the same way by all browsers. No browser fully supports HTML 4.01, XHTML 1.x or CSS 2.1 yet. Many sites are designed using WYSIWYG HTML-generation programs such as Adobe Dreamweaver or Microsoft FrontPage. Microsoft FrontPage often generates non-standard HTML by default, hindering the work of the W3C in promulgating standards, specifically with XHTML and Cascading Style Sheets (CSS), which are used for page layout. Dreamweaver and more modern Microsoft HTML development tools such as Microsoft Expression Web and Microsoft Visual Studio conform to the W3C standards.

Some of the more popular browsers include additional components to support Usenet news, Internet Relay Chat (IRC), and e-mail. Protocols supported may include Network News Transfer Protocol (NNTP), Simple Mail Transfer Protocol (SMTP), Internet Message Access Protocol (IMAP), and Post Office Protocol (POP). These browsers are often referred to as "Internet suites" or "application suites" rather than merely Web browsers.
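The way a browser decomposes a URL into its scheme, host and path before choosing a protocol can be seen with Python's standard urllib.parse module (the example URLs are illustrative):

```python
from urllib.parse import urlparse

# Each URL's scheme tells the browser which protocol to use for the fetch.
urls = [
    "http://example.com/index.html",
    "ftp://example.com/pub/file.txt",
    "https://example.com:443/secure",
]
parsed = [urlparse(u) for u in urls]
for p in parsed:
    print(p.scheme, p.netloc, p.path)
```

For the https: URL, the parser also exposes the explicit port (443), the default for HTTP over SSL/TLS.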


INTERNET EXPLORER

Windows Internet Explorer (formerly Microsoft Internet Explorer; abbreviated MSIE), commonly abbreviated to IE, is a series of graphical web browsers developed by Microsoft and included as part of the Microsoft Windows line of operating systems starting in 1995. It has been the most widely used web browser since 1999, attaining a peak of about 95% usage share during 2002 and 2003 with IE5 and IE6; that share has since declined in the face of renewed competition from other web browser developers. Microsoft spent over $100 million a year on Internet Explorer in the late 1990s, with over 1,000 people working on it by 1999. The most recent stable release is Internet Explorer 7, which is available as a free update for Windows XP Service Pack 2 and Windows Server 2003 with Service Pack 1 or later, and is included with Windows Vista and Windows Server 2008. A public release candidate of Internet Explorer 8 was released in January 2009.

History

The Internet Explorer project was started in the summer of 1994 by Thomas Reardon and subsequently led by Benjamin Slivka, leveraging source code from Spyglass, Inc.'s Mosaic, an early commercial web browser with formal ties to the pioneering NCSA Mosaic browser. In late 1994, Microsoft licensed Spyglass Mosaic for a quarterly fee plus a percentage of Microsoft's non-Windows revenues for the software. Although bearing a name similar to NCSA Mosaic, Spyglass Mosaic had used the NCSA Mosaic source code sparingly.

Internet Explorer was first released as part of the add-on package Plus! for Windows 95 in 1995. Later versions were available as free downloads, or in service packs, and included in the OEM service releases of Windows 95 and later versions of Windows. Other versions available since the late 1990s include an embedded OEM version called Internet Explorer for Windows CE (IE CE), which is available for WinCE-based platforms and currently based on IE6. Internet Explorer for Pocket PC, later rebranded Internet Explorer Mobile for Windows Mobile, was also developed, and remains in development alongside the more advanced desktop versions.

Features

Internet Explorer has been designed to view a broad range of web pages and to provide certain features within the operating system, including Microsoft Update. During the heyday of the historic browser wars, Internet Explorer superseded Netscape only when it caught up technologically to support the progressive features of the time.

Standards support
Internet Explorer, using the Trident layout engine:

• Fully supports HTML 4.01, CSS Level 1, XML 1.0 and DOM Level 1, with minor implementation gaps.
• Fully supports XSLT 1.0 as well as an obsolete Microsoft dialect of XSLT often referred to as WD-xsl, which was loosely based on the December 1998 W3C Working Draft of XSL. Support for XSLT 2.0 lies in the future: semi-official Microsoft bloggers have indicated that development is underway, but no dates have been announced.
• Partially supports CSS Level 2 and DOM Level 2, with major implementation gaps and conformance issues. Full conformance to the CSS 2.1 specification is on the agenda for the final Internet Explorer 8 release.
• Does not support XHTML, though it can render XHTML documents authored with HTML compatibility principles and served with a text/html MIME type.
• Does not support SVG, neither in the current version 7.0 nor in the upcoming 8.0 version.

Internet Explorer uses DOCTYPE sniffing to choose between "quirks mode" (renders similarly to older versions of MSIE) and standards mode (renders closer to W3C's specifications) for HTML and CSS rendering on screen (Internet Explorer always uses standards mode for printing). It also provides its own dialect of ECMAScript called JScript. Internet Explorer has been subjected to criticism over its limited support for open web standards.

Standards extensions

Internet Explorer has introduced an array of proprietary extensions to many of the standards, including HTML, CSS and the DOM. This has resulted in a number of web pages that can only be viewed properly using Internet Explorer.

Internet Explorer has introduced a number of extensions to JScript which have been adopted by other browsers. These include the innerHTML property, which returns the HTML string within an element; the XMLHttpRequest object, which allows the sending of HTTP requests and receiving of HTTP responses; and the designMode attribute of the contentDocument object, which enables rich text editing of HTML documents. Some of these functionalities were not possible until the introduction of the W3C DOM methods. Its Ruby character extension to HTML is also accepted as a module in W3C XHTML 1.1, though it is not found in all versions of W3C HTML.

Microsoft submitted several other features of IE for consideration by the W3C for standardization. These include the 'behavior' CSS property, which connects HTML elements with JScript behaviors (known as HTML Components, HTC); the HTML+TIME profile, which adds timing and media synchronization support to HTML documents (similar to the W3C XHTML+SMIL); and the VML vector graphics file format. However, all were rejected, at least in their original forms. VML was subsequently combined with PGML (proposed by Adobe and Sun), resulting in the W3C-approved SVG format, currently one of the few vector image formats being used on the web, and which IE is now virtually unique in not supporting.

Other proprietary extensions include: support for vertical text, but in a syntax different from the W3C CSS3 candidate recommendation; support for a variety of image effects and page transitions, which are not found in W3C CSS; support for obfuscated script code, in particular JScript.Encode(); and support for embedding EOT fonts in web pages.

Favicon

The favicon (short for "favorites icon") introduced by Internet Explorer is now also supported and extended in other browsers. It allows web pages to specify a 16-by-16 pixel image for use in bookmarks. Originally, support was provided only for the native Windows ICO format, but it has now been extended to other image types such as PNG and GIF.
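A minimal sketch of how a browser might discover a favicon declared in a page's markup, using Python's standard html.parser (real browsers also fall back to requesting /favicon.ico from the site root; the class name and sample page are invented for the example):

```python
from html.parser import HTMLParser

class FaviconFinder(HTMLParser):
    """Scan a page's <link> elements for a declared favicon."""
    def __init__(self):
        super().__init__()
        self.favicon = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Matches rel="icon" and the older rel="shortcut icon" form.
        if tag == "link" and "icon" in (a.get("rel") or "").lower():
            self.favicon = a.get("href")

page = '<html><head><link rel="shortcut icon" href="/favicon.png"></head></html>'
finder = FaviconFinder()
finder.feed(page)
print(finder.favicon)   # /favicon.png
```

This is how non-ICO formats such as PNG became usable: the link element names the image explicitly instead of relying on the fixed /favicon.ico path.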

INTERNET EXPLORER 7 Everyday Tasks Made Easier

• •

Streamlined interface-A redesigned, streamlined interface maximizes the area of the screen that displays a webpage, so you see more of what you need and less of what you don't. Advanced printing-Internet Explorer 7 automatically scales a webpage for printing, so the entire webpage fits on your printed page. Print options also include adjustable margins, customizable page layouts, removable headers and footers, and an option to print only selected text. Instant Search box-Web searches using your favorite search provider can now be entered into a search box within the toolbar, eliminating the clutter of separate toolbars. You can easily choose a provider from the dropdown list or add more providers. Favourites Center-Get quick and easy access to your Favorites, Tab Groups, Browsing History and RSS Feed subscriptions. Your Favorites Center expands when needed and can be anchored in place for even easier access. RSS feeds-Internet Explorer 7 automatically detects RSS feeds on sites and illuminates an icon on the toolbar. A single click on the icon allows you to preview and subscribe to the RSS feed if you want - so you’re automatically notified as content is updated. Read RSS feeds directly in the browser, scan for important stories and filter your view with search terms or site-specific categories. Tabbed browsing-View multiple sites in a single browser window. Easily switch from one site to another through tabs at the top of the browser frame. Quick Tabs-Easily select and navigate through open tabs by displaying thumbnails of them all in a single window. Tab Groups-Tabs can be grouped and saved into logical categories, so you can open multiple tabs with a single click. A Tab Group can easily be set as the Home Page Group so the entire Tab Group opens every time Internet Explorer is launched. Page zoom-Enlarge individual web pages, including both text and graphics, to either focus on specific content or to make content more accessible to those with vision limitations.







75 Compiled by Mrs. Wamwati Catherine

Kimathi University College Of Technology INTRANET TECHNOLOGY NOTES

HERE ARE THE NEW FEATURES OF IE7 OVER ITS PREDECESSORS.





















• ActiveX Opt-in: Disables nearly all pre-installed ActiveX controls to prevent potentially vulnerable controls from being exposed to attack. You can easily enable or disable ActiveX controls as needed through the Information Bar and the Add-on Manager.
• Simplified User Experience: The Internet Explorer 7 frame is reorganized to make it noticeably simpler, more streamlined, and less cluttered with unnecessary items. This maximizes the area of the screen devoted to the webpages that you want to see and makes performing the most common browsing tasks easy.
• CSS Improvements: Addresses many of the major inconsistencies that can cause web developers problems when producing visually rich, interactive webpages. Improved support for CSS 2.1, including selectors and fixed positioning, allows web developers to create more powerful effects without the use of script.
• Security Status Bar: Enhances awareness of website security and privacy settings by displaying color-coded notifications next to the address bar. Internet Explorer 7 changes the Address Bar green for websites bearing new High Assurance certificates, indicating the site owner has completed extensive identity verification checks. Phishing Filter notifications, certificate names, and the gold padlock icon are now also adjacent to the address bar for better visibility. Certificate and privacy details can easily be displayed with a single click on the Security Status Bar.
• Advanced Printing: Automatically scales a printed webpage so that it's not wider than the paper it will be printed on. Internet Explorer 7 also includes a multi-page print preview with live margins, resizing of text to avoid document clipping, and an option to print only selected text.
• Application Compatibility Toolkit: An application compatibility kit will be available for Internet Explorer 7, allowing IT pros and developers to understand any incompatibilities with their existing websites, applications, and deployments.
• Phishing Filter: Proactively warns and helps protect you against potential or known fraudulent sites, and blocks the site if appropriate. The opt-in filter is updated several times per hour using the latest security information from Microsoft and several industry partners about fraudulent websites.
• Toolbar Search Box: Web searches using your favorite search provider can now be entered into a search box within the toolbar, eliminating the clutter of separate toolbars. You can easily choose a provider from the drop-down list or add more providers.
• Alpha Channel in PNG: Supports transparency within the PNG image format, resulting in better-looking websites that are simpler to build.
• Cross-Domain Barriers: Limits script on webpages from interacting with content from other domains or windows. This enhanced safeguard further protects against malware by limiting the potential for malicious websites to manipulate flaws in other websites or cause you to download undesired content or software.
• Favorites Center: Offers easy and fast access to Favorites, Tab Groups, Browsing History, and RSS Feed subscriptions. Expands out when needed, and can be pinned in place for even easier access.
• Group Policy Improvements: Provides support for all aspects of Internet Explorer settings through Group Policy, greatly easing management across an enterprise.
• Delete Browsing History: Allows you to clean up cached pages, passwords, form data, cookies, and history, all from a single window.
• RSS Feeds: Automatically detects RSS feeds on sites by illuminating an icon on the toolbar. A single click on the icon allows you to preview and optionally subscribe to the site's RSS feed, and then be automatically notified as content is updated. Read RSS feeds directly in the browser, scan for important stories, and filter your view with search terms or site-specific categories.
• Internet Explorer Administration Kit: OEMs and deployment specialists can pre-package Internet Explorer with customized settings or additional programs for their users.
• Address Bar Protection: Every window, regardless of whether it's a pop-up or a standard window, presents an address bar to the user, helping to block malicious sites from emulating trusted sites.
• Tabbed Browsing: View multiple sites in a single browser window. Easily switch from one site to another through tabs at the top of the browser frame.
• Improved AJAX Support: Improves the implementation of XMLHttpRequest as a native JavaScript object for rich AJAX-style applications. While Internet Explorer 6 handled XMLHTTP requests with an ActiveX control, Internet Explorer 7 exposes XMLHttpRequest natively. This improves syntactic compatibility across different browsers and allows clients to configure and customize a security policy of their choice without compromising key AJAX scenarios.
• International Domain Name Anti-spoofing: In addition to adding support for International Domain Names in URLs, Internet Explorer also notifies you when visually similar characters in the URL are not expressed in the same language, thus protecting you against sites that could otherwise appear to be known, trustworthy sites.
• Quick Tabs: Provides easy tab selection and navigation by displaying thumbnails of all open tabs in a single window.
• Open Search Extensions: In conjunction with Amazon.com, a set of RSS Simple List Extensions was submitted to the RSS community and released under the Creative Commons license. Among other features, these extensions greatly simplify development of applications that interact with OpenSearch-compatible search providers.
• URL Handling Security: Redesigned URL parsing ensures consistent processing and minimizes possible exploits. The new URL handler helps centralize critical data parsing and increases data consistency throughout the application.
• Tab Groups: Tabs can be grouped and saved into logical categories, allowing you to open multiple tabs with a single click. A Tab Group can easily be set as the Home Page Group so the entire Tab Group opens every time Internet Explorer is launched from the Start menu.
• RSS Platform: Provides rich functionality for downloading, storing, and accessing RSS feeds across the entire operating system, and enables more users than ever before to embrace RSS. Once a feed is subscribed to in one application, that subscription and all the associated content are made available across the operating system for any application that wishes to consume them.
• Fix My Settings: To keep you protected from browsing with unsafe settings, Internet Explorer 7 warns you with an Information Bar when current security settings may put you at risk. Within the Internet Control Panel, certain critical items are highlighted in red when they are unsafely configured. In addition to dialog alerts warning you about unsafe settings, the Information Bar reminds you as long as the settings remain unsafe. You can instantly reset Internet security settings to the 'Medium-High' default level by clicking the 'Fix My Settings' option in the Information Bar.
• Page Zoom: Enlarge or zoom in on individual webpages, including both text and graphics, to either focus on specific content or to make content more accessible to those with vision limitations.
• Add-ons Disabled Mode: To help troubleshoot difficulties launching Internet Explorer or reaching specific websites, you can start in "No Add-ons" mode, where only critical system add-ons are enabled.
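The Improved AJAX Support point (native XMLHttpRequest in IE7 versus the ActiveX control in IE6) is why pages of that era used a small feature test when creating a request object. A minimal sketch; passing the global object in as a parameter is just a device to keep the sketch testable outside a browser (in a real page you would call createXHR(window)):

```javascript
// Feature-test sketch: prefer the native XMLHttpRequest (IE7 and later,
// plus other browsers), falling back to the ActiveX control IE6 required.
// `env` stands in for the browser's global object (window).
function createXHR(env) {
  if (typeof env.XMLHttpRequest !== "undefined") {
    return new env.XMLHttpRequest();                  // native object
  }
  return new env.ActiveXObject("Microsoft.XMLHTTP");  // classic IE6 fallback
}
```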

Internet Explorer 8 Beta 2 running on Windows Vista
Internet Explorer 8 is the latest version of Internet Explorer and has been in development since August 2007 at the latest. On March 5, 2008, the first public beta (Beta 1) was released to the general public. On August 27, 2008, the second public beta (Beta 2) was released. It supports Windows XP SP2 and SP3, Windows Server 2003 SP2, Windows Vista, and Windows Server 2008, on both 32-bit and 64-bit architectures. Internet Explorer 8 (IE8) RC1 was released on January 26, 2009. Security, ease of use, and improvements in RSS, CSS, and Ajax support are Microsoft's priorities for IE8. It includes much stricter compliance with web standards, including planned full Cascading Style Sheets 2.1 compliance for the release version. These changes allow Internet Explorer 8 to pass the Acid2 test. However, to prevent compatibility issues, IE8 also includes the IE7 rendering behavior: sites that expect IE7 quirks can disable IE8's breaking changes by including a meta element in the head section of any webpage. IE8 also includes numerous improvements to JavaScript support as well as performance improvements. It includes support for Accelerators, which allow supported web applications to be invoked without explicitly navigating to them, and Web Slices, which allow portions of a page to be subscribed to and monitored from a redesigned Favorites Bar. Other features include InPrivate privacy features and the SmartScreen phishing filter.
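The compatibility opt-out mentioned above takes the form of Microsoft's documented X-UA-Compatible meta element in the page's head. For example, to ask IE8 to render a page with the IE7 engine:

```html
<head>
  <!-- Tell IE8 to render this page with the IE7 rendering behavior -->
  <meta http-equiv="X-UA-Compatible" content="IE=EmulateIE7" />
</head>
```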

INTERNET EXPLORER 9
In this topic:
• Use the new browser controls

• Pin sites to the taskbar
• Search in the Address bar
• Use Download Manager
• Work with tabs
• Protect your information while you browse
• Information that doesn't slow you down

Getting started with Internet Explorer 9
Windows Internet Explorer 9 has a streamlined look and many new features that speed up your web browsing experience.

Favorites Center

Tabs automatically appear to the right of the Address bar, but you can move them so they appear below the Address bar, as they did in previous versions of Internet Explorer. You can always show the Favorites, Command, Status, and Menu bars by right-clicking the Tools button and then selecting them from the menu.


Pin sites to the taskbar
You can access websites that you visit regularly by pinning them to the taskbar on your Windows 7 desktop.

Pinned site

Pinning a site is simple: just drag its tab to the taskbar; the website's icon will stay there until you remove it. When you click the icon later, the website will open in Internet Explorer. Whenever you open a pinned site, the website icon appears at the top of the browser, so you have easy access to the original webpage that you pinned. The Back and Forward buttons change color to match the color of the icon.

Search in the Address bar
You can now search directly from the Address bar. If you enter a website's address, you'll go directly to the website. If you enter a search term or an incomplete address, you'll launch a search using the currently selected search engine. Click the Address bar to select your search engine from the listed icons or to add new ones.

Search in the Address bar

When you search from the Address bar, you'll have the option of opening a search results page or the top search result (if your chosen search provider supports the feature). You can also turn on optional search suggestions in the Address bar.


Use Download Manager
Download Manager keeps a list of the files you download and notifies you when a file might be malware (malicious software). It also lets you pause and restart a download, and shows you where to find downloaded files on your computer.

Download Manager

Work with tabs
You can open a new tab by clicking the New Tab button to the right of the most recently opened tab. Use tabbed browsing to open many web pages in a single window. To look at two tabbed pages at the same time, click a tab, and then drag it out of the Internet Explorer window to open the tab's webpage in a new window.

New Tab page

Protect your information while you browse
To protect the security and privacy of your information while you browse, Internet Explorer 9 introduces Tracking Protection and ActiveX Filtering:
• Use Tracking Protection to limit the browser's communication with certain websites (determined by a Tracking Protection List) to help keep your information private. Anyone can create a Tracking Protection List, and lists will be available online.


• ActiveX is a technology web developers use to create interactive content on their sites, but it can also pose a security risk. You can use Internet Explorer 9 to block ActiveX controls for all sites, and then turn them back on only for the sites that you trust.

Information that doesn't slow you down
The new Notification bar that appears at the bottom of Internet Explorer gives you important status information when you need it, but it won't force you to click a series of messages in order to continue browsing.

Notification bar


INTERNET EXPLORER ADMINISTRATION KIT
The Internet Explorer Administration Kit (IEAK) is an add-on to Internet Explorer, released by Microsoft, that lets an organization customize IE for its needs. Knowledge of the IEAK is tested on the MCSE exam. Versions released include:
• Internet Explorer Administration Kit 6, and 6 SP1, for Internet Explorer 6
• Internet Explorer Administration Kit 5 and 5.5, for Internet Explorer 5
• Internet Explorer Administration Kit 4, for Internet Explorer 4
• IEAK for Internet Explorer 3

IEAK 7
IEAK for Internet Explorer 7 can be used by organizations to customize the settings for the browser, integrate add-ons, change the branding of the browser to use customized logos, and centrally manage the distribution of the software. The IEAK consists of the following components:

• Internet Explorer Customization Wizard, which lets an organization customize the configuration of the browser and create redistributable packages with the customizations applied.
• IEAK Profile Manager, which lets you create multiple sets of IE settings and customizations; any of the sets can then be quickly selected for building the redistributable.
• IEAK Toolkit, which provides tools, sample scripts, and resources such as bitmaps.

Windows Internet Explorer 7 (IE7) is a web browser released by Microsoft in October 2006. Internet Explorer 7 is part of a long line of versions of Internet Explorer and was the first major update to the browser in over five years. It ships as the default browser in Windows Vista and Windows Server 2008 and is offered as a replacement for Internet Explorer 6 on Windows XP and Windows Server 2003. Estimates of IE7's global market share place it between approximately 26% and 47%. Large portions of the underlying architecture, including the rendering engine and security framework, have been significantly reworked. New features include tabbed browsing, page zooming, an integrated search box, a feed reader, better internationalization, and improved support for web standards. Security enhancements include a phishing filter, stronger encryption on Windows Vista, and a "Delete browsing history" button to easily clear private data.


Compare Windows 7 editions (Starter, Home Premium, Professional, Ultimate) with Windows XP and Windows Vista. Each feature below is new or improved in Windows 7.

Simplifies everyday tasks
• Windows Taskbar: Multitask more easily.
• Windows Live Essentials: Chat and share with free photo, e-mail, and IM programs.
• Internet Explorer 8: Browse the web easily and more safely.
• Windows Search: Find files and programs instantly.
• Jump Lists: Open programs and files you use most in a click or two.
• View Available Networks: Connect to any available wireless network in just three clicks.
• Peek, Shake, Snap: Navigate lots of open windows more quickly.
• HomeGroup: Easily share files, photos, and music on your home network.
• HomeGroup: Print to a single printer from any PC in the house.
• Device Stage: Manage printers, cameras, and other devices better.
• Windows Libraries: Organize lots of files, documents, and photos effortlessly.

Works the way you want
• Desktop: Personalize your desktop with themes and photos.
• Domain Join: Connect to company networks more securely.
• 64-bit support: Fully compatible with 64-bit PCs.
• Windows XP Mode: Run Windows XP business programs.
• Windows Defender: Built-in defense against spyware and other malware.
• BitLocker: Help keep your data private and secure.
• Parental Controls: Manage and monitor your children's PC use.
• Performance improvements: Designed for faster sleep and resume.
• Power management: Improved power management for longer battery life.

Makes new things possible
• Windows Media Center: Watch and record TV on your PC.
• Windows Live Movie Maker: Create and share movies and slide shows in minutes.
• DirectX 11: Get the most realistic game graphics and vivid multimedia.
• Play To: Stream music, photos, and videos around your house.
• Remote Media Streaming: Connect to your home PC media library while you're away.
• Windows Touch: Touch and tap instead of point and click.


WEB DEVELOPMENT
Web development is a broad term for any activity related to developing a web site for the World Wide Web or an intranet. This can include e-commerce business development, web design, web content development, client-side/server-side scripting, and web server configuration. However, among web professionals, "web development" usually refers only to the non-design aspects of building web sites, e.g. writing markup and coding. Web development can range from developing the simplest static single page of plain text to the most complex web-based internet applications, electronic businesses, or social network services.

For larger businesses and organizations, web development teams can consist of hundreds of people (web developers). Smaller organizations may only require a single permanent or contracting webmaster, or secondary assignment to related job positions such as a graphic designer and/or information systems technician. Web development may be a collaborative effort between departments rather than the domain of a designated department.

Web development as an industry
Since the mid-1990s, web development has been one of the fastest growing industries in the world. In 1995 there were fewer than 1,000 web development companies in the United States alone, but by 2005 there were over 30,000 such companies, and the industry was expected to grow over 20% by 2010. The growth of this industry is being pushed by large businesses wishing to sell products and services to their customers and to automate business workflow. In addition, the cost of website development and hosting has dropped dramatically during this time. Instead of costing tens of thousands of dollars, as was the case for early websites, one can now develop a simple web site for less than a thousand dollars, depending on the complexity and amount of content.

Smaller website development companies are now able to make web design accessible to both smaller companies and individuals, further fueling the growth of the web development industry. As far as web development tools and platforms are concerned, many systems are available to the public free of charge to aid in development. A popular example is LAMP (Linux, Apache, MySQL, PHP), which is usually distributed free of charge. This has led many people around the globe to set up new websites daily, contributing to the growing popularity of web development. Another contributing factor has been the rise of easy-to-use WYSIWYG web development software, most prominently Adobe Dreamweaver and Microsoft Expression Studio (formerly Microsoft FrontPage). Using such software, virtually anyone can develop a web page in a matter of minutes. Knowledge of HyperText Markup Language (HTML) or other programming languages is not required, but it is recommended for professional results.

The next generation of web development tools uses the strong growth in LAMP and Microsoft .NET technologies to provide the Web as a way to run applications online. Web developers now help to deliver applications as web services which were traditionally only available as applications on a desktop computer. Instead of running executable code on a local computer, users interact with online applications to create new content. This has created new methods of communication and allowed for many opportunities to decentralize information and media distribution. Users

are now able to interact with applications from many locations, instead of being tied to a specific workstation for their application environment.

Examples of dramatic transformation in communication and commerce led by web development include e-commerce. Online auction sites such as eBay have changed the way consumers find and purchase goods and services. Online resellers such as Amazon.com and Buy.com (among many others) have transformed the shopping and bargain-hunting experience for many consumers. Another good example of transformative communication led by web development is the blog. Web applications such as Movable Type and WordPress have created easily implemented blog environments for individual websites. Open source content systems such as Typo3, Xoops, Joomla!, and Drupal have extended web development into new modes of interaction and communication.

Typical areas
Web development can be split into many areas, and a typical, basic web development hierarchy might consist of:

Client Side Coding
• AJAX: Provides new methods of using JavaScript, PHP, and other languages to improve the user experience.
• Flash: Adobe Flash Player is a ubiquitous client-side platform ready for RIAs. Flex 2 is also deployed to the Flash Player (version 9+).
• JavaScript: Standardized as ECMAScript, JavaScript is a ubiquitous client-side programming tool.
• Microsoft Silverlight: Microsoft's browser plugin that enables animation, vector graphics, and high-definition video playback, programmed using XAML and .NET programming languages.

Client-side refers to operations that are performed by the client in a client–server relationship in a computer network. Typically, a client is a computer application, such as a web browser, that runs on a user's local computer or workstation and connects to a server as necessary. Operations may be performed client-side because they require access to information or functionality that is available on the client but not on the server, because the user needs to observe them or provide input, or because the server lacks the processing power to perform the operations in a timely manner for all of the clients it serves. Additionally, if operations can be performed by the client without sending data over the network, they may take less time, use less bandwidth, and incur a lesser security risk.

When the server serves data in a commonly used manner, for example according to the HTTP or FTP protocols, users may have their choice of a number of client programs (most modern web browsers can request and receive data using both of those protocols). In the case of more specialized applications, programmers may write their own server, client, and communications protocol that can only be used with one another. Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be considered client-side operations.

Client-side scripting generally refers to the class of computer programs on the web that are executed client-side, by the user's web browser, instead of server-side (on the web server). This type of computer programming is an important part of the Dynamic HTML (DHTML) concept, enabling web pages to be scripted; that is, to have different and changing content depending on user input, environmental conditions (such as the time of day), or other variables. Web authors write client-side scripts in languages such as JavaScript (client-side JavaScript) and VBScript.

Method
Client-side scripts are often embedded within an HTML document (hence known as an "embedded script"), but they may also be contained in a separate file which is referenced by the document (or documents) that use it (hence known as an "external script"). Upon request, the necessary files are sent to the user's computer by the web server (or servers) on which they reside. The user's web browser executes the script, then displays the document, including any visible output from the script. Client-side scripts may also contain instructions for the browser to follow in response to certain user actions (e.g., clicking a button). Often, these instructions can be followed without further communication with the server. By viewing the file that contains the script, users may be able to see its source code. Many web authors learn how to write client-side scripts partly by examining the source code for other authors' scripts.

In contrast, server-side scripts, written in languages such as Perl, PHP, and server-side VBScript, are executed by the web server when the user requests a document. They produce output in a format understandable by web browsers (usually HTML), which is then sent to the user's computer. The user cannot see the script's source code (unless the author publishes the code separately), and may not even be aware that a script was executed.
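As a concrete illustration of the embedded-script idea above, here is a small client-side script of the DHTML variety: the displayed content depends on an environmental condition (the time of day). The element id and function name are illustrative, not taken from the text.

```javascript
// Client-side script: runs in the visitor's browser, so the greeting
// depends on the visitor's local clock, not on anything the server did.
function greetingFor(hour) {
  if (hour < 12) return "Good morning";
  if (hour < 18) return "Good afternoon";
  return "Good evening";
}

// In an HTML page this would sit inside a <script> element, e.g.:
//   document.getElementById("greeting").textContent =
//       greetingFor(new Date().getHours());
```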
Documents produced by server-side scripts may, in turn, contain client-side scripts.

Client-side scripts have greater access to the information and functions available on the user's browser, whereas server-side scripts have greater access to the information and functions available on the server. Server-side scripts require that their language's interpreter be installed on the server, and produce the same output regardless of the client's browser, operating system, or other system details. Client-side scripts do not require additional software on the server (making them popular with authors who lack administrative access to their servers); however, they do require that the user's web browser understands the scripting language in which they are written. It is therefore impractical for an author to write scripts in a language that is not supported by popular web browsers.

Due to security restrictions, client-side scripts may not be allowed to access the user's computer beyond the web browser application. Techniques like ActiveX controls can be used to sidestep this restriction. Unfortunately, even languages that are supported by a wide variety of browsers may not be implemented in precisely the same way across all browsers and operating systems.

Authors are well-advised to review the behavior of their client-side scripts on a variety of platforms before they put them into use.

Server Side Coding
• ASP (Microsoft proprietary)
• ColdFusion (Adobe proprietary, formerly Macromedia)
• CGI and/or Perl (open source)
• Java, e.g. J2EE or WebObjects
• Lotus Domino
• PHP (open source)
• Python, e.g. Django (web framework) (open source)
• Ruby, e.g. Ruby on Rails (open source)
• Smalltalk, e.g. Seaside, AIDA/Web
• SSJS (Server-Side JavaScript), e.g. Aptana Jaxer, Mozilla Rhino
• WebSphere (IBM proprietary)
• .NET (Microsoft proprietary)

However, lesser-known languages like Ruby and Python are often paired with database servers other than MySQL (the M in LAMP); for instance, some developers prefer a LAPR (Linux/Apache/PostgreSQL/Ruby on Rails) setup for development. Examples of other databases currently in wide use on the web are listed below.

Server-side refers to operations that are performed by the server in a client–server relationship in computer networking. Typically, a server is a software program, such as a web server, that runs on a remote machine, reachable from a user's local computer or workstation. Operations may be performed server-side because they require access to information or functionality that is not available on the client, or because typical behaviour is unreliable when it is done client-side. Server-side operations also include the processing and storage of data sent from a client to the server, where it can be viewed by a group of clients; this lightens the work of the client. An example of server-side processing is the creation and adaptation of a database using MySQL.

Server-side scripting is a web server technology in which a user's request is fulfilled by running a script directly on the web server to generate dynamic web pages. It is usually used to provide interactive web sites that interface to databases or other data stores. This is different from client-side scripting, where scripts are run by the viewing web browser, usually in JavaScript. The primary advantage of server-side scripting is the ability to highly customize the response based on the user's requirements, access rights, or queries into data stores. When the server serves data in a commonly used manner, for example according to the HTTP or FTP protocols, users may have their choice of a number of client programs (most modern web browsers can request and receive data using both of those protocols).
In the case of more specialized applications, programmers may write their own server, client, and communications protocol that can only be used with one another.

Programs that run on a user's local computer without ever sending or receiving data over a network are not considered clients, and so the operations of such programs would not be considered client-side operations.

Database Technology
• Apache Derby
• DB2 (IBM proprietary)
• Firebird
• Microsoft SQL Server
• MySQL
• Oracle
• PostgreSQL
• SQLite

In practice, many web developers will also have interdisciplinary skills / roles, including:
• Graphic design / web design
• Information architecture and copywriting/copyediting, with web usability, accessibility, and search engine optimization in mind
• Project management, QA, and other aspects common to IT development in general

The above list is a simple website development hierarchy and can be extended to include all client side and server side aspects. It is still important to remember that web development is generally split up into client side coding covering aspects such as the layout and design, then server side coding, which covers the website's functionality and back end systems.


WEB HOSTING
Web hosting is the business of supplying server space for the storage of websites on the Internet, sometimes with ancillary services such as website creation. A web hosting service is a type of Internet hosting service that allows individuals and organizations to make their own website accessible via the World Wide Web. Web hosts are companies that provide space on a server they own or lease for use by their clients, as well as providing Internet connectivity, typically in a data center. Web hosts can also provide data center space and Internet connectivity for servers they do not own that are located in their data center; this is called colocation (or "housing", as it is commonly known in Latin America and France).

The scope of web hosting services varies greatly. The most basic is web page and small-scale file hosting, where files can be uploaded via File Transfer Protocol (FTP) or a web interface. The files are usually delivered to the Web "as is" or with little processing. Many Internet service providers (ISPs) offer this service free to their subscribers. People can also obtain web page hosting from other, alternative service providers. Personal website hosting is typically free, advertisement-sponsored, or inexpensive. Business website hosting often has a higher expense.

Single page hosting is generally sufficient only for personal web pages. A complex site calls for a more comprehensive package that provides database support and application development platforms (e.g. PHP, Java, Ruby on Rails, ColdFusion, and ASP.NET). These facilities allow customers to write or install scripts for applications like forums and content management. For e-commerce, SSL is also highly recommended. The host may also provide an interface or control panel for managing the web server and installing scripts, as well as other modules and service applications like e-mail. Some hosts specialize in certain software or services (e.g. e-commerce).
Such services are commonly used by larger companies to outsource network infrastructure to a hosting company.

Hosting reliability and uptime

Figure: Multiple racks of servers.

Hosting uptime refers to the percentage of time the host is accessible via the Internet. Many providers state that they aim for at least 99.9% uptime (roughly equivalent to 45 minutes of downtime a month, or less), but there may be server restarts and planned (or unplanned) maintenance in any hosting environment, and this may or may not be counted against the official uptime promise. Many providers tie uptime and accessibility into their own service level agreement (SLA). SLAs sometimes include refunds or reduced costs if performance goals are not met.

92 Compiled by Mrs. Wamwati Catherine

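The uptime arithmetic above can be checked with a short calculation; this is an illustrative sketch, not part of the original notes:

```python
# Convert an uptime percentage into allowed downtime per 30-day month,
# to sanity-check the "99.9% is roughly 45 minutes a month" figure above.
def downtime_minutes_per_month(uptime_percent, days=30):
    total_minutes = days * 24 * 60          # minutes in the month
    return total_minutes * (1 - uptime_percent / 100)

print(round(downtime_minutes_per_month(99.9), 1))   # about 43.2 minutes
```

A 30-day month has 43,200 minutes, so "three nines" of uptime leaves a little over 43 minutes of allowed downtime, consistent with the rough figure quoted above.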

Types of hosting

Figure: A typical server "rack," commonly seen in colocation centers.

Internet hosting services can run web servers. Many large companies that are not Internet service providers also need a computer permanently connected to the Web so they can send e-mail, files, etc. to other sites. They may also use the computer as a website host to provide details of their goods and services to anyone interested, and to let visitors place online orders.

• Free web hosting service: offered by different companies with limited services, sometimes supported by advertisements, and often limited when compared to paid hosting.

• Shared web hosting service: one's website is placed on the same server as many other sites, ranging from a few to hundreds or thousands. Typically, all domains share a common pool of server resources, such as RAM and CPU. The features available with this type of service can be quite extensive. A shared website may also be hosted with a reseller.

• Reseller web hosting: allows clients to become web hosts themselves. Resellers could function, for individual domains, under any combination of these listed types of hosting, depending on who they are affiliated with as a reseller. Resellers' accounts may vary tremendously in size, from their own virtual dedicated server to a colocated server. Many resellers provide a nearly identical service to their provider's shared hosting plan and provide the technical support themselves.

• Virtual Dedicated Server: also known as a Virtual Private Server (VPS), divides server resources into virtual servers, where resources can be allocated in a way that does not directly reflect the underlying hardware. A VPS is often allocated resources on a one-server-to-many-VPSs basis; however, virtualization may be done for a number of reasons, including the ability to move a VPS container between servers. Users may have root access to their own virtual space, and customers are sometimes responsible for patching and maintaining the server.

• Dedicated hosting service: the user gets his or her own web server and gains full control over it (root access on Linux, administrator access on Windows); however, the user typically does not own the server. Another type of dedicated hosting is self-managed or unmanaged, which is usually the least expensive of the dedicated plans. The user has full administrative access to the server, which means the client is responsible for the security and maintenance of his own dedicated server.

• Managed hosting service: the user gets his or her own web server but is not allowed full control over it (root access on Linux or administrator access on Windows is denied); however, the user is allowed to manage data via FTP or other remote management tools. The user is denied full control so that the provider can guarantee quality of service by preventing the user from modifying the server or creating configuration problems. The user typically does not own the server; the server is leased to the client.

• Colocation web hosting service: similar to the dedicated web hosting service, but the user owns the colo server; the hosting company provides the physical space that the server takes up and takes care of the server. This is the most powerful and expensive type of web hosting service. In most cases, the colocation provider may provide little to no support directly for the client's machine, providing only the electrical, Internet access, and storage facilities for the server. In most cases for colo, the client has his own administrator visit the data center on site to do any hardware upgrades or changes.

• Cloud hosting: a newer type of hosting platform that offers customers powerful, scalable and reliable hosting based on clustered, load-balanced servers and utility billing. A cloud-hosted website may be more reliable than the alternatives, since other computers in the cloud can compensate when a single piece of hardware goes down. Local power disruptions or even natural disasters are also less problematic for cloud-hosted sites, as cloud hosting is decentralized. Cloud hosting also allows providers (such as Amazon) to charge users only for the resources they consume, rather than a flat fee for the amount the user expects to use, or a fixed up-front hardware investment. Alternatively, the lack of centralization may give users less control over where their data is located, which could be a problem for users with data security or privacy concerns.

• Clustered hosting: having multiple servers hosting the same content for better resource utilization. Clustered servers are a perfect solution for high-availability dedicated hosting, or for creating a scalable web hosting solution. A cluster may separate web serving from database hosting capability. (Web hosts usually use clustered hosting for their shared hosting plans, as there are multiple benefits to mass-managing clients.)

• Grid hosting: this form of distributed hosting is when a server cluster acts like a grid and is composed of multiple nodes.

• Home server: usually a single machine placed in a private residence, used to host one or more web sites over a usually consumer-grade broadband connection. These can be purpose-built machines or, more commonly, old PCs. Some ISPs actively attempt to block home servers by disallowing incoming requests to TCP port 80 of the user's connection and by refusing to provide static IP addresses. A common way to attain a reliable DNS hostname is by creating an account with a dynamic DNS service, which automatically changes the IP address that a URL points to when the IP address changes.
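The dynamic DNS idea mentioned for home servers can be sketched as a small change-detection loop. The update callback below stands in for whatever update protocol a real dynamic DNS provider uses; all names and addresses are illustrative:

```python
# Illustrative sketch: send a dynamic-DNS update only when the public
# IP address actually changes. `update_fn` stands in for a provider call.
def make_updater(update_fn):
    last_ip = None
    def check(current_ip):
        nonlocal last_ip
        if current_ip != last_ip:
            last_ip = current_ip
            update_fn(current_ip)   # notify the DNS service of the new IP
            return True
        return False                # unchanged: no update needed
    return check

sent = []
check = make_updater(sent.append)
check("203.0.113.7")    # first sighting: update sent
check("203.0.113.7")    # unchanged: nothing sent
check("203.0.113.42")   # changed: update sent
print(sent)             # ['203.0.113.7', '203.0.113.42']
```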

HOSTING YOUR OWN WEB SITE

Hosting your web site on your own server is always an option. Here are some points to consider:


Hardware Expenses
To run a "real" web site, you will have to buy some powerful server hardware. Don't expect that a low-cost PC will do the job. You will also need a permanent (24 hours a day) high-speed connection.

Software Expenses
Remember that server licenses are often more expensive than client licenses. Also note that server licenses might limit the number of users.

Labor Expenses
Don't expect low labor expenses. You have to install your own hardware and software. You also have to deal with bugs and viruses, and keep your server constantly running in an environment where "everything could happen".

Using an Internet Service Provider
Renting a server from an Internet Service Provider (ISP) is a common option. Most small companies store their web site on a server provided by an ISP. Here are some advantages:

Connection Speed
Most ISPs have very fast connections to the Internet.

Powerful Hardware
ISPs often have powerful web servers that can be shared by several companies. You can also expect them to have effective load balancing and the necessary backup servers.

Security and Stability
ISPs are specialists in web hosting. Expect their servers to have more than 99% uptime, the latest software patches, and the best virus protection.

Things to Consider with an ISP

24-hour Support
Make sure your ISP offers 24-hour support. Don't put yourself in a situation where you cannot fix critical problems without having to wait until the next working day. A toll-free phone number could be vital if you don't want to pay for long-distance calls.

Daily Backup
Make sure your ISP runs a daily backup routine, otherwise you may lose valuable data.

Traffic Volume
Study the ISP's traffic volume restrictions. Make sure that you don't have to pay a fortune for unexpectedly high traffic if your web site becomes popular.

Bandwidth or Content Restrictions
Study the ISP's bandwidth and content restrictions. If you plan to publish pictures or broadcast video or sound, make sure that you can.

E-mail Capabilities
Make sure your ISP supports the e-mail capabilities you need.

FrontPage Extensions
If you use FrontPage to develop your web site, make sure your ISP supports FrontPage server extensions.

Database Access
If you plan to use data from databases on your web site, make sure your ISP supports the database access you need.

Domain Name
A domain name is a unique name for your web site. Choosing a hosting solution should include domain name registration. Your domain name should be easy to remember and easy to type.

What is the World Wide Web?
• The Web is a network of computers all over the world.
• All the computers in the Web can communicate with each other.
• All the computers use a communication protocol called HTTP.

How does the WWW work?
• Web information is stored in documents called web pages.
• Web pages are files stored on computers called web servers.
• Computers reading the web pages are called web clients.
• Web clients view the pages with a program called a web browser.
• Popular browsers are Internet Explorer and Firefox.

How does a Browser Fetch a Web Page?
• A browser fetches a page from a web server by a request.
• A request is a standard HTTP request containing a page address.
• An address may look like this: http://www.example.com/default.htm.

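The page address in such a request can be turned into the actual request line a browser sends; this short sketch (illustrative, not part of the original notes) builds one:

```python
from urllib.parse import urlparse

# Build the request line and Host header a browser would send for a
# page address such as http://www.example.com/default.htm.
def build_request(url):
    parts = urlparse(url)
    path = parts.path or "/"        # an empty path means the site root
    return f"GET {path} HTTP/1.0\r\nHost: {parts.netloc}\r\n\r\n"

print(build_request("http://www.example.com/default.htm"))
```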
How does a Browser Display a Web Page?
• All web pages contain instructions for display.
• The browser displays the page by reading these instructions.

• The most common display instructions are called HTML tags.
• HTML tags look like this: <p>This is a paragraph.</p>.

What is a Web Server?
• The collection of all your web pages is called your web site.
• To let others view your web pages, you must publish your web site.
• To publish your work, you must copy your site to a web server.
• Your own PC can act as a web server if it is connected to a network.
• Most common is to use an Internet Service Provider (ISP).

What is an Internet Service Provider?
• ISP stands for Internet Service Provider.
• An ISP provides Internet services.
• A common Internet service is web hosting.
• Web hosting means storing your web site on a public server.
• Web hosting normally includes email services.
• Web hosting often includes domain name registration.

THE COMMON GATEWAY INTERFACE (CGI)
• What Is CGI?
• CGI Applications
• Some Working CGI Applications
• Internal Workings of CGI
• Configuring the Server
• Programming in CGI
• CGI Considerations

What is CGI?

As you traverse the vast frontier of the World Wide Web, you will come across documents that make you wonder, "How did they do this?" These documents could consist of, among other things, forms that ask for feedback or registration information, image maps that allow you to click on various parts of the image, counters that display the number of users that accessed the document, and utilities that allow you to search databases for particular information. In most cases, you'll find that these effects were achieved using the Common Gateway Interface, commonly known as CGI.

One of the Internet's worst-kept secrets is that CGI is astoundingly simple. That is, it's trivial in design, and anyone with an iota of programming experience can write rudimentary scripts that work. It's only when your needs are more demanding that you have to master the more complex workings of the Web. In a way, CGI is easy the same way cooking is easy: anyone can toast a muffin or poach an egg. It's only when you want a hollandaise sauce that things start to get complicated.

CGI is the part of the Web server that can communicate with other programs running on the server. With CGI, the Web server can call up a program, while passing user-specific data to the program (such as what host the user is connecting from, or input the user has supplied using HTML form syntax). The program then processes that data, and the server passes the program's response back to the Web browser. CGI isn't magic; it's just programming with some special types of input and a few strict rules on program output. Everything in between is just programming. Of course, there are special techniques that are particular to CGI, and that's what this chapter is mostly about. But underlying it all is the simple model shown in the figure below.

Figure: Simple diagram of CGI

WHAT IS CGI?

CGI stands for Common Gateway Interface, a standard for external gateway programs to interface with information servers such as HTTP servers. CGI is not a program or a programming language. It is a collection of protocols (or rules) that allow Web clients to execute programs on a Web server and receive their output (if any). Usually, the way CGI works is that the Web client (user) enters input data (if needed; some CGI programs, such as the "Hello Somebody" example mentioned earlier, do not need any input), which is transferred to the server according to some protocol. The server receives the input, then passes it to the CGI program. The CGI program is then executed (for example, sending mail to somebody via a form-mail script, or returning a search result back to the user if it is a search program). The conceptual working of a form-based CGI query is illustrated in Figure 1.

Figure 1.

Now let's reference Figure 1 and take the "Hello, somebody" CGI program as an example to see the exact steps involved in running a form-based CGI program:

1. The browser requests an HTML document from the server. When you click "Run Hello somebody example", the request is sent to the server.

2. The server sends the document, which includes a form. This is what we receive:

   Interactive CGI Example
   This is an interactive CGI program, the well-known "hello, somebody!" program. Enter your name and click the submit button, and see what happens!
   Enter your Name: [____________] [Send] [Reset]

3. The reader enters the requested information into the form. Now you enter your name, for example "Zhanshou Yu".

4. When the reader clicks the submit button, the browser sends the data in the form field, as well as the name of a CGI program to run, to the server. When you click the submit button, the data will be sent in the format "name=Zhanshou+Yu". The server runs the CGI program, passing it the form data. After receiving the message, the server passes the input string "name=Zhanshou+Yu" to the CGI program hello.pl, which is specified in the HTML FORM tag. Here hello.pl is the CGI program.

5. The CGI program processes the input and sends the output back to the server, which in turn passes it back to the browser (e.g., Netscape or Internet Explorer). The browser displays the final result; this is what we see on the screen:

   Hello! Zhanshou Yu
   How nice to see you right here!

CGI APPLICATIONS

CGI turns the Web from a simple collection of static hypermedia documents into a whole new interactive medium, in which users can ask questions and run applications. Let's take a look at some of the possible applications that can be designed using CGI.

Forms

One of the most prominent uses of CGI is in processing forms. Forms are a subset of HTML that allow the user to supply information. The forms interface makes Web browsing an interactive process for the user and the provider. The figure below shows a simple form.

Figure: Simple form illustrating different widgets


As can be seen from the figure, a number of graphical widgets are available for form creation, such as radio buttons, text fields, checkboxes, and selection lists. When the form is completed by the user, the Submit Order! button is used to send the information to the server, which executes the program associated with the particular form to "decode" the data.

Generally, forms are used for two main purposes. At their simplest, forms can be used to collect information from the user. But they can also be used in a more complex manner to provide back-and-forth interaction. For example, the user can be presented with a form listing the various documents available on the server, as well as an option to search for particular information within these documents. A CGI program can process this information and return document(s) that match the user's selection criteria.
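As a sketch of the form "decoding" just described, a minimal handler might split a URL-encoded string such as name=Zhanshou+Yu into fields and build a response. This is written in Python for illustration; the notes' own example, hello.pl, is in Perl:

```python
from urllib.parse import parse_qs

# Decode URL-encoded form input and produce a CGI-style response:
# a Content-type header, a blank line, then the body.
def handle_form(form_data):
    fields = parse_qs(form_data)                 # '+' decodes to a space
    name = fields.get("name", ["stranger"])[0]
    return ("Content-type: text/html\r\n\r\n"
            f"<H1>Hello! {name}</H1>How nice to see you right here!")

print(handle_form("name=Zhanshou+Yu"))
```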

Gateways

Web gateways are programs or scripts used to access information that is not directly readable by the client. For example, say you have an Oracle database that contains baseball statistics for all the players on your company team, and you would like to provide this information on the Web. How would you do it? You certainly cannot point your client at the database file (i.e., open the URL associated with the file) and expect to see any meaningful data. CGI provides a solution to the problem in the form of a gateway. You can use a language such as oraperl or a DBI extension to Perl to form SQL queries to read the information contained within the database. Once you have the information, you can format it and send it to the client. In this case, the CGI program serves as a gateway to the Oracle database, as shown in the figure below.

Figure: A gateway to a database
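A gateway along these lines can be sketched with Python's built-in sqlite3 standing in for the Oracle database; the table name, columns, and query are invented for illustration:

```python
import sqlite3

# In-memory stand-in for the baseball statistics database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stats (player TEXT, hits INTEGER)")
conn.executemany("INSERT INTO stats VALUES (?, ?)",
                 [("Ana", 12), ("Ravi", 9)])

# The gateway: run an SQL query and format the rows for the client.
def gateway(conn, min_hits):
    rows = conn.execute(
        "SELECT player, hits FROM stats WHERE hits >= ? ORDER BY hits DESC",
        (min_hits,))
    items = "".join(f"<LI>{p}: {h}</LI>" for p, h in rows)
    return "Content-type: text/html\r\n\r\n<UL>" + items + "</UL>"

print(gateway(conn, 10))
```

The CGI program, not the browser, talks to the database; the client only ever sees the formatted HTML output.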


Similarly, you can write gateway programs to any other Internet information service, including Archie, WAIS, and NNTP (Usenet News). The chapter Gateways to Internet Information Servers shows examples of interacting with other Internet services. In addition, you can amplify the power of gateways by using the forms interface to request a query or search string from the user to retrieve and display dynamic, or virtual, information. We will discuss these special documents next.

Virtual Documents

Virtual, or dynamic, document creation is at the heart of CGI. Virtual documents are created on the fly in response to a user's information request. You can create virtual HTML, plain text, image, and even audio documents. A simple example of a virtual document could be something as trivial as this:

   Welcome to Shishir's WWW Server!
   You are visiting from diamond.com. The load average on this machine is 1.25.
   Happy navigating!

In this example, there are two pieces of dynamic information: the alphanumeric address (IP name) of the remote user and the load average on the serving machine. This is a very simple example, indeed! More sophisticated programs use a combination of graphics libraries, gateways, and forms.

As a more sophisticated example, say you are the manager of an art gallery that specializes in selling replicas of ancient Renaissance paintings, and you are interested in presenting images of these masterpieces on the Web. You start out by creating a form that asks for user information for the purpose of promotional mailings, presents a search field for the user to enter the name of a painting, and offers a selection list containing popular paintings. Once the user submits the form to the server, a program can e-mail the user information to a certain address, or store it in a file. Depending on the user's selection, either a message stating that the painting does not exist or an image of the painting can be displayed, along with some historical information located elsewhere on the Internet.

Along with the picture and history, another form with several image-processing options to modify the brightness, contrast, and/or size of the picture can be displayed. You can write another CGI program to modify the image properties on the fly using certain graphics libraries, such as gd, sending the resultant picture to the client. This is an example of a more complex CGI program using many aspects of CGI programming.


Some Working CGI Applications

What better way to learn about CGI than to see actual programs in action? Here are the locations of some of the more impressive CGI programs on the Web:

• Lycos World Wide Web Search: located at http://www.lycos.com, this server allows the user to search the Web for specific documents. Lycos returns a dynamic hypertext document containing the documents that match the user's search criteria.
• Coloring Book: an entertaining application that displays an image for users to color. It can be accessed at http://www.ravenna.com/coloring.
• ArchiePlex Gateway: a gateway to the Archie search server. Allows the user to search for a specific string and returns a virtual hypertext document. This useful gateway is located at http://pubweb.nexor.co.uk/public/archie/archieplex/archieplex.html.
• Guestbook with World Map: a guestbook is a forms-based application that allows users to leave messages for everyone to see. Though there are numerous guestbooks on the Web, this is one of the best. You can access it at http://www.cosy.sbg.ac.at/rec/guestbook.
• Japanese <-> English Dictionary: a sophisticated CGI program that queries the user for an English word, and returns a virtual document with graphic images of an equivalent Japanese word, or vice versa. It can be accessed at http://www.wg.omron.co.jp/cgi-bin/je?SASE=jfiedl.html or at http://enterprise.ic.gc.ca/cgi-bin/j-e.

Although most of these documents are curiosities, they illustrate the powerful aspects of CGI. The interface allows for the creation of highly effective virtual documents using forms and gateways.

Internal Workings of CGI

So how does the whole interface work? Most servers expect CGI programs and scripts to reside in a special directory, usually called cgi-bin, and/or to have a certain file extension. (These configuration parameters are discussed in the Configuring the Server section in this chapter.)
When a user opens a URL associated with a CGI program, the client sends a request to the server asking for the file. For the most part, the request for a CGI program looks the same as it does for all Web documents. The difference is that when a server recognizes that the address being requested is a CGI program, the server does not return the file contents verbatim. Instead, the server tries to execute the program. Here is what a sample client request might look like:

   GET /cgi-bin/welcome.pl HTTP/1.0
   Accept: www/source
   Accept: text/html
   Accept: image/gif
   User-Agent: Lynx/2.4 libwww/2.14
   From: [email protected]

This GET request identifies the file to retrieve as /cgi-bin/welcome.pl. Since the server is configured to recognize all files in the cgi-bin directory tree as CGI programs, it understands that it should execute the program instead of relaying it directly to the browser. The string HTTP/1.0 identifies the communication protocol to use. The client request also passes the data formats it can accept (www/source, text/html, and image/gif), identifies itself as a Lynx client, and sends user information. All this information is made available to the CGI program, along with additional information from the server.
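A server's first step with such a request can be sketched as splitting the request line from the headers and checking whether the path falls under cgi-bin; this is a toy parser for illustration, not a real server:

```python
# Toy request parser: split the request line from the headers and
# decide whether the path should be executed as a CGI program.
def parse_request(raw):
    lines = raw.split("\r\n")
    method, path, protocol = lines[0].split()
    headers = dict(line.split(": ", 1) for line in lines[1:] if line)
    is_cgi = path.startswith("/cgi-bin/")
    return method, path, protocol, is_cgi, headers

raw = ("GET /cgi-bin/welcome.pl HTTP/1.0\r\n"
       "Accept: text/html\r\n"
       "User-Agent: Lynx/2.4 libwww/2.14\r\n")
method, path, protocol, is_cgi, headers = parse_request(raw)
print(method, path, is_cgi)   # GET /cgi-bin/welcome.pl True
```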

The way that CGI programs get their input depends on the server and on the native operating system. On a UNIX system, CGI programs get their input from standard input (STDIN) and from UNIX environment variables. These variables store such information as the input search string (in the case of a form), the format of the input, the length of the input (in bytes), the remote host and user passing the input, and other client information. They also store the server name, the communication protocol, and the name of the software running the server.

Once the CGI program starts running, it can either create and output a new document, or provide the URL to an existing one. On UNIX, programs send their output to standard output (STDOUT) as a data stream. The data stream consists of two parts. The first part is either a full or partial HTTP header that (at minimum) describes what format the returned data is in (e.g., HTML, plain text, GIF, etc.). A blank line signifies the end of the header section. The second part is the body, which contains the data conforming to the format type reflected in the header. The body is not modified or interpreted by the server in any way.

A CGI program can choose to send the newly created data directly to the client or to send it indirectly through the server. If the output consists of a complete HTTP header, the data is sent directly to the client without server modification. (It's actually a little more complicated than this, as we will discuss in the chapter Output from the Common Gateway Interface.) Or, as is usually the case, the output is sent to the server as a data stream. The server is then responsible for adding the complete header information and using the HTTP protocol to transfer the data to the client.
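The two output styles just described can be sketched as small helpers; the function names are ours, for illustration only:

```python
# Sketch of the two kinds of CGI output: a partial header that the
# server completes, and a full (non-parsed) header sent straight to
# the client. Helper names are illustrative.
def partial_header(body):
    # Just the content type, a blank line, then the body; the server
    # adds the status line, Date:, Server:, and so on.
    return "Content-type: text/html\r\n\r\n" + body

def full_header(body):
    # The program supplies the complete header itself, including the
    # status line and Content-length, so the server passes it through.
    return ("HTTP/1.0 200 OK\r\n"
            "Content-type: text/html\r\n"
            f"Content-length: {len(body)}\r\n\r\n" + body)

body = "<H1>Welcome!</H1>"
print(partial_header(body))
print(full_header(body))
```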
Here is the sample output of a program generating an HTML virtual document, with the complete HTTP header:

   HTTP/1.0 200 OK
   Date: Thursday, 22-February-96 08:28:00 GMT
   Server: NCSA/1.4.2
   MIME-version: 1.0
   Content-type: text/html
   Content-length: 2000

   <HTML>
   <HEAD><TITLE>Welcome to Shishir's WWW Server!</TITLE></HEAD>
   <BODY>
   <H1>Welcome!</H1>
   </BODY>
   </HTML>

The header contains the communication protocol, the date and time of the response, the server name and version, and the revision of the MIME protocol.[1] Most importantly, it also contains the MIME content type and the number of characters (equivalent to the number of bytes) of the enclosed data, followed by the data itself.

[1] What is MIME and what does it stand for? MIME (Multipurpose Internet Mail Extensions) is a specification that was originally developed for sending multiple types of data through electronic mail. MIME types are used to identify types of data sent as content over the Web.

Now, the output with the partial HTTP header:

   Content-type: text/html

   <HTML>
   <HEAD><TITLE>Welcome to Shishir's WWW Server!</TITLE></HEAD>
   <BODY>
   <H1>Welcome!</H1>
   </BODY>
   </HTML>

In this instance, the only header line that is output is the Content-type header, which describes the MIME format of the output. Since the output is in HTML format, text/html is the content type that is declared. Most CGI programmers prefer to supply only a partial header. It is much simpler to output the format and the data than to formulate the complete header information, which can be left to the server. However, there are times when you need to send the information directly to the client (by outputting a complete HTTP header).

Configuring the Server

Before you can run CGI programs on your server, certain parameters in the server configuration files must be modified. If you are using either the NCSA or CERN HTTP server, you need to first set the ServerRoot directive in the httpd.conf file to point to the directory where the server software is located:

   ServerRoot /usr/local/etc/httpd

Running CGI Scripts

On the NCSA server, the ScriptAlias directive in the server resource map file (srm.conf) indicates the directory where the CGI scripts are placed:

   ScriptAlias /cgi-bin/ /usr/local/etc/httpd/cgi-bin/

For example, if a user accesses the URL:

   http://your_host.com/cgi-bin/welcome

the local program:

   /usr/local/etc/httpd/cgi-bin/welcome

will be executed by the server. You can have multiple directories to hold CGI scripts:

   ScriptAlias /cgi-bin/ /usr/local/etc/httpd/cgi-bin/
   ScriptAlias /my-cgi-bin/ /usr/local/etc/httpd/my-cgi-bin/

You might wonder why all CGI programs must be placed in distinct directories. The most important reason for this is system security. By having all the programs in one place, a server administrator can control and monitor all the programs being run on the system. However, there are directives that allow programs to be run outside of these directories, based on the file extension.
The following directives, when placed in the srm.conf configuration file, allow the server to execute files with .pl, .sh, or .cgi extensions:

   AddType application/x-httpd-cgi .pl .sh .cgi

However, this could be very dangerous! By globally enabling all files ending in certain extensions, there is a risk that novice programmers might write programs that violate system security (e.g., printing the contents of important system files to standard output).

On the CERN server, setting up the CGI directory is done in the httpd.conf file, using the following syntax:

   Exec /cgi-bin/* /usr/local/etc/httpd/cgi-bin

Programming in CGI

You might wonder, "Now that I know how CGI works, what programming language can I use?" The answer to that question is very simple: you can use whatever language you want, although certain languages are more suited for CGI programming than others. Before choosing a language, you must consider the following features:

• Ease of text manipulation
• Ability to interface with other software libraries and utilities
• Ability to access environment variables (in UNIX)

Let’s look at each of these features in more detail. Most CGI applications involve manipulating text some way or another, so inherent pattern matching is very important. For example, form information is usually “decoded” by splitting the string on certain delimiters. The ability of a language to interface with other software, such as databases, is also very important. This greatly enhances the power of the Web by allowing you to write gateways to other information sources, such as database engines or graphic manipulation libraries. Finally, the last attribute that must be taken into account is the ease with which the language can access environmental variables. These variables constitute the input to the CGI program, and thus are very important. Some of the more popular languages for CGI programming include AppleScript. Here is a quick review of the advantages and, in some cases, disadvantages of each one. AppleScript (Macintosh Only) Since the advent of System 7.5, AppleScript is an integral part of the Macintosh operating system (OS). Though AppleScript lacks inherent pattern-matching operators, certain extensions have been written to make it easy to handle various types of data. AppleScript also has the power to interface with other Macintosh applications through Apple Events. For example, a Mac CGI programmer can write a program that presents a form to the user, decode the contents of the form, and query and search a Microsoft FoxPro database directly through AppleScript. C/C++ (UNIX, Windows, Macintosh) C and C++ are very popular with programmers, and some use them to do CGI programming. These languages are not recommended for the novice programmer; C and C++ impose strict rules for variable and memory declarations, and type checking. In addition, these languages lack database extensions and inherent pattern-matching abilities, although modules and functions can be written to achieve these functions. 
However, C and C++ have a major advantage in that you can compile your CGI application to create a binary executable, which takes up fewer system resources than using interpreters (like Perl or Tcl) to run CGI scripts.
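The form-decoding step mentioned above, splitting the submitted string on its delimiters, can be sketched in Python. This is an illustrative sketch, not part of the original notes; form data is assumed to arrive in the standard URL-encoded format, with fields separated by "&" and names separated from values by "=":

```python
from urllib.parse import unquote_plus

def decode_form(query_string):
    """Decode a URL-encoded form string into a dict by
    splitting on the '&' and '=' delimiters."""
    fields = {}
    for pair in query_string.split("&"):
        if "=" in pair:
            name, value = pair.split("=", 1)
            # '+' stands for a space; %XX is an encoded byte
            fields[unquote_plus(name)] = unquote_plus(value)
    return fields

print(decode_form("name=John+Doe&city=Nairobi&msg=Hello%21"))
```

The field names here are made up for the example; a real CGI script would read the string from the QUERY_STRING environment variable or from standard input.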

C Shell (UNIX Only)
C Shell lacks pattern-matching operators, so other UNIX utilities, such as sed or awk, must be used whenever you want to manipulate string information. However, there is a software tool called uncgi, written in C, that decodes form data and stores the information in shell environment variables, which can be accessed rather easily. Obviously, communicating with a database directly is impossible, unless it is done through a foreign application. Finally, the C Shell has some serious bugs and limitations that make using it a dangerous proposition for the beginner.

Perl (UNIX, Windows, Macintosh)
Perl is by far the most widely used language for CGI programming. It contains many powerful features, and is very easy for the novice programmer to learn. The advantages of Perl include:
- It is highly portable and readily available.
- It contains extremely powerful string manipulation operators, as well as functions to deal with binary data.
- It contains very simple and concise constructs.
106 Compiled by Mrs. Wamwati Catherine
- It makes calling shell commands very easy, and provides some useful equivalents of certain UNIX system functions.
- There are numerous extensions built on top of Perl for specialized functions; for example, there is oraperl (or the DBI extensions), which contains functions for interfacing with the Oracle database.

Because of these overwhelming advantages, Perl is the language used for most of the examples throughout this chapter. To whet your appetite slightly, here is an example of a CGI Perl program that creates the simple virtual document presented in the Virtual Documents section that appeared earlier in this chapter:

    #!/usr/local/bin/perl
    print "Content-type: text/plain", "\n\n";
    print "Welcome to Shishir's WWW Server!", "\n";
    $remote_host = $ENV{'REMOTE_HOST'};
    print "You are visiting from ", $remote_host, ". ";
    $uptime = `/usr/ucb/uptime`;
    ($load_average) = ($uptime =~ /average: ([^,]*)/);
    print "The load average on this machine is: ", $load_average, ".", "\n";
    print "Happy navigating!", "\n";
    exit (0);

The first line of the program is very important. It tells the server to run the Perl interpreter located in /usr/local/bin to execute the program. Simple print statements are used to display information to the standard output. This CGI program outputs a partial HTTP header (the one Content-type header). Since this script generates plain text and not HTML, the content type is text/plain. Two newlines (\n) are output after the header, because HTTP requires a blank line between the header and the body. Depending on the platform, you may need to output two carriage return and newline combinations (\r\n\r\n) instead.

The first print statement after the header is a greeting. The second print statement displays the remote host of the user accessing the server. This information is retrieved from the environment variable REMOTE_HOST. As you peruse the next bit of code, you will see what looks like a mess!
However, it is a combination of very powerful search operators called a regular expression (commonly known as a regexp). In this case, the expression is used to search the output of the UNIX command uptime for a numeric value located between the string “average:” and the next comma. Finally, the last statement displays a good luck message.

Tcl (UNIX Only)
Tcl is gaining popularity as a CGI programming language. Tcl consists of a shell, tclsh, which can be used to execute your scripts. Like Perl, tclsh contains simple constructs, but it is a bit more difficult for the novice programmer to learn and use. Like Perl, Tcl contains extensions to databases and graphic libraries. It also supports regular expressions, but it is quite inefficient in handling these expressions at compile time, especially when compared to Perl.

Visual Basic (Windows Only)
Visual Basic is to Windows what AppleScript is to the Macintosh OS as far as CGI programming is concerned. With Visual Basic, you can communicate with other Windows applications such as databases and spreadsheets. This makes Visual Basic a very
powerful tool for developing CGI applications on a PC, and it is very easy to learn. However, Visual Basic lacks powerful string manipulation operators.

CGI Considerations
Now that we have decided on a language for CGI programming, let’s look at some considerations that need to be taken into account to create effective virtual documents. First and most importantly, you need to understand what kind of information is to be presented. If it is plain text or HTML, there is no problem. However, if the data is unreadable by the client, a gateway has to be written to effectively translate that data. This leads to another important matter: the original (or “unreadable”) data has to be organized in such a way that it will be easy for the gateway to read from and write to the data source.

Once you have the gateway and you can retrieve data, you can present it in numerous ways. For example, if the data is numerical in nature, you can create virtual graphs and plots using various utility software. On the other hand, if the data consists of graphical objects, you can modify the information using numerous graphic manipulation tools. In summary, you need to think about what you want to present and how to present it long before the actual process of implementing CGI programs. This will ensure the creation of effective virtual documents.
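For comparison with the Perl virtual-document script shown earlier, the same logic can be sketched in Python. This is an illustrative sketch only: the REMOTE_HOST value and the uptime output are simulated here so the code runs anywhere, whereas a real CGI server would set REMOTE_HOST itself and the script would run /usr/ucb/uptime:

```python
import os
import re

# Simulate the CGI environment and the `uptime` command output
# (illustrative values; a real server supplies REMOTE_HOST).
os.environ["REMOTE_HOST"] = "client.example.com"
uptime = " 10:15am  up 3 days,  4 users,  load average: 0.25, 0.30, 0.28"

# Partial HTTP header: the Content-type line plus the required
# blank line separating header from body.
print("Content-type: text/plain\n")
print("Welcome to Shishir's WWW Server!")

# The CGI input arrives through environment variables.
remote_host = os.environ.get("REMOTE_HOST", "unknown")
print("You are visiting from", remote_host + ".")

# Same pattern as the Perl regexp /average: ([^,]*)/ -- capture
# everything between "average: " and the next comma.
match = re.search(r"average: ([^,]*)", uptime)
load_average = match.group(1) if match else "unknown"
print("The load average on this machine is:", load_average + ".")
print("Happy navigating!")
```

The regular expression behaves exactly as described for the Perl version: the character class [^,]* stops at the first comma, so only the first of the three load averages is captured.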

HYPERTEXT TRANSFER PROTOCOL SECURE (HTTPS)
Hypertext Transfer Protocol Secure (HTTPS) is a combination of the Hypertext Transfer Protocol with the SSL/TLS protocol to provide encrypted communication and secure identification of a network web server. HTTPS connections are often used for payment transactions on the World Wide Web and for sensitive transactions in corporate information systems. HTTPS should not be confused with Secure HTTP (S-HTTP) specified in RFC 2660.

Main idea
The main idea of HTTPS is to create a secure channel over an insecure network. This ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. The trust inherent in HTTPS is based on major certificate authorities which come pre-installed in browser software (this is equivalent to saying "I trust certificate authority (e.g. VeriSign/Microsoft/etc.) to tell me whom I should trust"). Therefore an HTTPS connection to a website can be trusted if and only if all of the following are true:
1. The user trusts that their browser software correctly implements HTTPS with correctly pre-installed certificate authorities.
2. The user trusts the certificate authority to vouch only for legitimate websites without misleading names.
3. The website provides a valid certificate (an invalid certificate shows a warning in most browsers), which means it was signed by a trusted authority.
4. The certificate correctly identifies the website (e.g. visiting https://example and receiving a certificate for "Example Inc." and not anything else).
5. Either the intervening hops on the Internet are trustworthy, or the user trusts that the protocol's encryption layer (TLS or SSL) is unbreakable by an eavesdropper.
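Conditions 3 and 4 above are exactly what a TLS client library enforces by default. As an illustrative sketch (not part of the original notes), Python's standard ssl module shows these verification settings; no network connection is made:

```python
import ssl

# create_default_context() loads the system's pre-installed CA
# certificates, mirroring the browser trust model described above.
context = ssl.create_default_context()

# Hostname checking ensures the certificate identifies the site
# actually being visited (condition 4).
print(context.check_hostname)

# CERT_REQUIRED rejects any server that presents no certificate
# or one not signed by a trusted authority (condition 3).
print(context.verify_mode == ssl.CERT_REQUIRED)
```

Both values print True: a client built on this context refuses to complete the handshake unless the server's certificate chain verifies against the pre-installed authorities and matches the requested hostname.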
Browser integration
When connecting to a site with an invalid certificate, older browsers would present the user with a dialog box asking if they wanted to continue. Newer browsers display a warning across the entire window, and also prominently display the site's security information in the address bar. Extended validation certificates turn the address bar green in newer browsers. Most browsers also display a warning to the user when visiting a site that contains a mixture of encrypted and unencrypted content.

Many web browsers, including Firefox, use the address bar to tell the user that their connection is secure, often by coloring the background. Most web browsers also alert the user when visiting sites that have invalid security certificates.

The Electronic Frontier Foundation, opining that "[i]n an ideal world, every web request could be defaulted to HTTPS", has provided an add-on for the Firefox browser that does so for several frequently used websites.[1][2]

Difference from HTTP
As opposed to HTTP URLs, which begin with "http://" and use port 80 by default, HTTPS URLs begin with "https://" and use port 443 by default. HTTP is unsecured and is subject to man-in-the-middle and eavesdropping attacks, which can let attackers gain access to website accounts and sensitive information. HTTPS is designed to withstand such attacks and is considered secure against them (with the exception of older, deprecated versions of SSL).

Network layers
HTTP operates at the highest layer of the OSI Model, the Application layer, but the security protocol operates at a lower sublayer, encrypting an HTTP message prior to transmission and decrypting a message upon arrival. Strictly speaking, HTTPS is not a separate protocol, but refers to the use of ordinary HTTP over an encrypted Secure Sockets Layer (SSL) or Transport Layer Security (TLS) connection.

Server setup
To prepare a web server to accept HTTPS connections, the administrator must create a public key certificate for the web server. This certificate must be signed by a trusted certificate authority for the web browser to accept it. The authority certifies that the
certificate holder is indeed the entity it claims to be. Web browsers are generally distributed with the signing certificates of major certificate authorities so that they can verify certificates signed by them.

Acquiring certificates
Authoritatively signed certificates may be free or cost between US$13 and $1,500 per year. Organizations may also run their own certificate authority, particularly if they are responsible for setting up browsers to access their own sites (for example, sites on a company intranet, or major universities). They can easily add copies of their own signing certificate to the trusted certificates distributed with the browser. There also exists a peer-to-peer certificate authority, CAcert.

Use as access control
The system can also be used for client authentication, in order to limit access to a web server to authorized users. To do this, the site administrator typically creates a certificate for each user, which is loaded into his or her browser. The certificate normally contains the name and e-mail address of the authorized user and is automatically checked by the server on each reconnect to verify the user's identity, potentially without the user even entering a password.

In case of compromised private key
A certificate may be revoked before it expires, for example because the secrecy of the private key has been compromised. Newer versions of popular browsers such as Firefox, Opera, and Internet Explorer on Windows Vista implement the Online Certificate Status Protocol (OCSP) to verify that this is not the case. The browser sends the certificate's serial number to the certificate authority or its delegate via OCSP, and the authority responds, telling the browser whether or not the certificate is still valid.[10]

Limitations
SSL comes in two options, simple and mutual. The mutual flavor is more secure but requires the user to install a personal certificate in their browser in order to authenticate them.
Whatever strategy is used (simple or mutual), the level of protection strongly depends on the correctness of the implementation of the web browser and the server software, and on the actual cryptographic algorithms supported (see the conditions listed under Main idea above).

SSL does not prevent an entire site from being indexed by a web crawler, and in some cases the URI of an encrypted resource can be inferred by knowing only the intercepted request/response size.[11] This allows an attacker to have access to both the plaintext (the publicly available static content) and the encrypted text (the encrypted version of the same content), permitting a cryptographic attack.

Because SSL operates below HTTP and has no knowledge of higher-level protocols, SSL servers can strictly present only one certificate for a particular IP/port combination.[12]
This means that, in most cases, it is not feasible to use name-based virtual hosting with HTTPS. A solution called Server Name Indication (SNI) exists, which sends the hostname to the server before encrypting the connection, although many older browsers do not support this extension. Support for SNI is available since Firefox 2, Opera 8, and Internet Explorer 7 on Windows Vista.[13][14][15] If parental controls are enabled on Mac OS X, HTTPS sites must be explicitly allowed using the Always Allow list.[16]

From an architectural point of view:

1. An SSL connection is managed by the first front machine which initiates the SSL connection. If, for any reason (routing, traffic optimization, etc.), this front machine is not the application server and has to decipher data, solutions have to be found to propagate user authentication information or the certificate to the application server, which needs to know who is going to be connected.
2. For SSL with mutual authentication, the SSL session is managed by the first server which initiates the connection. In situations where encryption has to be propagated along chained servers, session timeout management becomes extremely tricky to implement.
3. With mutual SSL, security is maximal, but on the client side there is no way to properly end the SSL connection and disconnect the user, except by waiting for the SSL server session to expire or by closing all related client applications.
4. For performance reasons, static content is usually delivered through a non-encrypted front server or a separate server instance with no SSL. As a consequence, this content is usually not protected.

INTRODUCTION TO SSL
SSL (Secure Socket Layer) is a protocol layer that exists between the Network Layer and the Application Layer. As the name suggests, SSL provides a mechanism for encrypting all kinds of traffic: LDAP, POP, IMAP and, most importantly, HTTP. The following is an over-simplified structure of the layers involved in SSL.

+-------------------------------------------+
|   LDAP    |   HTTP    |   POP   |  IMAP   |
+-------------------------------------------+
|                    SSL                    |
+-------------------------------------------+
|               Network Layer               |
+-------------------------------------------+

HYPERTEXT TRANSFER PROTOCOL
The Hypertext Transfer Protocol (HTTP) is a networking protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web.

HTTP functions as a request-response protocol in the client-server computing model. In HTTP, a web browser, for example, acts as a client, while an application running on a computer hosting a web site functions as a server. The client submits an HTTP request message to the server. The server (which stores content, provides resources such as HTML files and images, generates such content on the fly, or performs other functions on behalf of the client) returns a response message to the client. A response contains completion status information about the request and may contain any content requested by the client in its message body.

A client is often referred to as a user agent (UA). A web crawler (spider) is another example of a common type of client or user agent.

The HTTP protocol is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of the original, so-called origin server to improve response time. HTTP proxy servers at network boundaries facilitate communication when clients without a globally routable address are located in private networks, by relaying the requests and responses between clients and servers.

HTTP is an Application Layer protocol designed within the framework of the Internet Protocol Suite. The protocol definitions presume a reliable Transport Layer protocol for host-to-host data transfer.[2] The Transmission Control Protocol (TCP) is the dominant protocol in use for this purpose. However, HTTP has found application even with unreliable protocols, such as the User Datagram Protocol (UDP) in methods such as the Simple Service Discovery Protocol (SSDP).
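The request-response exchange described above is plain text on the wire. As an illustrative sketch (the host name and body are made up for the example), the following Python snippet builds a minimal request message and parses the status line of a response:

```python
# A minimal HTTP/1.1 request message; the blank line (\r\n\r\n)
# separates the headers from the (empty) request body.
request = (
    "GET /index.html HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "\r\n"
)

# A response the server might send back (illustrative).
response = (
    "HTTP/1.1 200 OK\r\n"
    "Content-Type: text/html\r\n"
    "\r\n"
    "<html><body>Hello</body></html>"
)

# The status line carries the completion status of the request.
head, _, body = response.partition("\r\n\r\n")
status_line = head.split("\r\n")[0]
version, status_code, reason = status_line.split(" ", 2)
print(status_code, reason)
```

Running this prints "200 OK", the completion status the client would report; the body after the blank line is the requested content.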
HTTP resources are identified and located on the network by Uniform Resource Identifiers (URIs), or, more specifically, Uniform Resource Locators (URLs), using the http or https URI schemes. URIs and the Hypertext Markup Language (HTML) form a system of inter-linked resources, called hypertext documents, on the Internet, which led to the establishment of the World Wide Web in 1990 by English physicist Tim Berners-Lee.

The original version of HTTP (HTTP/1.0) was revised in HTTP/1.1. HTTP/1.0 uses a separate connection to the same server for every request-response transaction, while HTTP/1.1 can reuse a connection multiple times, to download, for instance, images for a just-delivered page. Hence HTTP/1.1 communications experience less latency, as the establishment of TCP connections presents considerable overhead.

The standards development of HTTP has been coordinated by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), culminating in the publication of a series of Requests for Comments (RFCs), most notably RFC 2616 (June 1999), which defines HTTP/1.1, the version of HTTP in common use.
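The http and https URI schemes mentioned above imply different default ports (80 and 443, as noted earlier). A sketch using Python's standard URL parser, with illustrative URLs, shows how a client decides which port to connect to:

```python
from urllib.parse import urlsplit

DEFAULT_PORTS = {"http": 80, "https": 443}

def effective_port(url):
    """Return the port a client would connect to: the explicit
    port if the URL gives one, else the scheme's default."""
    parts = urlsplit(url)
    return parts.port if parts.port is not None else DEFAULT_PORTS[parts.scheme]

print(effective_port("http://example.com/index.html"))    # 80
print(effective_port("https://example.com/login"))        # 443
print(effective_port("https://example.com:8443/admin"))   # 8443
```

As the last line shows, an explicit port in the URL always overrides the scheme's default.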

