Templates

Published: December 2016

Function templates

Function templates are special functions that can operate with generic types. This allows us to create a function template whose functionality can be adapted to more than one type or class without repeating the entire code for each type.

In C++ this can be achieved using template parameters. A template parameter is a special kind of parameter that can be used to pass a type as an argument: just like regular function parameters can be used to pass values to a function, template parameters allow us to pass types to a function as well. These function templates can use those parameters as if they were any other regular type.

The format for declaring function templates with type parameters is:

template <class identifier> function_declaration;
template <typename identifier> function_declaration;

The only difference between the two prototypes is the use of either the keyword class or the keyword typename. The choice is indistinct, since both expressions have exactly the same meaning and behave exactly the same way.

For example, to create a template function that returns the greater of two objects we could use:

template <class myType>
myType GetMax (myType a, myType b) {
  return (a>b?a:b);
}

Here we have created a template function with myType as its template parameter. This template parameter represents a type that has not yet been specified, but that can be used in the template function as if it were a regular type. As you can see, the function template GetMax returns the greater of two parameters of this still-undefined type. To use this function template, we use the following format for the function call:

function_name <type> (parameters);

For example, to call GetMax to compare two integer values of type int we can write:

int x, y;
GetMax<int>(x, y);

When the compiler encounters this call to a template function, it uses the template to automatically generate a function, replacing each appearance of myType with the type passed as the actual template parameter (int in this case), and then calls it. This process is performed automatically by the compiler and is invisible to the programmer. Here is the entire example:

// function template
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
  T result;
  result = (a>b)? a : b;
  return (result);
}

int main () {
  int i=5, j=6, k;
  long l=10, m=5, n;
  k=GetMax<int>(i,j);
  n=GetMax<long>(l,m);
  cout << k << endl;
  cout << n << endl;
  return 0;
}

Output:
6
10


In this case, we have used T as the template parameter name instead of myType because it is shorter and, in fact, a very common template parameter name. You can use any identifier you like, however.

In the example above we used the function template GetMax() twice: the first time with arguments of type int and the second time with arguments of type long. The compiler has instantiated and then called the appropriate version of the function each time.

As you can see, the type T is used within the GetMax() template function even to declare new objects of that type:

T result;

Therefore, result will be an object of the same type as the parameters a and b when the function template is instantiated with a specific type.

In this specific case where the generic type T is used as a parameter for GetMax, the compiler can find out automatically which data type to instantiate, without us having to specify it explicitly within angle brackets (as we did before with <int> and <long>). So we could have written instead:

int i, j;
GetMax (i, j);

Since both i and j are of type int, the compiler can automatically determine that the template parameter can only be int. This implicit method produces exactly the same result:

// function template II
#include <iostream>
using namespace std;

template <class T>
T GetMax (T a, T b) {
  return (a>b?a:b);
}

int main () {
  int i=5, j=6, k;
  long l=10, m=5, n;
  k=GetMax(i,j);
  n=GetMax(l,m);
  cout << k << endl;
  cout << n << endl;
  return 0;
}

Output:
6
10

Notice how in this case we called our function template GetMax() without explicitly specifying the type between angle brackets <>. The compiler automatically determines what type is needed on each call. Because our template function includes only one template parameter (class T), and the function template itself accepts two parameters, both of this T type, we cannot call our function template with two objects of different types as arguments:

int i;
long l;
k = GetMax (i,l);

This would not be correct, since our GetMax function template expects two arguments of the same type, and in this call we pass objects of two different types. We can also define function templates that accept more than one type parameter, simply by specifying more template parameters between the angle brackets. For example:

template <class T, class U>
T GetMin (T a, U b) {
  return (a<b?a:b);
}

In this case, our function template GetMin() accepts two parameters of different types and returns an object of the same type as the first parameter (T). For example, after that declaration we could call GetMin() with:

int i, j;
long l;
i = GetMin<int,long> (j,l);

or simply:

i = GetMin (j,l);

even though j and l have different types, since the compiler can determine the appropriate instantiation anyway.

Class templates

We also have the possibility to write class templates, so that a class can have members that use template parameters as types. For example:

template <class T>
class mypair {
    T values [2];
  public:
    mypair (T first, T second) {
      values[0]=first;
      values[1]=second;
    }
};

The class that we have just defined serves to store two elements of any valid type. For example, if we wanted to declare an object of this class to store two integer values of type int with the values 115 and 36, we would write:

mypair<int> myobject (115, 36);

This same class could also be used to create an object storing any other type:

mypair<double> myfloats (3.0, 2.18);

The only member function in the previous class template has been defined inline, within the class declaration itself. If we define a member function outside the declaration of the class template, we must always precede that definition with the template <...> prefix:

// class templates
#include <iostream>
using namespace std;

template <class T>
class mypair {
    T a, b;
  public:
    mypair (T first, T second) {a=first; b=second;}
    T getmax ();
};

template <class T>
T mypair<T>::getmax () {
  T retval;
  retval = a>b? a : b;
  return retval;
}

int main () {
  mypair <int> myobject (100, 75);
  cout << myobject.getmax();
  return 0;
}

Output:
100


Notice the syntax of the definition of member function getmax:

template <class T>
T mypair<T>::getmax ()

Confused by so many T's? There are three T's in this declaration: the first one is the template parameter. The second T refers to the type returned by the function. And the third T (the one between angle brackets) is also a requirement: it specifies that this function's template parameter is also the class template parameter.

Template specialization

If we want to define a different implementation for a template when a specific type is passed as template parameter, we can declare a specialization of that template. For example, let's suppose that we have a very simple class called mycontainer that can store one element of any type and that has just one member function, called increase, which increases its value. But we find that when it stores an element of type char it would be more convenient to have a completely different implementation, with a member function uppercase, so we decide to declare a class template specialization for that type:

// template specialization
#include <iostream>
using namespace std;

// class template:
template <class T>
class mycontainer {
    T element;
  public:
    mycontainer (T arg) {element=arg;}
    T increase () {return ++element;}
};

// class template specialization:
template <>
class mycontainer <char> {
    char element;
  public:
    mycontainer (char arg) {element=arg;}
    char uppercase () {
      if ((element>='a')&&(element<='z'))
        element+='A'-'a';
      return element;
    }
};

int main () {
  mycontainer<int> myint (7);
  mycontainer<char> mychar ('j');
  cout << myint.increase() << endl;
  cout << mychar.uppercase() << endl;
  return 0;
}

Output:
8
J

This is the syntax used in the class template specialization:

template <> class mycontainer <char> { ... };

First of all, notice that we precede the class template name with an empty template<> parameter list. This explicitly declares it as a template specialization. But more important than this prefix is the <char> specialization parameter after the class template name. This specialization parameter identifies the type for which the template class specialization is declared (char). Notice the differences between the generic class template and the specialization:

template <class T> class mycontainer { ... };
template <> class mycontainer <char> { ... };

The first line is the generic template, and the second one is the specialization. When we declare specializations for a template class, we must also define all its members, even those exactly equal to the generic template class, because there is no "inheritance" of members from the generic template to the specialization.

Non-type parameters for templates

Besides the template arguments that are preceded by the class or typename keywords, which represent types, templates can also have regular typed parameters, similar to those found in functions. As an example, have a look at this class template that is used to contain sequences of elements:

// sequence template
#include <iostream>
using namespace std;

template <class T, int N>
class mysequence {
    T memblock [N];
  public:
    void setmember (int x, T value);
    T getmember (int x);
};

template <class T, int N>
void mysequence<T,N>::setmember (int x, T value) {
  memblock[x]=value;
}

template <class T, int N>
T mysequence<T,N>::getmember (int x) {
  return memblock[x];
}

int main () {
  mysequence <int,5> myints;
  mysequence <double,5> myfloats;
  myints.setmember (0,100);
  myfloats.setmember (3,3.1416);
  cout << myints.getmember(0) << '\n';
  cout << myfloats.getmember(3) << '\n';
  return 0;
}

Output:
100
3.1416

It is also possible to set default values or types for class template parameters. For example, if the previous class template definition had been:

template <class T=char, int N=10> class mysequence {..};

we could create objects using the default template parameters by declaring:

mysequence<> myseq;

which would be equivalent to:

mysequence<char,10> myseq;

Templates and multiple-file projects

From the point of view of the compiler, templates are not normal functions or classes. They are compiled on demand, meaning that the code of a template function is not compiled until an instantiation with specific template arguments is required. At that moment, when an instantiation is required, the compiler generates a function specifically for those arguments from the template.

When projects grow, it is usual to split the code of a program into different source code files. In these cases, the interface and implementation are generally separated. Taking a library of functions as an example, the interface generally consists of declarations of the prototypes of all the functions that can be called. These are generally declared in a "header file" with a .h extension, and the implementation (the definition of these functions) is in an independent file with C++ code.

Because templates are compiled when required, this imposes a restriction on multi-file projects: the implementation (definition) of a template class or function must be in the same file as its declaration. That means we cannot put the interface in a separate header file by itself; we must include both interface and implementation in any file that uses the templates. Since no code is generated until a template is instantiated when required, compilers are prepared to allow the same template file, with both declarations and definitions, to be included more than once in a project without generating linkage errors.

Imagine that you are hired by company XYZ to organize all of their records into a computer database. The first thing you are asked to do is create a database of names with all the company's management and employees. To start your work, you make a list of everyone in the company along with their position.
Name       Position
Aaron      Manager
Charles    VP
George     Employee
Jack       Employee
Janet      VP
John       President
Kim        Manager
Larry      Manager
Martha     Employee
Patricia   Employee
Rick       Secretary
Sarah      VP
Susan      Manager
Thomas     Employee
Zack       Employee

But this list only shows one view of the company. You also want your database to represent the relationships between management and employees at XYZ. Although your list contains both name and position, it does not tell you which managers are responsible for which workers and so on. After thinking about the problem for a while, you decide that a tree diagram is a much better structure for showing the work relationships at XYZ.

These two diagrams are examples of different data structures. In one of the data structures, your data is organized into a list. This is very useful for keeping the names of the employees in alphabetical order so that we can locate an employee's record very quickly. However, this structure is not very useful for showing the relationships between employees. A tree structure is much better suited for this purpose.

In computer science, data structures are an important way of organizing information in a computer. Just like the diagrams above illustrate, there are many different data structures that programmers use to organize data in computers. Some data structures are similar to the tree diagram because they are good for representing relationships between data. Other structures are good for ordering data in a particular way, like the list of employees. Each data structure has unique properties that make it well suited to give a certain view of the data.

During these lessons, you will learn how data structures are created inside a computer. You will find there is quite a difference between your mental picture of a data structure and the actual way a computer stores a data structure in memory. You will also discover that there are many different ways of creating the same data structure in a computer. These various approaches are tradeoffs that programmers must consider when writing software. Finally, you will see that each data structure has certain operations that naturally fit with the data structure. Often these operations are bundled with the data structure, and together they are called a data type. By the end of this study, you should be able to do the following:

- Show how data structures are represented in the computer,
- Identify linear and nonlinear data structures,
- Manipulate data structures with basic operations, and
- Compare different implementations of the same data structure.

To understand how a computer can represent large data structures like our tree diagram, we first need to understand some basic facts about computer memory. Every piece of data that is stored in a computer is kept in a memory cell with a specific address. We can think of these memory cells as being a long row of boxes where each box is labeled with an address. If you have ever used a computer spreadsheet before, you know that spreadsheets also have labeled boxes that can hold data. Computer memory is similar to this, with the exception that computer memory is linear. That's why we think of computer memory as being organized in a row rather than a grid like a spreadsheet.

The computer can store many different types of data in its memory. You have already learned about some of the basic types of data the computer uses. These include integers, real numbers, and characters. Once the computer stores data in the memory cells, it can access the data by using the address of the data cells. For example, consider the following instructions for adding two integers together:

1. Store '1' in cell 2003.
2. Store '5' in cell 2004.
3. Add cells 2003 and 2004 and store the result in cell 2006.

Notice how the computer performs operations by referring to the address of the memory cells. These addresses are a very important component in creating various data structures in computer memory. For example, suppose we want a data structure that can store a group of characters as a word. In many computer languages, this data structure is called a string. If we store the characters for the string 'apple' in the computer's memory, it might look something like this: one character per cell, in adjacent memory cells.

In order for the computer to recognize that 'apple' is a string, it must have some way of identifying the start and end of the characters stored in memory. This is why the addresses of the memory cells are important. By using the addresses to refer to a group of memory cells as a string, the computer can store many strings in a row to create a list. This is one way that we could create a data structure to represent our list of employees at company XYZ.

But what happens when we try to represent our tree diagram of company XYZ? It doesn't make sense to store the names one after the other, because the tree is not linear. Now we have a problem. We want to represent a nonlinear data structure using computer memory that is linear. In order to do this, we are going to need some way of mapping nonlinear structures like trees or spreadsheet tables onto linear computer memory.

In our last lesson, we discovered a problem with representing data structures that are not linear. We need some way to map these data structures to the computer's linear memory. One solution is to use pointers. Pointers are memory addresses that are stored in memory cells. By using a pointer, one memory cell can "point" to another memory cell by holding a memory address rather than data. Let's see how it works.

In the diagram above, the memory cell at address 2003 contains a pointer, an address of another cell. In this case, the pointer is pointing to the memory cell 2005 which contains the letter 'c'. This means that we now have two ways of accessing the letter 'c' as stored data.

We can refer to the memory cell which contains 'c' directly or we can use our pointer to refer to it indirectly. The process of accessing data through pointers is known as indirection. We can also create multiple levels of indirection using pointers. The diagram below shows an example of double indirection. Notice that we must follow two pointers this time to reach the stored data.

As you can see, pointers can become very complex and difficult to use with many levels of indirection. In fact, when used incorrectly, pointers can make data structures very difficult to understand. Whenever we use pointers in constructing data structures, we have to consider the tradeoff between complexity and flexibility. We will consider some examples of this tradeoff in the next few lessons.

The idea of pointers and indirection is not exclusive to computer memory. Pointers appear in many different aspects of computer use. A good example is hyperlinks in web pages. These links are really pointers to another web page. Perhaps you have even experienced "double indirection" when you went to visit a familiar web site and found the site had moved. Instead of the page you expected, you saw a notice that the web pages had been moved, along with a link to the new site. Rather than clicking a single link, you had to follow two links, or two pointers, to reach the web page.

In our previous lesson, we saw that it is very simple to create data structures that are organized similarly to the way the computer's memory is organized. For example, the list of employees from the XYZ company is a linear data structure. Since the computer's memory is also linear, it is very easy to see how we can represent this list with the computer. Any data structure which organizes the data elements one after the other is a linear data structure. So far we have seen two examples of linear data structures: the string data structure (a list of characters) and the XYZ company list (a list of strings).

Example: a string

Example: a list

You may have noticed that these two examples of linear data structures resemble each other. This is because they are both really different kinds of lists. In general, all linear data structures look like a list. However, this does not mean that all linear data structures are exactly the same. Suppose I want to design a list to store the names of the XYZ employees in the computer. One possible design is to organize the names similar to the example picture above. Another possible design is to use the pointers we learned about in the last lesson. While these two designs provide the same functionality (i.e. a list that can hold names), the way they are implemented in the computer is much different. This means that there is an abstract view of a list which is distinct from any particular computer implementation. We will return to this idea of an abstract view of a data structure in the next few lessons.
You may have also noticed that the example picture of the XYZ employees is not exactly the same as the original list. Take another look at the employee list to the right. When we make a list of names, we tend to organize this list in a column rather than a row. In this case, the conceptual or logical representation of a list is a column of names. However, the physical representation of the list in the computer's memory is a row of strings. For most data structures, the way that we think about them is far different from the way they are implemented in the computer. In other words, the physical representation is much different from the logical representation, especially in data structures that use pointers.
Name
Aaron
Charles
George
Jack
Janet
John
Kim
Larry
Martha
Patricia
Rick
Sarah
Susan
Thomas
Zack

During the next few lessons, we will examine several different linear data structures with a focus on the following ideas:

- The abstract view versus the implementation
- The logical representation versus the physical representation
- Comparison of various implementations

The most common linear data structure is the list. By now you are already pretty familiar with the idea of a list and at least one way of representing a list in the computer. Now we are going to look at a particular kind of list: an ordered list. Ordered lists are very similar to the alphabetical list of employee names for the XYZ company. These lists keep items in a specific order, such as alphabetical or numerical order. Whenever an item is added to the list, it is placed in the correct sorted position so that the entire list is always sorted. Before we consider how to implement such a list, we need to consider the abstract view of an ordered list.

Since the idea of an abstract view of a list may be a little confusing, let's think about a more familiar example. Consider the abstract view of a television. Regardless of who makes a television, we all expect certain basic things, like the ability to change channels and adjust the volume. As long as these operations are available and the TV displays the shows we want to view, we really don't care about who made the TV or how they chose to construct it. The circuitry inside the TV set may be very different from one brand to the next, but the functionality remains the same. Similarly, when we consider the abstract view of an ordered list, we don't worry about the details of implementation. We are only concerned with what the list does, not how it does it.

Suppose we want a list that can hold the following group of sorted numbers: [2 4 6 7]. What are some things that we might want to do with our list?
Well, since our list is in order, we will need some way of adding numbers to the list in the proper place, and we will need some way of deleting numbers we don't want from the list. To represent these operations, we will use the following notation:

AddListItem(List, Item)
RemoveListItem(List, Item)

Each operation has a name and a list of parameters the operation needs. The parameter list for the AddListItem operation includes a list (the list we want to add to) and an item (the item we want to add). The RemoveListItem operation is very similar, except this time we specify the item we want to remove. These operations are part of the abstract view of an ordered list. They are what we expect from any ordered list regardless of how it is implemented in the computer.

In this lesson, we are going to look at two different ways of creating an ordered list data structure to hold the following list: [2 4 6 7]. First, we will create a list using an array of memory cells. Next, we will create the same list using pointers. Finally, we will compare these two approaches to see the advantages and disadvantages.

Array Implementation

One approach to creating a list is simply to reserve a block of adjacent memory cells large enough to hold the entire list. Such a block of memory is called an array. Of course, since we will want to add items to our list, we need to reserve more than just four memory cells. For now, we will make our array large enough to hold six numbers. The animation below shows a graphical representation of our array in memory with the list numbers. Follow the directions in the animation to learn how the list operations AddListItem and RemoveListItem work.

In the animation, you saw that there were two disadvantages to using an array to implement an ordered list. First, you saw that the elements in the list must be kept in sequence; that is, there must not be gaps in the list. If gaps are allowed, the computer will not be able to determine which items are part of the list and which items are not. For this reason, ordered list structures that are implemented with arrays are known as sequential lists. The second disadvantage you saw was that arrays have a fixed size and therefore limit the number of items the list can contain.
Of course, we could try to increase the size of the array, but it may not always be the case that the adjacent memory cells in the computer are available. They could be in use by some other program. However, it is quite likely that the computer does have available memory at some other, non-adjacent location. To take advantage of this memory, we need to design our list so that the list items do not have to be adjacent.

Pointer Implementation

A second approach to creating a list is to link groups of memory cells together using pointers. Each group of memory cells is called a node. With this implementation, every node contains a data item and a pointer to the next item in the list. You can picture this structure as a chain of nodes linked together by pointers. As long as we know where the chain begins, we can follow the links to reach any item in the list. Often this structure is called a linked list.

Notice that the last memory cell in our chain contains a symbol called "Null". This symbol is a special value that tells us we have reached the end of our list. You can think of this symbol as a pointer that points to nothing. Since we are using pointers to implement our list, the list operations AddListItem and RemoveListItem will work differently than they did for sequential lists. The animation below shows how these operations work and how they provide a solution for the two problems we had with arrays.

As we did with the ordered list, we are going to look at two implementations of a stack. The first implementation uses an array to create the stack data structure, and the second implementation uses pointers.

Array Implementation

In order to implement a stack using an array, we need to reserve a block of memory cells large enough to hold all the items we want to put on the stack. The picture below shows an array of six memory cells that represent our stack. Notice that we have one other memory cell, called a stack pointer, that holds the location of the top of our stack. As the stack grows and shrinks, this pointer is updated so that it always points to the top item of the stack.

Notice that our array implementation retains one of the problems we saw with the array implementation of an ordered list. Since our array is a fixed size, our stack can only grow to a certain size. Once our stack is full, we will have to use the PopStackItem operation before we can push any more items onto the stack. To make the size of our stack more flexible, we can use pointers to implement the stack.

Pointer Implementation

In order to implement a stack using pointers, we need to link nodes (groups of memory cells) together, just like we did for the pointer implementation of a list. Each node contains a stack item and a pointer to the next node. We also need a special pointer to keep track of the top of our stack.

Notice that the stack operations can get a little tricky when we use pointers. To push an item onto the stack, we need to find a free memory location, set the pointer of the new location to the top of the stack, and finally set the stack pointer to the new location. The order of these operations is very important. If we set the stack pointer to the location of the new memory first, we will lose the location of the top of our stack. This example shows the same tradeoff that we saw earlier with the ordered list implementations. While the array implementation is simpler, the added complexity of the pointer implementation gives us a more flexible stack. The final linear data structure that we will examine is the queue. Like the stack, the queue is a type of restricted list. However, instead of restricting all the operations to one end of the list as a stack does, the queue allows items to be added at one end of the list and removed at the other end. The animation below should give you a good idea of the abstract view of a queue. Follow the directions to manipulate a simple queue and learn about the operations that a queue provides.

The restrictions placed on a queue cause this structure to be a "first-in, first-out" or FIFO structure. This idea is similar to customer lines at a grocery store. When customer X is ready to check out, he or she enters the tail of the waiting line. When the preceding customers have paid, then customer X pays and exits the head of the line. The check-out line is really a queue that enforces a "first come, first served" policy. Now let's take another look at the operations that can be performed on a queue. We will represent these two operations with the following notation:

Item EnqueueItem(Queue, Item)
Item DequeueItem(Queue)

These two operations are very similar to the operations we learned for the stack data structure. Although the names are different, the logic of the parameters is the same. The EnqueueItem operation takes the Item parameter and adds it to the tail of Queue. The DequeueItem operation removes the head item of Queue and returns this as Item. Notice that we represent the returned item with a keyword located to the left of the operation name. These two operations are part of the abstract view of a queue. Regardless of how we choose to implement our queue on the computer, the queue must support these two operations. When we looked at the ordered list and stack data structures, we saw two different ways to implement each one. Although the implementations were different, the data structure was still the same from the abstract point of view. We could still use the same operations on the data structures regardless of their implementations. With the queue, it is also possible to have various implementations that support the operations EnqueueItem and DequeueItem. However, in this lesson, we are only going to focus on one implementation in order to highlight another distinction: the distinction between the logical representation of a queue and the physical representation of a queue.
Remember that the logical representation is the way that we think of the data being stored in the computer. The physical representation is the way the data is actually organized in the memory cells. To implement our queue, we will use an array of eight memory cells and two pointers to keep track of the head and tail of the queue. The diagram below shows a snapshot of a queue in the computer's memory. The queue currently contains five letter items with 'L' at the head of the queue and 'O' at the tail of the queue.

Now let's consider how the EnqueueItem and DequeueItem operations might be implemented. To enqueue letters into the queue, we could advance the tail pointer one location and add the new letter. To dequeue letters, we could remove the head letter and advance the head pointer one location. While this approach seems very straightforward, it has a serious problem. As items are added and removed, our queue will march straight through the computer's entire memory. We have not limited the size of our queue. Perhaps we could limit the size of the queue by not allowing the tail pointer to advance beyond a certain location. This implementation would stop the queue from traversing the entire memory, but it would only allow us to fill the queue one time. Once the head and tail pointers reached the stop location, our queue would no longer work. What we really need is a way to make our array circular. Of course, we know that the computer's memory is linear, so we can't change the physical representation of the data. However, we can implement our operations in such a way that our queue acts as if it were a ring or a circle. In other words, we are going to create a logical representation that is different from the physical representation in memory. The applet below shows these two representations along with the abstract view of the queue. Click the button below to start the applet. The applet will open in a new window along with instructions for using the queue.

Another common nonlinear data structure is the tree. We have already seen an example of a tree when we looked at the employee hierarchy from the XYZ company. Let's take another look at this diagram with some of the important features of trees highlighted.

In this diagram, we can see that the starting point, or the root node, is circled in blue. A node is a simple structure that holds data and links to other nodes. In this case, our root node contains the data string "John" and three links to other nodes. Notice that the group of nodes circled in red do not have any links. These nodes are at the ends of the branches, and they are appropriately called leaves or leaf nodes. In our diagram, the nodes are connected with solid black lines called arcs or edges. These edges show the relationships between nodes in the tree. One important relationship is the parent/child relationship. Parent nodes have at least one edge to a node lower in the tree. This node is called the child node. Nodes can have more than one child, but children can only have a single parent. Notice that the root node has no parent, and the leaf nodes have no children. The final feature to note in our diagram is the subtree. At each level of the tree, we can see that the tree structure is repeated. For example, the two nodes representing "Charles" and "Rick" compose a very simple tree with "Charles" as the root node and "Rick" as a single leaf node. Now let's examine one way that trees are implemented in the computer's memory. We will begin by introducing a simple tree structure called a binary tree. Binary trees have the restriction that nodes can have no more than two children. With this restriction, we can easily determine how to represent a single binary node in memory. Our node will need to reserve memory for data and two pointers.

Using our binary node, we can construct a binary tree. In the data cell of each node, we will store a letter. The physical representation of our tree might look something like this:

Although the diagram above represents a tree, it doesn't look much like the tree we examined from the XYZ company. Because our tree uses pointers, the physical representation is much different from the logical representation. Starting with the root node of the binary tree (the node that contains 'H'), see if you can draw a sketch of the logical representation of this tree. Once you are finished, you may view the answer.

Consider the following three examples. What do they all have in common?

Chocolate Cream Pie
1. Heat milk, marshmallows and chocolate in 3-quart saucepan over low heat, stirring constantly, until chocolate and marshmallows are melted and blended. Refrigerate about 20 minutes, stirring occasionally until mixture mounds slightly when dropped from a spoon.
2. Beat whipping cream in chilled small bowl with electric mixer on high speed until soft peaks form. Fold chocolate mixture into whipped cream. Pour into pie shell. Refrigerate uncovered about 8 hours or until set. Garnish with milk chocolate curls and whipped cream.

Directions to John's House
From the Quik Mart, you should follow Saddle road for four miles until you reach a stoplight. Then make a left-hand turn at the stoplight. Now you will be on Hollow street. Continue driving on Hollow street for one mile. You should drive past four blocks until you reach the post office. Once you are at the post office, turn right onto Jackson road. Then stay on Jackson for about 10 miles. Eventually you will pass the Happy Meadow farm on your right. Just after Happy Meadow, you should turn left onto Brickland drive. My house is the first house on your left.

How to change your motor oil
1. Place the oil pan underneath the oil plug of your car.
2. Unscrew the oil plug.
3. Drain oil.
4. Replace the oil plug.
5. Remove the oil cap from the engine.
6. Pour in 4 quarts of oil.
7. Replace the oil cap.

Each of these examples is an algorithm, a set of instructions for solving a problem. Once we have created an algorithm, we no longer need to think about the principles on which the algorithm is based. For example, once you have the directions to John's house, you do not need to look at a map to decide where to make the next turn. The intelligence needed to find the correct route is contained in the algorithm. All you have to do is follow the directions. This means that algorithms are a way of capturing intelligence and sharing it with others. Once you have encoded the necessary intelligence to solve a problem in an algorithm, many people can use your algorithm without needing to become experts in a particular field. Now try creating an algorithm of your own to solve the problem of putting letters and numbers in order. Follow the instructions below. Algorithms are especially important to computers because computers are really general-purpose machines for solving problems. But in order for a computer to be useful, we must give it a problem to solve and a technique for solving the problem. Through the use of algorithms, we can make computers "intelligent" by programming them with various algorithms to solve problems. Because of their speed and accuracy, computers are well-suited for solving tedious problems such as searching for a name in a large telephone directory or adding a long column of numbers. However, the usefulness of computers as problem solving machines is limited because the solutions to some problems cannot be stated in an algorithm. Much of the study of computer science is dedicated to discovering efficient algorithms and representing them so that they can be understood by computers. During our study of algorithms, we will discuss what defines an algorithm, how to represent algorithms, and what makes algorithms efficient. Along the way we will illustrate these concepts by introducing several algorithms for sorting.
By the end of our study, you should be able to do the following:
 Write some simple algorithms,
 Sort numbers using three basic sorting algorithms, and
 Compare the sorting algorithms.
In the introduction, we gave an informal definition of an algorithm as "a set of instructions for solving a problem" and we illustrated this definition with a recipe, directions to a friend's house, and instructions for changing the oil in a car engine. You also created your own algorithm for putting letters and numbers in order. While these simple algorithms are fine for us, they are much too ambiguous for a computer. In order for an algorithm to be applicable to a computer, it must have certain characteristics. We will specify these characteristics in our formal definition of an algorithm.

An algorithm is a well-ordered collection of unambiguous and effectively computable operations that when executed produces a result and halts in a finite amount of time [Schneider and Gersting 1995].

With this definition, we can identify five important characteristics of algorithms.
1. Algorithms are well-ordered.
2. Algorithms have unambiguous operations.
3. Algorithms have effectively computable operations.
4. Algorithms produce a result.
5. Algorithms halt in a finite amount of time.
These characteristics need a little more explanation, so we will look at each one in detail.

Algorithms are well-ordered
Since an algorithm is a collection of operations or instructions, we must know the correct order in which to execute the instructions. If the order is unclear, we may perform the wrong instruction or we may be uncertain which instruction should be performed next. This characteristic is especially important for computers. A computer can only execute an algorithm if it knows the exact order of steps to perform.

Algorithms have unambiguous operations
Each operation in an algorithm must be sufficiently clear so that it does not need to be simplified. Given a list of numbers, you can easily order them from largest to smallest with the simple instruction "Sort these numbers." A computer, however, needs more detail to sort numbers. It must be told how to find the smallest number, how to compare numbers together, etc.
The operation "Sort these numbers" is ambiguous to a computer because the computer has no basic operations for sorting. Basic operations used for writing algorithms are known as primitive operations or primitives. When an algorithm is written in computer primitives, then the algorithm is unambiguous and the computer can execute it.

Algorithms have effectively computable operations
Each operation in an algorithm must be doable, that is, the operation must be something that is possible to do. Suppose you were given an algorithm for planting a garden where the first step instructed you to remove all large stones from the soil. This instruction may not be doable if there is a four ton rock buried just below ground level. For computers, many mathematical operations such as division by zero or finding the square root of a negative number are also impossible. These operations are not effectively computable so they cannot be used in writing algorithms.

Algorithms produce a result
In our simple definition of an algorithm, we stated that an algorithm is a set of instructions for solving a problem. Unless an algorithm produces some result, we can never be certain whether our solution is correct. Have you ever given a command to a computer and discovered that nothing changed? What was your response? You probably thought that the computer was malfunctioning because your command did not produce any type of result. Without some visible change, you have no way of determining the effect of your command. The same is true with algorithms. Only algorithms which produce results can be verified as either right or wrong.

Algorithms halt in a finite amount of time
Algorithms should be composed of a finite number of operations and they should complete their execution in a finite amount of time. Suppose we wanted to write an algorithm to print all the integers greater than 1. Our steps might look something like this:
1. Print the number 2.
2. Print the number 3.
3. Print the number 4.
...

While our algorithm seems to be pretty clear, we have two problems. First, the algorithm must have an infinite number of steps because there are an infinite number of integers greater than one. Second, the algorithm will run forever trying to count to infinity. These problems violate our definition that an algorithm must halt in a finite amount of time. Every algorithm must reach some operation that tells it to stop.

When writing algorithms, we have several choices of how we will specify the operations in our algorithm. One option is to write the algorithm using plain English. An example of this approach is the directions to John's house given in the introduction lesson. Although plain English may seem like a good way to write an algorithm, it has some problems that make it a poor choice. First, plain English is too wordy. When we write in plain English, we must include many words that contribute to correct grammar or style but do nothing to help communicate the algorithm. Second, plain English is too ambiguous. Often an English sentence can be interpreted in many different ways. Remember that our definition of an algorithm requires that each operation be unambiguous. Another option for writing algorithms is using programming languages. These languages are collections of primitives (basic operations) that a computer understands. While programming languages avoid the problems of being wordy and ambiguous, they have some other disadvantages that make them undesirable for writing algorithms. Consider the following lines of code from the programming language C++.
a = 1;
b = 0;
while (a <= 10)
{
    b = b + a;
    a++;
}
cout << b;

This algorithm sums the numbers from 1 to 10 and displays the answer on the computer screen. However, without some special knowledge of the C++ programming language, it would be difficult for you to know what this algorithm does. Using a programming language to specify algorithms means learning special syntax and symbols that are not part of standard English. For example, in the code above, it is not very obvious what the symbol "++" or the symbol "<<" does. When we write algorithms, we would rather not worry about the details of a particular programming language. What we would really like to do is combine the familiarity of plain English with the structure and order of programming languages. A good compromise is structured English. This approach uses English to write operations, but groups operations by indenting and numbering lines. An example of this approach is the directions for changing motor oil in the introduction lesson. Each operation in the algorithm is written on a separate line so they are easily distinguished from each other. We can easily see the advantage of this organization by comparing the structured English algorithm with the plain English algorithm. How to change your motor oil

Plain English
First, place the oil pan underneath the oil plug of your car. Next, unscrew the oil plug and drain the oil. Now, replace the oil plug. Once the old oil is drained, remove the oil cap from the engine and pour in 4 quarts of oil. Finally, replace the oil cap on the engine.

Structured English
1. Place the oil pan underneath the oil plug of your car.
2. Unscrew the oil plug.
3. Drain oil.
4. Replace the oil plug.
5. Remove the oil cap from the engine.
6. Pour in 4 quarts of oil.
7. Replace the oil cap.

Now that we have a definite way of writing our algorithms, let's look at some algorithms for solving the problem of sorting. Sorting is a very common problem handled by computers. For example, most graphical email programs allow users to sort their email messages in several ways: date received, subject line, sender, priority, etc. Each time you reorder your email messages, the computer uses a sorting algorithm to sort them. Since computers can compare a large number of items quickly, they are quite good at sorting. During the next few lessons, you will learn three different algorithms for sorting: the Simple Sort, the Insertion Sort, and the Selection Sort. These algorithms will give us a basis for comparing algorithms and determining which ones are the best. We will illustrate each algorithm by sorting a hand of playing cards like the ones below. Traditionally, a group of playing cards is called a "hand" of cards. In the next few lessons, we will use the term "hand" to refer to a group of seven playing cards.

Then we will show how the algorithm also applies to the problem of sorting a list of numbers in a computer. Our list of numbers will be stored in an array of memory cells like the diagram below.

It is important to note that we will be using the same algorithm to sort both playing cards and numbers. One important quality of a good algorithm is that it solves a class of problems and not just one particular problem. A good sorting algorithm should provide a solution to the problem of sorting for many types of items. Imagine if you were trying to design an email program that allowed users to sort their messages by date received, subject line, and sender. Would you want to write a new algorithm for each different sort, or would you prefer to write a single algorithm that handled all three? Of course you would prefer the latter approach, so when you designed your algorithm, you would design it to solve a class of problems (e.g. sorting) rather than just one particular problem.
