Education Fundamentals Of Data Structures By Horowitz And Sahni Pdf


Tuesday, July 2, 2019

Fundamentals of Data Structures - Ellis Horowitz & Sartaj Sahni - ebook download as PDF file (.pdf) or text file (.txt), or read book online.




To clarify some of these ideas, below is a table which summarizes the frequency counts for the first three cases. A complete set would include four cases; none of them exercises the program very much.

Both commands in step 9 are executed once. We can summarize all of this with a table. At this point the for loop will actually be entered.

These may have different execution counts.

Step  Frequency        Step  Frequency
  1       1              9       2
  2       1             10       n
  3       1             11      n-1
  4       0             12      n-1
  5       1             13      n-1
  6       0             14      n-1
  7       1             15       1

Execution Count for Computing Fn. Each statement is counted once. O(n^2) is called quadratic. If we have two algorithms which perform the same task, one of order n and the other of order n^2, the first is preferable; the reason for this is that as n increases the time for the second algorithm will get far worse than the time for the first.
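The counting idea can be sketched in Python. This is only an illustration of tallying statement frequencies while computing Fn iteratively; the function name and step labels are ours, not the book's numbered statements:

```python
def fib_with_counts(n):
    # Compute the n-th Fibonacci number iteratively while tallying how
    # often key steps execute. The labels are illustrative only.
    counts = {"init": 1, "loop_body": 0}
    if n <= 1:
        return n, counts
    a, b = 0, 1
    for _ in range(2, n + 1):   # the loop body runs n - 1 times
        counts["loop_body"] += 1
        a, b = b, a + b
    return b, counts
```

Running `fib_with_counts(10)` returns F10 = 55 together with a body count of 9, i.e. n - 1, matching the frequencies in the table.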

When we say that the computing time of an algorithm is O(g(n)) we mean that its execution takes no more than a constant times g(n). If an algorithm takes time O(log n) it is faster, for sufficiently large n, than one taking O(n). Here n might be the number of inputs or the number of outputs or their sum or the magnitude of one of them.

O(n) is called linear, O(n^2) quadratic, and O(n^3) cubic; O(1) means a computing time which is a constant, and O(n log n) is better than O(n^2) but not as good as O(n). This notation means that the order of magnitude is proportional to the given function of n. The for statement is really a combination of several statements, so its frequency count reflects that. These seven computing times cover most of the algorithms we will see. Given an algorithm, big-oh notation tells us how its time behaves as n grows; for small data sets the constants of proportionality may dominate. We will see cases of this in subsequent chapters.
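The growth of these computing times can be tabulated directly. A small sketch (the function name is ours) that evaluates each of the common orders at a few sample values of n:

```python
import math

def growth_row(n):
    # One row of the classic growth-rate table for the common computing times.
    return {
        "log n": math.log2(n),
        "n": n,
        "n log n": n * math.log2(n),
        "n^2": n ** 2,
        "n^3": n ** 3,
        "2^n": 2 ** n,
    }

# At n = 32 the exponential column already dwarfs every polynomial column.
rows = {n: growth_row(n) for n in (2, 8, 32)}
```

Comparing the columns for n = 32 shows why an exponential algorithm works only for very small inputs: 2^32 is more than a hundred thousand times n^3.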

Then a performance profile can be gathered using real-time calculation. Another valid performance measure of an algorithm is the space it requires, and often one can trade space for time. For large data sets the order of magnitude dominates; for exponential algorithms only very small inputs are feasible. This shows why we choose the algorithm with the smaller order of magnitude. Notice how the times O(n) and O(n log n) grow much more slowly than the others.

An algorithm which is exponential will work only for very small inputs. In practice these constants depend on many factors. A magic square is an n x n matrix of the integers 1 to n^2 such that the sum of every row, column, and diagonal is the same. When n is odd H. S. M. Coxeter has given a simple rule for generating a magic square.

It emphasizes that the variables are thought of as pairs and are changed as a unit. For a discussion of tools and procedures for developing very large software systems see Practical Strategies for Developing Large Software Systems, ACM Computing Surveys. Since there are n^2 positions in which the algorithm must place a number, O(n^2) is the best bound an algorithm for this problem can achieve.

For a discussion of the more abstract formulation of data structures see "Toward an understanding of data structures" by J. Earley. Thus each statement within the while loop will be executed no more than n^2 times. The magic square is represented using a two-dimensional array having n rows and n columns. For this application it is convenient to number the rows and columns from zero to n - 1.

For a discussion of good programming techniques see Structured Programming, Academic Press, and The Elements of Programming Style by B. Kernighan and P. Plauger. For a further discussion of program proving see Fundamental Algorithms. The while loop is governed by the variable key which is an integer variable initialized to 2 and increased by one each time through the loop. Both do not satisfy one of the five criteria of an algorithm. Describe the flowchart in figure 1. Can you think of a clever meaning for S.
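Coxeter's rule for odd n can be sketched as follows: start with 1 in the middle of the top row, repeatedly move up and to the left (wrapping around the edges), and drop down one square whenever the target is occupied. The function name is ours:

```python
def magic_square(n):
    # Coxeter's rule, odd n only: place 1 in the middle of the top row,
    # then move up-and-left (wrapping); if that square is occupied,
    # move down one row instead.
    assert n % 2 == 1, "the rule works only for odd n"
    square = [[0] * n for _ in range(n)]
    i, j = 0, n // 2
    square[i][j] = 1
    for key in range(2, n * n + 1):
        k, l = (i - 1) % n, (j - 1) % n
        if square[k][l]:          # occupied: drop down one row
            i = (i + 1) % n
        else:
            i, j = k, l
        square[i][j] = key
    return square
```

The while loop of the book's version places one number per iteration, so exactly n^2 - 1 placements follow the initial one, and every row, column, and diagonal sums to n(n^2 + 1)/2.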

Concentrate on the letter K first. American Mathematical Society.

Discuss how you would actually represent the list of name and telephone number pairs in a real machine. Consider the two statements: Which criteria do they violate? Look up the word algorithm or its older form algorism in the dictionary.

Can you do this without using the go to? Now make it into an algorithm. How would you handle people with the same last name. Determine how many times each statement is executed. Determine when the second becomes larger than the first.. If x occurs. For instance. Given n boolean variables x1.

Try writing this without using the go to statement. Implement these procedures using the array facility The rule is: String x is unchanged. What is the computing time of your method? Strings x and y remain unchanged. NOT X:: Prove by induction: Trace the action of the procedure below on the elements 2. Using the notation introduced at the end of section 1. List as many rules of style in programming that you can think of that you would be willing to follow yourself.

Represent your answer in the array ANS 1: Take any version of binary search. If S is a set of n elements the powerset of S is the set of all possible subsets of S. This function is studied because it grows very fast for small values of m and n.

Write a recursive procedure for computing this function. Write a recursive procedure to compute powerset S. Tower of Hanoi There are three towers and sixty four disks of different diameters placed on the first tower. Monks were reputedly supposed to move the disks from tower 1 to tower 3 obeying the rules: Write a recursive procedure for computing the binomial coefficient as defined in section 1.
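A recursive powerset procedure of the kind the exercise asks for can be sketched in Python. The idea: every subset of S either contains the first element or it does not, so recurse on the rest. The function name is ours:

```python
def powerset(s):
    # Powerset of a list s: subsets that omit the first element,
    # plus those same subsets with the first element added.
    if not s:
        return [[]]               # the only subset of the empty set
    first, rest = s[0], s[1:]
    without = powerset(rest)
    return without + [[first] + sub for sub in without]
```

A set of n elements yields 2^n subsets, which is why this function's output (and running time) grows exponentially in n.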

Analyze the time and space requirements of your algorithm.. Given n. Ackermann's function A m. The pigeon hole principle states that if a function f has n distinct inputs but less than n distinct outputs then there exists two inputs a. Analyze the computing time of procedure SORT as given in section 1.

Then write a nonrecursive algorithm for computing Ackermann's function. The disks are in order of decreasing diameter as one scans up the tower.
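The recursive procedure for printing the sequence of moves follows directly from the rules: to move n disks from the source tower to the destination, move n - 1 disks to the spare tower, move the largest disk, then move the n - 1 disks on top of it. A sketch (names ours, towers numbered 1 to 3):

```python
def hanoi(n, src=1, aux=2, dst=3, moves=None):
    # Collect the sequence of (from_tower, to_tower) moves that
    # transfers n disks from src to dst, never placing a larger
    # disk on a smaller one.
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, src, dst, aux, moves)   # clear the top n-1 disks
        moves.append((src, dst))             # move the largest disk
        hanoi(n - 1, aux, src, dst, moves)   # restack the n-1 disks
    return moves
```

The recurrence M(n) = 2M(n-1) + 1 gives 2^n - 1 moves, which for the monks' sixty-four disks is about 1.8 x 10^19.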

Give an algorithm which finds the values a. Write a recursive procedure which prints the sequence of moves which accomplish this task. Therefore it deserves a significant amount of attention. It is true that arrays are almost always implemented by using consecutive memory. The array is often the only means for structuring data which is provided in a programming language. This is unfortunate because it clearly reveals a common point of confusion. For each index which is defined.

If one asks a group of programmers to define an array. In mathematical terms we call this a correspondence or a mapping. For arrays this means we are concerned with only two operations which retrieve and store values. Using our notation this object can be defined as: STORE is used to enter new index-value pairs.

In section ARRAYS second axiom is read as "to retrieve the j-th item where x has already been stored at index i in A is equivalent to checking if i and j are equal and if so. There are a variety of operations that are performed on these lists.
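The STORE/RETRIEVE view of the array as a correspondence between indices and values can be modeled with a minimal sketch. The class and method names are ours, chosen to mirror the two operations:

```python
class AxiomaticArray:
    # A model of the abstract array: a mapping from indices to values,
    # supporting only STORE and RETRIEVE, independent of any
    # consecutive-memory representation.
    def __init__(self):
        self._pairs = {}

    def store(self, i, value):
        # STORE: enter a new index-value pair.
        self._pairs[i] = value
        return self

    def retrieve(self, j):
        # RETRIEVE: defined only for indices that have been stored;
        # retrieving index j after storing at i yields the stored
        # value exactly when i equals j.
        if j not in self._pairs:
            raise KeyError("undefined index")
        return self._pairs[j]
```

Note that nothing here requires consecutive memory or even integer indices; that is precisely the point of the axioms.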

Notice how the axioms are independent of any representation scheme. If we restrict the index values to be integers. These operations include: Ace or the floors of a building basement. If we consider an ordered list more abstractly. If we interpret the indices to be n-dimensional. It is only operations v and vi which require real effort. It is not always necessary to be able to perform all of these operations. In the study of data structures we are interested in ways of representing ordered lists so that these operations can be carried out efficiently.

By "symbolic. Let us jump right into a problem requiring ordered lists which we will solve by using one dimensional arrays. This problem has become the classical example for motivating the use of list processing techniques which we will see in later chapters.

This we will refer to as a sequential mapping.. We can access the list element values in either direction by changing the subscript values in a controlled way The problem calls for building a set of subroutines which allow for the manipulation of symbolic polynomials.

See exercise 24 for a set of axioms which uses these operations to abstractly define an ordered list.. Insertion and deletion using sequential allocation forces us to move some of the remaining elements so the sequential mapping is preserved in its proper form.. Perhaps the most common way to represent an ordered list is by an array where we associate the list element ai with the array index i.

It is precisely this overhead which leads us to consider nonsequential mappings of ordered lists into arrays in Chapter 4. This gives us the ability to retrieve or modify the values of random elements in the list in a constant amount of time. A complete specification of the data structure polynomial is now given. We will also need input and output routines and some suitable format for preparing polynomials as input.

When defining a data object one must decide what functions will be available. However this is not an appropriate definition for our purposes.. For a mathematician a polynomial is a sum of terms where each term has the form axe. The first step is to consider how to define polynomials as a computer structure.

MULT poly. Then we would write REM P. Notice the absence of any assumptions about the order of exponents. Suppose we wish to remove from P those terms having exponent one. These assumptions are decisions of representation.

COEF B. These axioms are valuable in that they describe the meaning of each operation concisely and without implying an implementation. Now we can make some representation decisions. Exponents should be unique and in decreasing order is a very reasonable first decision. Note how trivial the addition and multiplication operations have become. EXP B. Now assuming a new function EXP poly exp which returns the leading exponent of poly.

EXP B file: B REM B. We have avoided the need to explicitly store the exponent of each term and instead we can deduce its value by knowing our position in the list and the degree. But are there any disadvantages to this representation? Hopefully you have already guessed the worst one. The case statement determines how the exponents are related and performs the proper action.. With these insights.

EXP B: EXP A. EXP A end end insert any remaining terms in A or B into C The basic loop of this algorithm consists of merging the terms of the two polynomials. Since the tests within the case statement require two terms. COEF A.

EXP A.. This representation leads to very simple algorithms for addition and multiplication. But scheme 1 could be much more wasteful. The first entry is the number of nonzero terms. It will require a vector of length In the worst case. As for storage. Then for each term there are two entries representing an exponent-coefficient pair. Is this method any better than the first scheme?

In general. Suppose we take the polynomial A x above and keep only its nonzero coefficients. Basic algorithms will need to be more complex because we must check each exponent before we handle its coefficient. If all of A's coefficients are nonzero. The assignments of lines 1 and 2 are made only once and hence contribute O 1 to the overall computing time. This is a practice you should adopt in your own coding. The procedure has parameters which are polynomial or array names.

The code is indented to reinforce readability and to reveal more clearly the scope of reserved words. Statement two is a shorthand way of writing r Notice how closely the actual program matches with the original design.

Comments appear to the right delimited by double slashes. Three pointers p. Blocks of statements are grouped together using square brackets. The basic iteration step is governed by a while loop. It is natural to carry out this analysis in terms of m and n. To make this problem more concrete. Returning to the abstract object--the ordered list--for a moment.
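The merging loop of the addition algorithm can be sketched over polynomials stored as lists of (exponent, coefficient) pairs with exponents unique and decreasing, as the representation decision above requires. The function name and tuple layout are ours:

```python
def padd(a, b):
    # Merge two ordered (exponent, coefficient) lists, mirroring the
    # case analysis on the leading exponents: copy the larger-exponent
    # term, or add coefficients when the exponents are equal.
    c, p, q = [], 0, 0
    while p < len(a) and q < len(b):
        ea, ca = a[p]
        eb, cb = b[q]
        if ea > eb:
            c.append((ea, ca)); p += 1
        elif ea < eb:
            c.append((eb, cb)); q += 1
        else:
            if ca + cb != 0:          # drop terms that cancel
                c.append((ea, ca + cb))
            p += 1; q += 1
    c.extend(a[p:])                   # insert any remaining terms of A
    c.extend(b[q:])                   # ... or of B
    return c
```

Each iteration advances p or q (or both), so the loop runs at most m + n times for polynomials of m and n terms, giving the O(m + n) time claimed for the merge.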

These are defined by the recurrence relation file: A two dimensional array could be a poor way to represent these lists because we would have to declare it as A m.


This hypothetical user may have many polynomials he wants to compute and he may not know their sizes.. This worst case is achieved. Suppose in addition to PADD. Consider the main routine our mythical user might write if he wanted to compute the Fibonacci polynomials.

In this main program he needs to declare arrays for all of his polynomials which is reasonable and to declare the maximum size that every polynomial might achieve which is harder and less reasonable. Taking the sum of all of these steps. He would include these subroutines along with a main procedure he writes himself.

This example shows the array as a useful representational form for ordered lists Each iteration of this while loop requires O 1 time.

In particular we now have the m lists a If he declares the arrays too large. Instead we might store them in a one dimensional array and include a front i and rear i pointer for the beginning and end of each list. Since the iteration terminates when either p or q exceeds 2m or 2n respectively. We are making these four procedures available to any user who wants to manipulate polynomials. At each iteration. For example F 2.

Suppose the programmer decides to use a two dimensional array to store the Fibonacci polynomials. Then the following program is produced. If we made a call to our addition routine. Let's pursue the idea of storing all polynomials in a single array called POLY. Exponents and coefficients are really different sorts of numbers. If the result has k terms. Also we need a pointer to tell us where the next free location is.

The array is usually a homogeneous collection of data which will not allow us to intermix data of different types. Then by storing all polynomials in a single array.

A much greater saving could be achieved if Fi x were printed as soon as it was computed in the first loop. Different types of data cannot be accommodated within the usual array concept.. This example reveals other limitations of the array as a means for data representation.

When m is equal to n. We could write a subroutine which would compact the remaining polynomials. It is very natural to store a matrix in a two dimensional array. A general matrix consists of m rows and n columns of numbers as in figure 2. Such a matrix is called sparse.. We might then store a matrix as a list of 3-tuples of the form i. There may be several such polynomials whose space can be reused. On most computers today it would be impossible to store a full X matrix in the memory at once.

There is no precise definition of when a matrix is sparse and when it is not. Then we can work with any element by writing A i. This comes about because in practice many of the matrices we want to deal with are large. As computer scientists. A sparse matrix requires us to consider an alternate form of representation. Even worse. As we create polynomials. Such a matrix has mn elements. Now if we look at the second matrix of figure 2.

Now we have localized all storage to one array. Example of 2 matrices The first matrix has five rows and three columns. This demands a sophisticated compacting routine coupled with a disciplined use of names for polynomials. When this happens must we quit? We must unless there are some polynomials which are no longer needed. Figure 2. But this may require much data movement.

In Chapter 4 we will see an elegant solution to these problems. The alternative representation will explicitly store only the nonzero elements. Each element of a matrix is uniquely characterized by its row and column position.

Sparse matrix stored as triples The elements A 0. This is where we move the elements so that the element in the i. The transpose of the example matrix looks like 1. Another way of saying this is that we are interchanging rows and columns. We can go one step farther and require that all the 3-tuples of any row be stored so that the columns are increasing..

The elements on the diagonal will remain unchanged. Now what are some of the operations we might want to perform on these matrices? One operation is to compute the transpose matrix. If we just place them consecutively.

In our example of figure 2. We can avoid this data movement by finding the elements in the order we want them. Since the rows are originally in order. Let us write out the algorithm in full. Since the rows of B are the columns of A. The variable q always gives us the position in B where the next term in the transpose is to be inserted. The assignment in lines takes place exactly t times as there are only t nonzero terms in the sparse matrix being generated.

How about the computing time of this algorithm? For each iteration of the loop of lines On the first iteration of the for loop of lines all terms from column 1 of A are collected. Lines take a constant amount of time. This is precisely what is being done in lines The total time for the algorithm is therefore O(nt).

Since the number of iterations of the loop of lines is n. This computing time is a little disturbing since we know that in case the matrices had been represented as two dimensional arrays, transposing would take O(nm) time. In addition to the space needed for A and B. The terms in B are generated by rows. The algorithm for this takes the form: The statement a. We now have a matrix transpose algorithm which we believe is correct and which has a computing time of O(nt).
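The column-scanning transpose can be sketched over the triples representation, where the first triple records (rows, cols, number of nonzero terms) and the rest are (row, col, value) triples in row-major order. The function name and layout are ours:

```python
def transpose(a):
    # a[0] = (rows, cols, t); a[1:] = t nonzero (i, j, value) triples
    # stored by rows. For each column of A in turn, collect its terms
    # into B so B comes out sorted by rows: O(cols * t) time.
    rows, cols, t = a[0]
    b = [(cols, rows, t)]
    for col in range(cols):
        for i, j, v in a[1:]:
            if j == col:
                b.append((col, i, v))   # element (i, j) becomes (j, i)
    return b
```

Because every one of the t terms is examined once per column, the time is O(cols * t); when t is near rows * cols this is worse than transposing a plain two-dimensional array.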

It is not too difficult to see that the algorithm is correct. This is worse than the O(nm) time using arrays. This gives us the number of elements in each row of B. We can now move the elements of A one by one into their correct position in B. This algorithm. From this information. Each iteration of the loops takes only a constant amount of time.

T j is maintained so that it is always the position in B where the next element in row j is to be inserted.. The computation of S and T is carried out in lines Hence in this representation. When t is sufficiently small compared to its maximum of nm. In lines the elements of A are examined one by one starting from the first and successively moving to the t-th element.
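The faster transpose described here, using S to count the terms in each column of A and T to track the next insertion position in each row of B, can be sketched as follows (names ours):

```python
def fast_transpose(a):
    # a[0] = (rows, cols, t); a[1:] = t (i, j, value) triples by rows.
    # s[j]   = number of terms in column j of A (= row j of B);
    # pos[j] = position in B where the next element of row j goes.
    rows, cols, t = a[0]
    s = [0] * cols
    for i, j, v in a[1:]:
        s[j] += 1
    pos = [0] * cols
    for j in range(1, cols):
        pos[j] = pos[j - 1] + s[j - 1]   # starting position of row j of B
    b = [None] * t
    for i, j, v in a[1:]:
        b[pos[j]] = (j, i, v)            # place the term, then advance
        pos[j] += 1
    return [(cols, rows, t)] + b
```

Each of the three loops is a single pass, so the time is O(cols + t) rather than O(cols * t), at the cost of the two auxiliary arrays.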

This is the same as when two dimensional arrays were in use. MACH m 6. Associated with each machine that the company produces. If we try the algorithm on the sparse matrix of figure 2. Suppose now you are working for a machine manufacturer who is using a computer to do inventory control T i points to the position in the transpose where the next element of row i is to be stored.

Each part is itself composed of smaller parts called microparts. The product of two sparse matrices may no longer be sparse. Regarding these tables as matrices this application leads to the general definition of matrix product: We want to determine the number of microparts that are necessary to make up each machine.

Once the elements in row i of A and column j of B have been located.

Before we write a matrix multiplication procedure. To compute the elements of C row-wise so we can store them in their proper place without moving previously computed elements. To avoid this. Consider an algorithm which computes the product of two sparse matrices represented as an ordered list instead of an array.

This sum is more conveniently written as If we compute these sums for each machine and each micropart then we will have a total of mp values which we might store in a third table MACHSUM m. An alternative approach is explored in the exercises. Its i. This enables us to handle end conditions i. C and some simple variables.

In addition to the space needed for A. We leave the correctness proof of this algorithm as an exercise. The total maximum increments in i is therefore pdr.

It makes use of variables i. In each iteration of the while loop of lines either the value of i or j or of both increases by 1 or i and col are reset. In addition to all this.


The while loop of lines is executed at most m times once for each row of A. When this happens Let us examine its complexity. The variable r is the row of A that is currently being multiplied with the columns of B.

The maximum total increment in j over the whole loop is t2. The maximum number of iterations of the while loop of lines file: At the same time col is advanced to the next column. If dr is the number of terms in row r of A then the value of i can increase at most dr times before i moves to the next row of A. Since the number of terms in a sparse matrix is variable. It introduces some new concepts in algorithm analysis and you should make sure you understand the analysis.

As in the case of polynomials. The classical multiplication algorithm is: Once again. Lines take only O dr time. There are. MMULT will be slower by a constant factor.


A and B are sparse. MMULT will outperform the above multiplication algorithm for arrays.. This would enable us to make efficient utilization of space. Since t1 nm and t2 np. These difficulties also arise with the polynomial representation of the previous section and will become apparent when we study a similar representation for multiple stacks and queues section 3.

If an array is declared A l1: If we have the declaration A 4: Recall that memory may be regarded as one dimensional with words numbered from 1 to m. This is necessary since programs using arrays may. We see that the subscript at the right moves the fastest. While many representations might seem plausible. In addition to being able to retrieve array elements easily. Assuming that each array element requires only one word of memory.

Then using row major order these elements will be stored as A Then A 4. To simplify the discussion we shall assume that the lower bounds on each dimension li are 1. Another synonym for row major order is lexicographic order. Knowing the address of A i. The general case when li can be any integer is discussed in the exercises.

Sequential representation of A u1. Suppose A 4. In general.. To begin with. A u1 address: This formula makes use of only the starting address of the array plus the declared dimensions.

Sequential representation of A 1: In a row major representation. From the compiler's point of view If is the address of A 1. Before obtaining a formula for the case of an n-dimensional array. These two addresses are easy to guess..

Repeating in this way the address for A i This array is interpreted as u1 2 dimensional arrays of dimension u2 x u3. Generalizing on the preceding discussion.

From this and the formula for addressing a 2 dimensional array. An alternative scheme for array representation.. The address for A i Each 2-dimensional array is represented as in Figure 2. By using a sequential mapping which associates ai of a1. To review. The address of A i1.
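The row-major addressing formula can be sketched directly: with all lower bounds 1 and one word per element, the address of A(i1, ..., in) is the base address plus, for each subscript, (ij - 1) times the product of the dimensions to its right. The function name is ours:

```python
def row_major_address(base, dims, indices):
    # Address of A(i1, ..., in) stored in row major order, assuming
    # lower bound 1 on every dimension and one word per element.
    addr = base
    for j, i in enumerate(indices):
        stride = 1
        for d in dims[j + 1:]:   # product of the dimensions after j
            stride *= d
        addr += (i - 1) * stride
    return addr
```

For a 2-dimensional array A(1:3, 1:4) starting at address 100, element A(2, 3) lives at 100 + 1*4 + 2*1 = 106, and the subscript at the right indeed "moves the fastest".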

To locate A i. However several problems have been raised. If is the address for A 1. In all cases we have been able to move the values around. Assume that n lists.. The i-th list should be maintained as sequentially stored. For these polynomials determine the exact number of times each statement will be executed.. The functions to be performed on these lists are insertion and deletion. What is the computing time of your procedure? How much space is actually needed to hold the Fibonacci polynomials F0.

Write a procedure which returns Assume you can compare atoms ai and bj. What can you say about the existence of an even faster algorithm?

Try to minimize the number of operations. The band includes a. Obtain an addressing formula for elements aij in the lower triangle if this lower triangle is stored by rows in an array B 1: Another kind of sparse matrix that arises often in numerical analysis is the tridiagonal matrix.

What is the relationship between i and j for elements in the zero part of A? Let A and B be two lower triangular matrices. Devise a scheme to represent both the triangles in an array C 1: What is the computing time of your algorithm? Tridiagonal matrix A If the elements in the band formed by these three diagonals are represented rowwise in an array. When all the elements either above or below the main diagonal of a square matrix are zero.

For large n it would be worthwhile to save the space taken by the zero entries in the upper triangle. In this square matrix. Define a square band matrix An. How much time does it take to locate an arbitrary element A i.

B which determines the value of element aij in the matrix An. A variation of the scheme discussed in section 2. Thus A4. A generalized band matrix An. The band of An. How much time does your algorithm take? Assume a row major representation of the array with one word per element and the address of A l1. Do exercise 20 assuming a column major representation. Obtain an addressing formula for the element A i1.

Consider space and time requirements for such operations as random access. B where A and B contain real values. How many values can be held by an array with dimensions A 0: The figure below illustrates the representation for the sparse matrix of figure 2. In this representation. In addition. Write a program which computes the product of two complex valued matrices A.. A complex-valued matrix X is represented by a pair of matrices A.

An m X n matrix is said to have a saddle point if some entry A i.. Use a minimal amount of storage.

Given an array A 1: One possible set of axioms for an ordered list comes from the six operations of section 2. The bug wanders possibly in search of an aspirin randomly from tile to tile throughout the room.

Hard as this problem may be to solve by pure probability theory techniques. One such problem may be stated as follows: A drunken cockroach is placed on a given square in the middle of a tile floor in a rectangular room of size n x m tiles. All the cells of this array are initialized to zero. The position of the bug on the floor is represented by the coordinates IBUG.

There are a number of problems. JBUG and is initialized by a data card. Assuming that he may move from his present tile to any of the eight tiles surrounding him unless he is against a wall with equal probability. The problem may be simulated using the following method: The technique for doing so is called "simulation" and is of wide-scale use in industry to predict traffic-flow. All but the most simple of these are extremely difficult to solve and for the most part they remain largely unsolved.

Of course the bug cannot move outside the room. Each time a square is entered. Many of these are based on the strange "L-shaped" move of the knight. Your program MUST: Chess provides the setting for many fascinating diversions which are quite independent of the game itself. When every square has been entered at least once.. A classical example is the problem of the knight's tour. This will show the "density" of the walk. This assures that your program does not get "hung" in an "infinite" loop.

Have an aspirin This exercise was contributed by Olson. Write a program to perform the specified simulation experiment. A maximum of Warnsdorff in It is convenient to represent a solution by placing the numbers 1.

The ensuing discussion will be much easier to follow. Note that it is not required that the knight be able to reach the initial position by one more move. The goal of this exercise is to write a computer program to implement Warnsdorff's rule.

His rule is that the knight must always be moved to one of the squares from which there are the fewest exits to squares not already traversed. Perhaps the most natural way to represent the chessboard is by an 8 x 8 array B ARD as shown in the figure below. The most important decisions to be made in solving a problem of this type are those concerning how the data is to be represented in the computer..

One of the more ingenious methods for solving the problem of the knight's tour is that given by J. The eight possible moves of a knight on square 5. Briefly stated.. That is.

Below is a description of an algorithm for solving the knight's tour problem using Warnsdorff's rule. The data representation discussed in the previous section is assumed. J is located near one of the edges of the board.

Let NP S be the number of possibilities. J may move to one of the squares I. This exercise was contributed by Legenhausen and Rebman.

J denotes the new position of the knight. Go to Chapter 3 Back to Table of Contents file: The problem is to write a program which corresponds to this algorithm. J records the move in proper sequence. If this happens. In every case we will have 0 NP S 8.

Recall that a square is an exit if it lies on the chessboard and has not been previously occupied by the knight. The restrictions on a queue require that the first element which is inserted into the queue will be the first one to be removed.

They arise so often that we will discuss them separately before moving on to more complex objects. One natural example of stacks which arises in computer programming is the processing of subroutine calls and their returns. Both these data objects are special cases of the more general data object. The ai are referred to as atoms which are taken from some set.

E are added to the stack.. Equivalently we say that the last"element to be inserted into the stack will be the first to be removed. Figure 3. Suppose we have a main procedure and three subroutines as below: A queue is an ordered list in which all insertions take place at one end. A stack is an ordered list in which all insertions and deletions are made at one end. Thus A is the first letter to be removed. ADD i. Since returns are made in the reverse order of calls. A2 before A1. S which inserts the element i onto the stack S and returns the new stack.

For each subroutine there is usually a single location associated with the machine code which is used to retain the return address. This list operates as a stack since the returns will be made in the reverse order of the calls.

If we examine the memory while A3 is computing there will be an implicit stack which looks like q. Implementing recursion using a stack is discussed in Section 4. Whenever a return is made. This list of return addresses need not be maintained in consecutive locations.

Associated with the object stack there are several operations that are necessary: CREATE(S), ADD(i, S), DELETE(S), TOP(S), and ISEMTS(S), which tests whether the stack is empty.

Different situations call for different decisions, but we suggest you eliminate the idea of working on both at the same time. If you do decide to scrap your work and begin again, you can take comfort in the fact that it will probably be easier the second time. In fact you may save as much debugging time later on by doing a new version now.
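The stack operations can be sketched as a small class. This is a model of the last-in, first-out behavior described above, not the book's SPARKS procedures; the class and method names are ours:

```python
class Stack:
    # Last in, first out: the last element added is the first deleted,
    # as with the return addresses of nested subroutine calls.
    def __init__(self):
        self._items = []

    def add(self, item):
        # ADD(i, S): push item onto the top of the stack.
        self._items.append(item)
        return self

    def delete(self):
        # DELETE(S): remove and return the top element.
        if not self._items:
            raise IndexError("stack is empty")
        return self._items.pop()

    def top(self):
        # TOP(S): examine the top element without removing it.
        return self._items[-1]

    def is_empty(self):
        # ISEMTS(S): true when no elements remain.
        return not self._items
```

Pushing the return addresses of A1, A2, A3 and then popping them yields A3 first, matching the observation that returns are made in the reverse order of the calls.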

This is a phenomenon which has been observed in practice. The graph in figure 1. For each compiler there is the time they estimated it would take them and the time it actually took. For each subsequent compiler their estimates became closer to the truth, but in every case they underestimated.

Unwarranted optimism is a familiar disease in computing. But prior experience is definitely helpful, and the time to build the third compiler was less than one fifth that for the first one (figure 1). Verification consists of three distinct aspects: program proving, testing, and debugging.

Each of these is an art in itself. Before executing your program you should attempt to prove it is correct. Proofs about programs are really no different from any other kind of proof; only the subject matter is different.

If a correct proof can be obtained, then one is assured that for all possible combinations of inputs, the program and its specification agree. Testing is the art of creating sample data upon which to run your program. If the program fails to respond correctly then debugging is needed to determine what went wrong and how to correct it. One proof tells us more than any finite amount of testing, but proofs can be hard to obtain. Many times during the proving process errors are discovered in the code.
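To make the distinction concrete, a specification can be written down as an executable check; testing then verifies it on sample inputs, whereas only a proof would establish it for all inputs. The function and its specification here are our own illustrative example, not from the text:

```python
def integer_sqrt(n):
    """Return the largest r with r*r <= n, for n >= 0."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

def meets_spec(n, r):
    # The specification the program must agree with.
    return r * r <= n < (r + 1) * (r + 1)

# Testing: the spec holds on these samples, but no finite set of
# samples can assure us it holds for every possible input.
for n in [0, 1, 15, 16, 17, 10**6]:
    assert meets_spec(n, integer_sqrt(n))
print("all sample inputs agree with the specification")
```

A proof would instead argue from the loop condition that `r*r <= n` is an invariant and that the loop exits exactly when `(r+1)*(r+1) > n`.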

The proof can't be completed until these are changed. This is another use of program proving, namely as a methodology for discovering errors.

Finally, there may be tools available at your computing center to aid in the testing process. One such tool instruments your source code and then tells you, for every data set: (i) the number of times a statement was executed, (ii) the number of times a branch was taken, and (iii) the smallest and largest values of all variables.
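A toy analogue of such an instrumentation tool can be built with Python's tracing hook. `sys.settrace` is a real CPython facility; the profiled function `absval` is our own example, and a real tool would of course do much more:

```python
import sys
from collections import Counter

line_counts = Counter()

def tracer(frame, event, arg):
    # Count every executed line inside absval, like requirement (i).
    if event == "line" and frame.f_code.co_name == "absval":
        line_counts[frame.f_lineno] += 1
    return tracer

def absval(x):
    if x < 0:
        return -x
    return x

sys.settrace(tracer)
absval(-3)  # exercises the true branch
absval(5)   # exercises the false branch
sys.settrace(None)

# All three lines of absval's body were executed at least once.
print(len(line_counts))
```

Running both a negative and a non-negative input is exactly the minimal requirement discussed next: every statement executes and the condition takes both truth values.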

As a minimal requirement, the test data you construct should force every statement to execute and every condition to assume the values true and false at least once. One thing you have forgotten to do is to document.

But why bother to document until the program is entirely finished and correct? Because for each procedure you made some assumptions about its input and output. If you have written more than a few procedures, then you have already begun to forget what those assumptions were. If you note them down with the code, the problem of getting the procedures to work together will be easier to solve.

The larger the software, the more crucial is the need for documentation. The previous discussion applies to the construction of a single procedure as well as to the writing of a large software system.

Let us concentrate for a while on the question of developing a single procedure which solves a specific task. The design process consists essentially of taking a proposed solution and successively refining it until an executable program is achieved. The initial solution may be expressed in English or some form of mathematical notation.

At this level the formulation is said to be abstract because it contains no details regarding how the objects will be represented and manipulated in a computer. If possible the designer attempts to partition the solution into logical subtasks. Each subtask is similarly decomposed until all tasks are expressed within a programming language. This method of design is called the top-down approach.

Conversely, the designer might choose to solve different parts of the problem directly in his programming language and then combine these pieces into a complete program. This is referred to as the bottom-up approach. Experience suggests that the top-down approach should be followed when creating a program.

However, in practice it is not necessary to follow the method unswervingly; a look ahead to problems which may arise later is often useful. Underlying all of these strategies is the assumption that a language exists for adequately describing the processing of data at several abstract levels. Let us examine two examples of top-down program development. Suppose we devise a program for sorting a set of n ≥ 1 distinct integers.

One of the simplest solutions is given by the following: "from those integers which remain unsorted, find the smallest and place it next in the sorted list." This statement is sufficient to construct a sorting program. However, several issues are not fully specified, such as where and how the integers are initially stored and where the result is to be placed.

One solution is to store the values in an array in such a way that the i-th integer is stored in the i-th array position, A(i), 1 ≤ i ≤ n. We are now ready to give a second refinement of the solution: for i ← 1 to n do: examine A(i) to A(n), and suppose the smallest integer is at A(j); then interchange A(i) and A(j).
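The refinement translates directly into code. A sketch of this selection sort, using Python's 0-based indexing rather than the book's 1-based A(1..n):

```python
def selection_sort(a):
    """Sort a list in place: repeatedly find the smallest of the
    unsorted suffix and interchange it into the next position."""
    n = len(a)
    for i in range(n - 1):          # upper limit n-1 suffices;
        j = i                       # the largest ends up last anyway
        for k in range(i + 1, n):   # examine A(i) through A(n)
            if a[k] < a[j]:
                j = k               # j indexes the smallest so far
        a[i], a[j] = a[j], a[i]     # interchange A(i) and A(j)
    return a

print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]
```

The two subtasks identified next in the text, finding the minimum and interchanging it with A(i), appear as the inner loop and the final assignment respectively.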

There now remain two clearly defined subtasks: (i) to find the minimum integer and (ii) to interchange it with A(i). Eventually A(n) is compared to the current minimum and we are done; following the last execution of these lines, the smallest of the remaining integers occupies A(i). We observe at this point that the upper limit of the for-loop in line 1 can be changed to n - 1 without damaging the correctness of the algorithm: once the first n - 1 positions hold the n - 1 smallest integers, the largest is already in A(n).

From the standpoint of readability we can ask if this program is good. Writing the interchange as a single exchange emphasizes that the variables are thought of as pairs and are changed as a unit.
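Python makes this pair-at-a-time view explicit with parallel assignment (a feature of the language, not of the book's SPARKS notation):

```python
x, y = 3, 8
x, y = y, x   # both variables change as a unit; no temporary needed
print(x, y)   # 8 3
```

The single statement reads as "exchange the pair," which is closer to the intent than three assignments through a temporary.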

Which properties does it lack?