We now analyze the continuations that appear in ordinary Scheme programs. We shall see that terms like iteration and tail recursion can be given precise definitions.
Consider the usual recursive definition of factorial:

```scheme
(define fact
  (lambda (n)
    (if (zero? n)
        1
        (* n (fact (sub1 n))))))
```

Now suppose we evaluate (fact 3). When the first recursive call, (fact 2), occurs, the current continuation can be described by the context

(* 3 ¤ )
Now when (fact 2) arrives at its recursive call to (fact 1), the continuation for this call is the one obtained by extending the initial context to
(* 3 (* 2 ¤ ))

Note that the new context arises from partially filling the hole of the original context with information obtained during the call to (fact 2).
The execution state of a computation at a given point in time is all the information needed to restart the computation from that point, had execution halted there. Each subexpression/context pair supplies this information for the point at which the subexpression is evaluated. The full sequence of pairs records the execution history of the computation. We shall use the term dynamics when referring to this sequence.
The following table shows the complete dynamics of (fact 3). Each IN line is a snapshot taken when a recursive call is entered, with the corresponding OUT line indicating when the call returns. Note that the context (i.e. continuation) grows at each entry until the recursion finally terminates at n = 0. The "answer", 1, returned at this point fills the hole of the final context, and each subsequent returned value fills the hole of the context at the corresponding entry point.
direction | expression | context | result |
---|---|---|---|
IN | (fact 3) | ¤ | ? |
IN | (fact 2) | (* 3 ¤ ) | ? |
IN | (fact 1) | (* 3 (* 2 ¤ )) | ? |
IN | (fact 0) | (* 3 (* 2 (* 1 ¤ ))) | ? |
OUT | (fact 0) | (* 3 (* 2 (* 1 1))) | 1 |
OUT | (fact 1) | (* 3 (* 2 1)) | 1 |
OUT | (fact 2) | (* 3 2) | 2 |
OUT | (fact 3) | 6 | 6 |
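The table above can be reproduced directly by instrumenting fact to print a snapshot at each entry and exit. The name trace-fact and the output format are ours, not from the original text; this is just a minimal sketch.

```scheme
;; A hypothetical instrumented factorial that prints the IN/OUT
;; dynamics shown in the table above.  Each IN line corresponds to
;; the context growing by one (* n ¤ ) layer; each OUT line fills
;; the hole of the context at the matching entry point.
(define trace-fact
  (lambda (n)
    (display "IN  (fact ") (display n) (display ")") (newline)
    (let ([result (if (zero? n)
                      1
                      (* n (trace-fact (- n 1))))])
      (display "OUT (fact ") (display n) (display ") = ")
      (display result) (newline)
      result)))

(trace-fact 3)  ; prints four IN lines, then four OUT lines; returns 6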
```scheme
(define reverse
  (lambda (l)
    (cond
      [(null? l) '()]
      [else (append (reverse (cdr l)) (list (car l)))])))
```
```scheme
(define islat
  (lambda (l)
    (cond
      [(null? l) #t]
      [(atom? (car l)) (islat (cdr l))]
      [else #f])))
```
How do the dynamics of either (islat '(a b c d)) or (islat '(a b (1 2))) differ from those of either (fact 3) or the program in part 1 of this exercise?
Consider a Scheme function F. A tail call is any call to a function G from within the body of F that, if executed, would be the last subexpression evaluated by that instance of F.
In other words, when G returns, there is nothing left for F to do but pass G's value back to F's caller. The term tail recursion describes recursive calls that are also tail calls; F is tail recursive if all recursive calls in F are tail calls.
For example, the Little Schemer functions islat and member are tail recursive: each instance makes its recursive call as the last action of that instance. However, neither rember nor fact (as defined in the previous section) is tail recursive. rember rebuilds the list with the expression (cons (car l) (rember a (cdr l))), and fact likewise postpones all of its multiplications, doing them only as it returns from recursive calls.
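For reference, here is one common definition of rember, in the style of The Little Schemer; the book's exact version may differ in minor details.

```scheme
;; rember removes the first occurrence of a from the list l.
;; The recursive call in the last clause sits inside a cons, so
;; each call extends the context by a (cons (car l) ¤ ) layer --
;; exactly what disqualifies it as a tail call.
(define rember
  (lambda (a l)
    (cond
      [(null? l) '()]
      [(eq? (car l) a) (cdr l)]
      [else (cons (car l) (rember a (cdr l)))])))
```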
This definition of a tail call is descriptive, but not very precise. Here is a more exact version.
Suppose function F, executing in some context C, makes a call to a function G. The call to G is a tail call if the context in which G executes is also C.
In other words, with a tail call the continuation does not grow.
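A tiny example makes the definition concrete. The functions f, g, and h below are hypothetical names chosen for illustration, not taken from the text.

```scheme
;; Two helper functions, purely for illustration.
(define h (lambda (x) (+ x 1)))
(define g (lambda (x) (* x 10)))

;; In the body of f, the call to h is NOT a tail call: if f executes
;; in context C, then h executes in the larger context built by
;; wrapping C around (g ¤ ).  The call to g IS a tail call: g
;; executes in C itself, so the continuation does not grow.
(define f
  (lambda (x)
    (g (h x))))

(f 3)  ; => 40
```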
Why are we interested in tail recursive calls? Because they are cheap. Consider the mechanics of executing (fact n). With each recursive call the context becomes bigger. That context must be saved somehow, or else we lose the computation. The execution stack in your computer exists for exactly this purpose. Every time a function is called, a new frame containing the current execution state is pushed onto the stack; this frame is restored when control returns from the call. Altogether, the collection of frames on the stack constitutes the current continuation.
The following diagram shows how the stack models the context dynamics of (fact 3).
Now, consider a tail recursive program, like islat. If a frame is likewise pushed on the stack for each recursive call, you get something like this.
This outcome is consistent with the definition of tail recursion, but is clearly a waste of effort. In an efficient implementation, a tail recursive call pushes no new frame; it simply begins executing the code corresponding to the call, reusing the current context. An implementation that behaves this way is said to be properly tail recursive.
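To see the payoff, factorial can be rewritten so that its recursive call is a tail call, carrying the pending multiplications in an accumulator argument. The names fact-iter and acc below are ours; this is the standard accumulator transformation, sketched minimally.

```scheme
;; A tail-recursive factorial: each multiplication is performed
;; BEFORE the recursive call and carried in acc, so the context is
;; the same at every call -- the continuation never grows.
(define fact-iter
  (lambda (n acc)
    (if (zero? n)
        acc
        (fact-iter (- n 1) (* n acc)))))

(fact-iter 3 1)  ; => 6
```

Under proper tail recursion this runs in constant stack space, however large n is, while the original fact needs a frame for each pending multiplication.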
Now compare once again the while loop and its recursive counterpart from before.
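The while loop referred to here is not reproduced in this section. A typical Scheme rendering of such a pair looks something like the following sketch; the names while-loop, test, and body are ours, and the author's actual definition may differ in detail.

```scheme
;; A sketch of a while loop as a tail-recursive procedure.  The
;; recursive call to loop is in tail position, so under proper tail
;; recursion it is effectively a jump back to the top of the loop.
(define while-loop
  (lambda (test body)
    (letrec ([loop (lambda ()
                     (if (test)
                         (begin (body) (loop))  ; tail call
                         'done))])
      (loop))))

;; Example: count n down from 3 to 0.
(define n 3)
(while-loop
  (lambda () (> n 0))
  (lambda () (set! n (- n 1))))
```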
This gives us a precise definition of iteration: a repetitive program that does not create a growing context on the stack; in other words, a properly tail recursive program.
Now, let us return to the Scheme while loop defined above. Note that the recursive call to (loop) in the body of loop is in fact a tail recursive call. If Scheme is properly tail recursive, then this call is effectively a jump to the beginning of the loop code. Of course, Scheme is properly tail recursive (you probably guessed that by now!), and so any tail recursive procedure is actually fully iterative, in the sense of Pascal or C.