Recursion is the process of solving a problem by breaking it down into smaller versions of itself. A familiar example is the factorial of a nonnegative integer in algebra, which is defined by the following equations:
0! = 1
n! = n * (n - 1)! if n > 0
These equations help define the concept of recursion in computer programming, but a closer look at them is needed to see how. The first equation is known as the base case, while the second is known as the general case. The base case contains no factorial notation, whereas the general case expresses n! in terms of the factorial of a smaller number, (n - 1)!. Together, these two equations form a recursive definition. The following can be derived from a recursive definition:
1. Every recursive definition must have one (or more) base cases.
2. The general case must eventually be reduced to a base case.
3. The base case stops the recursion. (Malik 357)
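The three rules above can be illustrated with a short sketch (Python is used here for brevity; the same structure applies in any language):

```python
def factorial(n):
    """Compute n! recursively for a nonnegative integer n."""
    if n == 0:                       # base case: 0! = 1, stops the recursion
        return 1
    return n * factorial(n - 1)      # general case: reduces toward the base case

print(factorial(5))  # prints 120
```

Each recursive call passes a smaller argument (n - 1), so the general case is eventually reduced to the base case n == 0, which ends the recursion.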
Using the information above, a computer programmer can solve a problem with a recursive algorithm, which is implemented as a recursive function. A recursive algorithm must have one or more base cases, and its general case must eventually reduce to a base case. A recursive function is a function that calls itself before the current call has completed; recursive functions are how recursive algorithms are implemented in code. Keep in mind that a program containing a recursive function can, in effect, create an unlimited number of copies of that function: every recursive call gets its own set of parameters and local variables, although all calls execute the same code. Finally, remember that after a recursive call completes, control goes back to the calling environment. Before control can return to the previous call, the current recursive call must finish executing; the previous call then resumes execution at the point immediately following the recursive call.
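This call-and-return behavior can be made visible with a small Python sketch. The `depth` parameter and the print statements are illustrative additions, not part of the recursive definition; they only trace how each call has its own copy of `n` and how control returns to the caller:

```python
def factorial(n, depth=0):
    """Compute n! recursively, printing a trace of each call and return."""
    indent = "  " * depth
    print(f"{indent}entering factorial({n})")   # each call has its own n
    if n == 0:
        result = 1                              # base case stops the recursion
    else:
        # control pauses here until the inner call finishes,
        # then resumes at this exact point
        result = n * factorial(n - 1, depth + 1)
    print(f"{indent}leaving factorial({n}) -> {result}")
    return result

factorial(3)
```

The trace shows the calls nesting inward until the base case is reached, then unwinding in reverse order as each call completes and returns control to its caller.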