
Recursion in Programming and When to Use or Not to Use It


Recursion is quite simple: it's a subroutine calling itself. Surprisingly, some problems that look quite hard become trivial with recursion. But be wary – as I hope to explain, there are traps you need to consider, especially if you work in a team environment.

When I studied Computer Science, the first language I learned was FORTRAN. You can't easily do recursion in FORTRAN (my lecturer went so far as to say it cannot be done, but a Google search will show that's not quite true; for this article, though, I will assume you can't). At the end of the FORTRAN part of the course, they devoted a couple of weeks to Pascal to explain its advantages and disadvantages compared to FORTRAN. Pascal does allow recursion, so that's where I first cut my teeth on it. The other major language in those far-off days was COBOL. Some versions allow it, but the general advice is, like FORTRAN: forget it. However, FORTRAN and COBOL are not modern languages; most modern languages allow it. To really understand recursion, it helps to write an assembler program that uses it. When I did assembler, I had to write the quick-sort.

After reading the Wikipedia article on the quick-sort, you are probably going "yuck" already. So let's start with the usual first recursive program people write: the Towers of Hanoi.

Go ahead, write it, and run it. You will learn a lot. It was the first recursive program I wrote, using Pascal. Since then I have written many more and gained a lot of experience about when and when not to use it. Wikipedia goes deeper into the Towers of Hanoi and will help in starting to understand the issues involved.

When you have a problem to solve, sometimes a recursive solution leaps out at you. But from long experience in that ultimate educational institution, the school of hard knocks, that is the point to stop and say: be careful. Here is an example – calculating factorial n, i.e. n! = n(n-1)(n-2)…1. A recursive solution is easy – simply have a subroutine Factorial(n). In structured English:

Factorial (n)
If n = 1 return 1
else
return n*Factorial(n-1)

A little note to newish programmers: as you start your programming journey, some textbooks get you to use flowcharts and the like. IMHO structured English is preferable.

Eventually you will have written a lot of programs, and you will simply pick the one closest to what you want, then hack it. What was it one of my teachers said? First-year students are coders – you write directly in the language you are using. Second-year students are programmers – you write structured English first. Third and fourth year – hackers. But while the recursive code for factorial n is easy, it's also easy to write with just a loop. Go ahead and write a loop version in your favorite language, then compare the two solutions (a sketch of both follows below). What do you notice? The recursive solution is slower. Why is that? Well, it's got to do with what happens when you call a subroutine. The machine needs to return the program to the state it was in before the call. That means it must store a fair amount of information before executing the subroutine and restore it on return. That takes time a loop solution does not require.
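
For concreteness, here is a quick sketch of both versions in Python (any language will do; the choice of Python here is just for illustration):

Python

def factorial_recursive(n):
    # Base case: treat 0! and 1! as 1.
    if n <= 1:
        return 1
    # Each call pushes a new stack frame before the multiplication happens.
    return n * factorial_recursive(n - 1)

def factorial_loop(n):
    # Same result with a single running product and no extra stack frames.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result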

In the version of Pascal I used, that information got put in an area called the stack. Other things the program needs to store are put in an area called the heap. As you muck around with recursion more and more, you eventually get the dreaded message: stack overruns heap. You can guess what the compiler was doing – it had one program area; at the top, the stack grew downwards; at the bottom, the heap grew upwards; and if they collided – bye-bye program. To really understand it you need to write a recursive program in assembler, as I had to do with the quick-sort. As an aside, the lecturer was initially kind and gave us the Towers of Hanoi to write in assembler. Only something like 25% of students did the assignment, which made the lecturer mad, so we all had to do the quick-sort instead – some lecturers are wonderful people.
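
Modern runtimes guard against this collision rather than letting the stack silently run into the heap. A small Python sketch that deliberately goes too deep shows the effect:

Python

import sys

def count_down(n):
    # Each call adds a stack frame; unbounded depth would exhaust the stack.
    if n == 0:
        return 0
    return count_down(n - 1)

print(sys.getrecursionlimit())   # typically 1000 by default
try:
    count_down(10_000)           # exceeds the default limit
except RecursionError as err:
    print("recursion too deep:", err)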

Well, I eventually graduated and went out into the big wide world. I went to the Australian Federal Police and learned one thing very quickly: programmers want simple code, and they generally hated recursion. I had to write a program to scan links in an intelligence database. Most IT departments at that time used COBOL – so no recursion. But we had recently switched to a language that allowed it (NATURAL, for those who know NATURAL/ADABAS from Software AG). Now I had the choice. It would have been easy to simply scan the records and then recursively scan the links. Easy – but you have possible speed issues, as with the factorial program, and since most other programmers were used to COBOL, many had possibly never even seen recursion before. So I decided on a loop-type solution. Later, when I worked in another department, there was a letter engine written by someone else that used recursion.

Nobody wanted to work on it – it always ended up on my desk. I gave it to one of my staff. He complained to my boss that it was too hard. Eventually he went to work in another area and just wrote some simple reports. He was much happier. Why? Well, I remember a cartoon where a new staff member was being shown around by the IT department head. They came across a guy surrounded by heaps of printouts, poring assiduously over code. The manager's comment was: put him in front of a terminal and he is a genius, but otherwise he is so boring and antisocial he will never break into management. No wonder some programmers want simple, easy-to-understand programs that avoid advanced concepts – technical competence by itself may not be the ticket to advancement, depending on the culture of where you work.

It can backfire too – but stories about that are way off-topic. Bottom line: best to keep it simple, stupid (KISS). That's often good advice regardless of what you do. Recursion can be hated by other programmers who later have to maintain it, so my recommended rule of thumb is: only use it if you think it's really worth it. That's why the ancient bible on programming, The Art of Computer Programming by Knuth, has that critical word in it – ART.

Comment Thread

38 replies
  1. bhobba says:
    "
    ANY algorithm can be implemented with loops instead of recursive function calls, however some are easier to implement recursively (e.g. quicksort). Which runs faster and/or with less memory depends on a number of factors.
    "

    Yes. But it's the whole structured programming thing we had constantly drummed into us during my university days. The choice is made on which will be the most maintainable. Deciding that, of course, is both art and science. The last time I posted about it on another forum, frequented by more modern programmers than me, the response I got was: "Give me that old-time religion."

    Thanks
    Bill

  2. PeterDonis says:
    "
    ANY algorithm can be implemented with either loops or recursive function calls
    "
    Yes, I was being imprecise; by "loop" I meant a loop that does not have to add more information to some data structure on every iteration; it can update information on every iteration, but only if it throws away the old information so the total information kept track of is of (roughly) constant size.

    An example in Python would be a for loop using the range built-in iterator; this does not compute all of the numbers to be iterated over in advance, it only computes them one at a time as they are needed, and throws away the old one each time it computes a new one.

    You could of course implement something similar using recursion, as long as whatever language you were using could optimize away the parts that would require storing more data on each iteration (for example, tail call optimization to eliminate repeated pushing of parameters on the stack).
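
    As a rough illustration of that distinction (a sketch added here, not part of the original post): the loop below keeps only constant-size state, while the equivalent tail-recursive form still uses one stack frame per step in CPython, which does not eliminate tail calls.

    Python

    def sum_loop(n):
        # range yields one value at a time; state is just total and i.
        total = 0
        for i in range(n):
            total += i
        return total

    def sum_recursive(n, total=0):
        # Tail-recursive form; without tail call elimination it still
        # grows the stack by one frame per step.
        if n == 0:
            return total
        return sum_recursive(n - 1, total + n - 1)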

  3. PeterDonis says:
    "
    The alternative is to define a structure which has all the data elements needed to describe the state at each recursion reentry – the equivalent of all those local variables that are residing on the stack. Then pick a maximum recursion depth and declare an array of those structures of that size.
    "
    In many cases you don't even need to do that, since all you need is a single data structure, not an array of them, to keep track of the state. This fact is why so many algorithms can be implemented as loops instead of recursively. For example, to compute the nth Fibonacci number, you don't need n data structures; all you need to keep track of is the current loop index and the two most recent values in the sequence.
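
    A minimal sketch of that constant-size state for the Fibonacci example (an illustrative addition, not PeterDonis's code):

    Python

    def fib(n):
        # Only two values are carried from one iteration to the next.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a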
  4. jedishrfu says:
    Julia, too, is getting a lot of press. It's functional, not OO, but with function-signature overloading. It's fast, but there's a precompile cost that's frustrating. It's similar to MATLAB but faster for numerical work, and it can act as glue to integrate Python, Fortran and R into a collective application.
  5. Mark44 says:
    Interesting to hear about Lua and Pythonic++. When I get tired of playing with Erlang, I think I'll look into these languages. I'm always interested in speeding things up.
  6. bhobba says:
    "
    The Lua reference is pretty interesting in that it beat Numpy for both tasks.
    "

    Yes, LuaJIT by itself is very nice.

    There is an even higher-level language that compiles to Lua:
    https://moonscript.org/

    For simpler programs I just write in MoonScript, knowing that LuaJIT will be nearly as fast as C++.

    Thanks
    Bill

  7. jedishrfu says:
    @bhobba The Lua reference is pretty interesting in that it beat Numpy for both tasks. I've played with Lua in the context of doing a 360 degree Rosette Graphing app on iOS Codea. Lua seems like a nicer version of Python although I only did one app so far.

    The Lua article makes me want to play with it again and maybe even get it working on my desktop linux box.

    Thanks.

  8. Mark44 says:
    "
    Here's the rosettacode.org Erlang version: https://rosettacode.org/wiki/Pascal%27s_triangle#Erlang


    Pascal triangle in Erlang

    -import(lists).
    -export([pascal/1]).
    
    pascal(1)-> [[1]];
    pascal(N) ->
        L = pascal(N-1),
        [H|_] = L,
        [lists:zipwith(fun(X,Y)->X+Y end,[0]++H,H++[0])|L].

    It looks like it has a tail recursion design.
    "
    I don't think so. After the recursive call to pascal(), the code still needs to do a couple of operations.

  9. bhobba says:
    Nice to see that my little article on recursion, with a bit of humour, has created some interesting discussion.

    Previously I suggested programming in Python and using LuaJIT with some simple C glue if some parts needed speeding up.

    But things move on, and the one I now find interesting is TPython++.

    The latest version is even smarter. Instead of you making some minor changes to the Python code so those bits are compiled as C++, it goes through your Python code and automatically converts the bits where that can easily be done into compiled C++. Of course, you can still manually convert the code you want in C++ to speed things up further. Actually, it is not C++, but a Python-like version of C++ called Pythonic++.

    Thanks
    Bill

  10. Mark44 says:
    "
    Interesting, I would have thought the compiler could have sorted it out (perhaps there is a higher level of optimization you can select) but I think you need


    Code

      true ->
        (N * (N - 1)) div ((N - K) * K) * combi(N - 2, K - 1)

    "
    I'm not familiar enough with the compiler and its settings to know if I can adjust this.

    Regarding your suggestion, I actually tried that, but got incorrect results, due to integer division being performed at the wrong time. For example, for ##~_5C_2## the factor to the left of the call to combi() is
    $$\frac{5 \cdot 4}{3 \cdot 2}= \frac {20} 6$$
    With integer division, this works out to 3. The other factor, from the call to combi(), is ##~_3C_1## or 3, so the result would be 3 * 3 = 9.

    If the factors are placed in the proper order, you get ##20 \cdot 3 \text{ div } 6##, or 10, the correct value. I haven't run across any information about operator precedence and associativity, but it's almost certainly the case that multiplication and division are left-associative.
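
    The ordering issue is easy to demonstrate with integer division in any language; a quick illustration (added here in Python rather than Erlang, using the same numbers as above):

    Python

    # Wrong order: the inexact intermediate division loses information.
    print(20 // 6 * 3)   # 9
    # Right order: multiply first, then the division is exact.
    print(20 * 3 // 6)   # 10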

  11. jedishrfu says:
    One function we wrote recursively was Ackermann's function. It's a do-nothing function, but it can really drag your machine down while doing a computation.

    https://en.wikipedia.org/wiki/Ackermann_function

    Here's the rosettacode.org Erlang version:

    https://rosettacode.org/wiki/Pascal's_triangle#Erlang


    Pascal triangle in Erlang

    -import(lists).
    -export([pascal/1]).
    
    pascal(1)-> [[1]];
    pascal(N) ->
        L = pascal(N-1),
        [H|_] = L,
        [lists:zipwith(fun(X,Y)->X+Y end,[0]++H,H++[0])|L].

    It looks like it has a tail recursion design.

  12. Mark44 says:
    I've taught a lot of introductory programming classes over the years, mostly in C or C++, but also in other languages, such as Modula-2 (one time) and MIPS assembly (a half-dozen times). In all of these classes I've had students write programs that feature functions and in most I have assigned problems in which they write recursive functions.

    A few years back I got interested in Lisp and wrote several programs in that language. More recently I have developed an interest in functional programming, and have been learning Erlang, a language that was developed by Ericsson AB, a Swedish telecom company.

    In my classes, whenever I presented material on recursive functions, I tried to come up with examples other than the ubiquitous factorial function or the Fibonacci function, examples of which seem to appear in every programming textbook that covers recursion.

    Two examples that I think are interesting feature recursive functions that 1) convert a decimal integer into its hexadecimal form, and 2) calculate the binomial coefficient ##\binom {n}{k}##, also written as ##~_nC_k##. The binomial coefficient gives the number of combinations of n items taken k at a time.

    Here's the first of these, in Erlang. It converts a decimal integer to its hexadecimal form.
    View attachment 318524
    For some reason, the forum software wouldn't let me insert the code as text inside code tags, so I've attached the code as images. I've done a fair amount of testing, and this code seems to work well. Most of the work of recursion is done in the second toHexAsst() function, in which the last line calls the function recursively, making it tail-call recursive. In Erlang, a recursive function with tail-call recursion gets converted by the compiler into a loop, so there's not the problem of blowing up the stack if recursion goes too deep.

    The other example calculates the binomial coefficient, ##~_nC_k##. This is something I've never come across in my reading. I've omitted the comments I have in the code, but I'll provide a brief explanation here.

    If f(n, k) = ##~_nC_k##, a recurrence relation is f(n, k) = n*(n – 1) * f(n – 2, k – 1) / [ (n – k) * k].
    So, ##~_nC_k## = n * (n – 1) / [(n – k)*k] * ##~_{n – 2}C_{k – 1}##. The division here is integer division, which is guaranteed to be exact by virtue of the numbers involved.

    By symmetry ##~_nC_k## = ##~_nC_{n -k}##. For example, ##~_8C_5 =~_8C_{8 – 5} = ~_8C_3##. This is easy to confirm by writing a few rows of the Pascal Triangle, which contains the binomial coefficients.
    View attachment 318523
    This one isn't tail-call recursive, and I can't for the life of me figure out how to make it so. Previous attempts produced incorrect values due to the integer division being performed too early.
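
    For what it's worth, one way to sidestep the early-division problem entirely is to accumulate the product and divide step by step, where each intermediate division happens to be exact. A loop sketch of that accumulator idea in Python (not Erlang, and not tail recursion, just an illustration):

    Python

    def combi(n, k):
        # Use the symmetry nCk = nC(n-k) to keep the loop short.
        k = min(k, n - k)
        result = 1
        for i in range(1, k + 1):
            # result equals C(n, i-1) here, so result * (n - i + 1) is divisible
            # by i and the integer division stays exact at every step.
            result = result * (n - i + 1) // i
        return result

    print(combi(5, 2), combi(8, 5))   # 10 56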

  13. PeterDonis says:
    "
    Although recursion can act like a loop, it can not be used as an alternative for a loop, the main reason is recursion can make a simple loop program complex and increase time and space complexity.
    "
    This is not always true. Some languages (for example, Lisp) implement loop constructs using recursion, and have optimizations for common cases of recursion that eliminate the increase in time and space complexity (for example, tail call optimization).

    A better general argument would be that loops are much easier for programmers to reason about than recursion, so that, even if your language implements loop constructs using recursion, it still benefits from having the loop construct because it is easier for programmers. In most cases these days, it is much more important to optimize programmer time than to optimize computation time, since the former is far more expensive than the latter.

  14. Stephen Tashi says:
    "
    Yes, a good example of this is when you make an expression parser.
    "

    That's a very good example because most programming languages that I have seen use recursion in defining what constitutes a valid statement in the language.

  15. Mark44 says:
    "
    Sadly, a lot of programming hacks (Z) take a slash and burn approach littered with commented out code and expired notes forever dotting the landscape with future programmers afraid to remove it for fear that it may be needed sometime once again.
    "Is the above relevant to anything in this thread? We seem to be getting far afield from the Insights article on recursion.
  16. jedishrfu says:
    Sadly, a lot of programming hacks (Z) take a slash and burn approach littered with commented out code and expired notes forever dotting the landscape with future programmers afraid to remove it for fear that it may be needed sometime once again.
  17. jedishrfu says:
    Yes, they had an area to save the registers prior to the call. Some registers were then loaded with argument pointers and upon return the registers were reloaded with their older values.
  18. bhobba says:
    The programming language I mostly used, called Natural, allowed recursion, but few used it, and those who were called upon to maintain the few programs that did tried to avoid the job like the plague – buck-pass it whenever possible. Interestingly, many programs that were put in the too-hard basket were solved using an array as a stack. One that did the rounds I originally wrote, but it ran too long. What it did was a heap of database searches to update things. It was possible to store a direct pointer to a database record rather than search for it, and if you did that it retrieved records much faster. So I figured maybe it was retrieving many of the same records a lot, and on a punt I kept a copy of the record locations in a large array I used as a stack. I searched that first before searching the database. Amazingly, it cut the run time from all night to about an hour.

    Thanks
    Bill

  19. jedishrfu says:
    I'm sure programmers before him used variations of the stack. He was likely the first to describe it clearly in the literature though.

    I remember implementing a data stack in programs I wrote in BASIC for a CS class on data structures. My computer's BASIC could handle recursive calls, but not recursive data, so I implemented an index that I incremented and decremented as I traversed the recursion. My data values were stored in arrays referenced by that index.

    FORTRAN on mainframes in the 1970s didn't have recursive capabilities. If you tried it, you got caught in a loop.
    The reason was that the return address was kept in a single location, so sub A could call sub B and sub B could call sub C, but if sub B or C called A again, A's return address would now point back to them and not to the original caller.
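
    The trick described above (an explicit index into arrays standing in for the call stack) is still the standard way to turn a recursive algorithm into a loop. A rough Python sketch of the same idea, added here for illustration:

    Python

    def factorial_explicit_stack(n):
        # Simulate the call stack with a plain list, the way the BASIC
        # version used arrays plus an index that goes up and down.
        stack = []
        while n > 1:          # "descent" phase: push the pending work
            stack.append(n)
            n -= 1
        result = 1
        while stack:          # "return" phase: pop and combine
            result *= stack.pop()
        return result

    print(factorial_explicit_stack(5))   # 120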

  20. neilparker62 says:
    One can easily experiment with recursive routines on Excel.

    Example 1: the Fibonacci sequence:

    Cell A1 = 1
    Cell A2 = 1
    Cell A3 = A1 + A2

    Then copy/paste the formula in Cell A3 to (n-3) cells below for n terms in the Fibonacci sequence.

    Example 2: generate sine and cosine (actually we'll use versin(x) which is 1 – cos(x)). Let x = pi/6

    Cell A1 = pi()/6*2^(-10), Cell B1 = (pi()/6*2^(-10))^2/2
    – these are small-angle approximations for sine and versin; the smaller the angle, the better the approximation.
    Cell A2 = 2*A1*(1-B1), Cell B2 = 2*(A1)^2
    – the doubling algorithm.

    Copy/paste the doubling formulae in cells A2 and B2 to the next 9 rows to determine sin(pi/6) and versin(pi/6). Obviously cos(pi/6) is easily obtained from 1 – versin(pi/6).

    Example 3. Home loan balance given fixed interest rate and monthly payment.

    Cell A1 = 1e6
    Cell A2 = A1*(1 + 0.01) – 11010.86

    Copy / paste the formula in A2 to next 239 rows where n=240 is total no of months (period of loan). Graph your output to see the characteristic curve for loan balance.
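
    The doubling recurrence in Example 2 translates directly into a short loop; a minimal Python check of the same idea (same starting angle and ten doublings as above, added here for illustration):

    Python

    import math

    x = math.pi / 6 / 2**10      # tiny starting angle
    s = x                        # small-angle approximation of sin(x)
    v = x * x / 2                # small-angle approximation of versin(x) = 1 - cos(x)

    for _ in range(10):          # ten doublings bring the angle back to pi/6
        # sin(2x) = 2 sin(x)(1 - versin(x)), versin(2x) = 2 sin(x)^2
        s, v = 2 * s * (1 - v), 2 * s * s

    print(s, 1 - v)              # approximately sin(pi/6) = 0.5 and cos(pi/6) = 0.866...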

  21. anorlunda says:
    "
    Recursion is often the result of more complicated situations than a function calling itself. For example …
    "
    That's an excellent point, although I always called that re-entrant rather than recursive. You can get the same thing with real time programs and with parallel processing.

    It would be clearer if the title emphasized recursive algorithms rather than recursive subroutine calls.

    "
    There are a couple of situations where recursion is prohibited. The first is related to safety standards …
    "
    That is also an excellent point. There are some analogous prohibitions on use of virtual memory in time-critical safety-related programming.

  22. DEvens says:

    OK, much as the idea of applying the quicksort algorithm to the Tower of Hanoi problem is intriguing… I still don’t know when and when not to apply recursion. I know two situations where it’s at least worrisome and should be carefully considered before applying.

    If each level of the problem has complicated, large, or difficult-to-initialize data, you may not want to use recursion. Unless there is some easy way to use the same copy of this stuff and somehow increment a counter or pointer or some such. You don’t want to make a brand new copy of your entire database at every level. It’s going to use way too much memory and way too much time.

    If the scheme can branch, especially if it can produce a huge tree, you may not want to use recursion. Maybe, for example, recursion is not the way to play chess. Even a few moves can produce an enormous tree. Or at least, not naive recursion. “Pruning” is one of those descriptive buzzwords that gets you on a potentially useful track. But even a pruned chess tree can be daunting.

    Another aspect of recursion is that you may want a helper function. Call the helper function to do the prep work such as checking ranges, pestering the calling function with error messages, etc. Then the recursive function can be simpler. In the factorial example, you would not need to check that the input value is greater than 0 in the recursive routine, only in the helper routine: it checks that the argument is in a valid range, then calls the recursive routine. Other possible uses for the helper function are easy to imagine.
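
    A minimal sketch of that helper pattern using the factorial example (an illustration added here, not DEvens's code):

    Python

    def _factorial(n):
        # The recursive core assumes its argument has already been validated.
        if n <= 1:
            return 1
        return n * _factorial(n - 1)

    def factorial(n):
        # Helper/wrapper: do the range checking once, up front.
        if not isinstance(n, int) or n < 0:
            raise ValueError("factorial is only defined for non-negative integers")
        return _factorial(n)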

    There are likely other situations I have not considered that will produce troublesome behavior with recursion.

  23. PeterDonis says:
    "
    CPython, the reference Python implementation, doesn't do tail call elimination and Python's inventor has said they have no intention of changing this, so there's little point in writing tail-recursive code in Python.
    "

    I think there have been some attempts to implement tail call elimination in CPython by bytecode manipulation. I don't know that I would recommend that in a production system, since bytecode manipulation is usually considered non-standard, but it's possible.

  24. anorlunda says:
    Fun article @bhobba. Thanks for sharing.

    Count me among the programmers who favored KISS over recursion. I never did find a case where I thought it was actually needed. That is until the day I first read about MapReduce.

    https://en.wikipedia.org/wiki/MapReduce
    When you introduce the complexity of parallel processing, and distributed clusters of computers, then a simple loop won't work, and the elegance of MapReduce greatly simplifies things.

  25. Mark44 says:
    "
    If you want to understand recursion better I suggest learning the programming language LISP.
    "Which is something I am attempting to do at the moment. My first experience with Lisp was in about 1982, with an implementation for the Apple II computer. I couldn't see the point of it at the time. Since then, I've realized that there are a lot of good reasons to become acquainted with this language.

    Since this is a thread about recursion, I should mention that virtually every discussion of recursion in any programming language that supports this feature includes the factorial function as an example. It's almost as if there were no other possible uses. Because I teach classes in C++ and MIPS assembly, I try to come up with examples other than those that typically are in textbooks.

    Here's an example in Lisp that I wrote that converts a nonnegative decimal integer to its hex form.


    Code

    ;; Example call: (to-hex 100 nil) prints Hex:    0x64
    (defun to-hex (num hexDigitsList)
        ;; Divide by 16 and keep the integer quotient and remainder.
        (let ((quot (floor num 16))
              (remainder (mod num 16)))
            ;; Remainders are pushed on the front, so they come out
            ;; most-significant digit first when printed.
            (push remainder hexDigitsList)
            (if (> quot 0)
                ;; More digits to extract: recurse on the quotient.
                (to-hex quot hexDigitsList)
                ;; Quotient is zero: print the accumulated digits.
                (progn
                    (format t "Hex:    0x")
                    (loop for hexDigit in hexDigitsList
                        do (format t "~X" hexDigit))))))

    The underlying algorithm is this:
    Given a nonnegative decimal number,
    Divide it by 16 and save the (integer) quotient and remainder.
    If the quotient is positive, call the routine again with the quotient as the first argument,
    Otherwise, display the list of remainders.

    For example, if the function above is called with decimal 100 as the argument, the first call produces a quotient of 6 and a remainder of 4.
    In the second call, the new quotient is 0 (6 / 16 == 0 in integer division), and the remainder is 6.
    Since the new quotient is zero, we display the remainders in reverse order, resulting in the hex value of 0x64.

    A routine to convert decimal numbers to their binary form is nearly identical, with the main difference being that you divide by 2 each time rather than by 16.

    Any process that needs to produce results in the opposite order they were generated is really a natural for recursion. Other examples include displaying a string in reverse order and checking for palindromes.
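
    For readers who don't know Lisp, the same divide-by-16 algorithm looks roughly like this in Python (my translation for illustration, not Mark44's code):

    Python

    def to_hex(num, digits=None):
        # Divide by 16, save the remainder, recurse on the quotient.
        if digits is None:
            digits = []
        quot, remainder = divmod(num, 16)
        digits.insert(0, remainder)        # remainders come out in reverse order
        if quot > 0:
            return to_hex(quot, digits)
        return "0x" + "".join("%X" % d for d in digits)

    print(to_hex(100))   # 0x64, matching the worked example above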

  26. PeterDonis says:
    "
    I think it could be modified to do that though
    "

    With a simple refinement using Python's ability to define a default argument you can eliminate the need to include the "result" variable in the initial call:


    Python

    def factorial(n, result=1):
        if n == 0:
            return result
        return factorial(n - 1, result * n)

  27. bhobba says:
    "
    Said one mirror to her mother
    as they gazed at one another:
    I see an eternity unfolding
    Endlessly onward like no other.
    "

    If you want to understand recursion better, I suggest learning the programming language LISP. It is the second-oldest high-level language, developed just after FORTRAN, but it is influenced by different principles, namely the Lambda Calculus. For some info about LISP (including writing a Lisp interpreter) and the Lambda Calculus, plus some info on an interesting language called Haskell, see:
    https://crypto.stanford.edu/~blynn/lambda/
    For the seminal paper on LISP see (note only for advanced computer science students):
    https://aiplaybook.a16z.com/reference-material/mccarthy-1960.pdf
    For the average hacker – yes, I progressed from programmer to hacker – the following is better:
    http://languagelog.ldc.upenn.edu/myl/ldc/llog/jmc.pdf
    I read somewhere of an interesting experiment where they asked programmers to solve problems in various programming languages to see which language produced a working program fastest. Surprise, surprise: the winner was that second-oldest language, Lisp. Amazing. BTW, I do not use LISP – I use Python, then tweak it for performance with LuaJIT if I have to:
    http://alexeyvishnevsky.com/2015/05/lua-wraped-python/
    Another good exercise is what my teacher made us do: write the quick-sort in assembler. Actually, these days you would use C, which allows the embedding of assembler. You learn a lot, but I do not have fond memories of doing it – I avoid assembler whenever possible; I find LuaJIT plenty fast enough and much easier to understand and maintain.

    Actually, these days I do very little programming; 30 years of it was enough for me. I remember that when I started I loved programming, but slowly but surely it lost its 'charm'. Another reason to try and break into 'management', but that has its own issues that would be way off-topic for this thread.

    Thanks
    Bill

  28. jedishrfu says:
    Yes, a good example of this is when you make an expression parser. You can use recursion by using the parser itself to evaluate parenthesized expressions to any degree limited only by your memory. As an example:

    The expression ((2+3)*4 + 5*(6 -7))/(5+7)
    ParseExpr sees ((2+3)*4+5*(6-7)) and (5+7)

    Called again for the left expression (2+3)*4 + 5*(6-7)

    Called again for 2+3 returns a 5
    Then evaluates 5*4 to get 20

    Called again on 6-7 returns a -1
    Then evaluates 5 times -1 to get -5
    Then evaluates 20 – 5 to return 15

    Called again for 5+7 returns 12
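
    A compact recursive-descent sketch of that idea in Python (an illustration added here; it handles only non-negative integers, +, -, *, / and parentheses, and uses ordinary float division):

    Python

    import re

    def tokenize(text):
        # Numbers, operators and parentheses; whitespace is ignored.
        return re.findall(r"\d+|[()+\-*/]", text)

    def parse_expr(tokens):
        # expr := term (('+' | '-') term)*
        value = parse_term(tokens)
        while tokens and tokens[0] in "+-":
            op = tokens.pop(0)
            rhs = parse_term(tokens)
            value = value + rhs if op == "+" else value - rhs
        return value

    def parse_term(tokens):
        # term := factor (('*' | '/') factor)*
        value = parse_factor(tokens)
        while tokens and tokens[0] in "*/":
            op = tokens.pop(0)
            rhs = parse_factor(tokens)
            value = value * rhs if op == "*" else value / rhs
        return value

    def parse_factor(tokens):
        # factor := number | '(' expr ')'   <- the recursion happens here
        token = tokens.pop(0)
        if token == "(":
            value = parse_expr(tokens)
            tokens.pop(0)                  # discard the closing ')'
            return value
        return int(token)

    print(parse_expr(tokenize("((2+3)*4 + 5*(6 - 7))/(5+7)")))   # 1.25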

  29. fresh_42 says:
    I'm not sure where recursion comes in here, but I found the timely coincidence somehow fitting:

    Recursive language and modern imagination were acquired simultaneously 70,000 years ago
    https://archaeologynewsnetwork.blog…-language-and-modern.html#I63oCfC8CmXtWEhC.97
    https://riojournal.com/article/38546/
    Maybe I should have posted it in our lounge as your insight provoked some good jokes about recursion. Seems there is more to it than one might think.

  30. jedishrfu says:
    "
    Stack isn't a problem with many functional languages because of tail call elimination.
    "

    This only works for a certain type of recursive function, one that, once the answer is found, bubbles it straight back to the top. In contrast, the naive factorial is not amenable to tail recursion because it modifies the result as it bubbles back up.


    Python

    def factorial(arg):
        if arg==0:
            return 1
        else:
            return arg * factorial(arg-1)

    I think it could be modified to do that though:


    Python

    def factorial(arg, result):
        if arg==0:
            return result
        else:
            return factorial(arg-1, result*arg)

    to run:


    Python

    factorial(arg,1)

  31. jedishrfu says:

    Good article, Bill.

    I liked the mention of FORTRAN. It's true, though: recursion was not possible until Fortran-90, when they added the notion of a stack (to keep up with C, I suspect). The early Fortran I used, called Fortran-Y by Honeywell circa 1976, had a pass-by-reference scheme where addresses to data were stored in a specific data block by the subroutine/function when called, or in CPU addressing registers for speed. Consequently, if it called itself it would overwrite that same block, and upon return it would return to itself in an endless loop.

    Also, early Fortrans, because of the call-by-reference scheme, allowed you to pass results back through your arguments, which wasn't a problem until you used a literal. Calling a subroutine with the literal 3, and having the subroutine change the value to 6, meant that all through your program, wherever a 3 was used, it now became a 6. That was a very tricky problem for the unwary programmer to debug.

    Later Fortrans, most notably Fortran-90, adopted the notion of the stack, allocating the data block on the stack, and a call-by-value scheme, allowing you to implement recursive functions while protecting against changes to your arguments being passed back up the line.

    As Mark said, recursion was cool but it tended to use a lot of stack space for deeply recursive calls: imagine factorial(10000), which would call the factorial function about 10,000 times and return intermediate results about 10,000 times back to the original call. In C code, the stack grows toward the heap space, so if you allocated too much memory or recursed too deeply, they would collide and end in an explosive collision (actually, your program would just crash).

    In our class, we implemented Ackermann's function, which was said to be a good memory tester and showed the dangers of recursion.

    https://en.wikipedia.org/wiki/Ackermann_function

    LISP and PROLOG have recursion built into the language. They even implement a speedup called tail-call optimization for functions with this behaviour, to get back faster instead of popping the stack and returning through each level. The naive factorial function can't be optimized in this way. You can see an example in the link below, where a max value, once discovered, is returned straight up the stack.

    https://www.csie.ntu.edu.tw/~course/10420/Resources/lp/node38.html

    Thanks for writing and sharing this article.

    Jedi

    PS: while researching my comments I found this reminiscence of the halcyon days of Fortran:

    https://hackaday.com/2015/10/26/this-is-not-your-fathers-fortran/

  32. Mark44 says:
    "
    I would often face a StackOverflow exception, presumably because the recursion didn't terminate as the base case was ill-defined.
    "This is a good lesson about using recursion — namely, that you need a clear-cut way for recursion to terminate. Not mentioned in the article was how recursion relies on the stack for parameter passing.

    One problem with recursive functions is that even if you have a proper terminating case, it's still possible to overflow the stack if the routine calls itself too many times. This is less of a problem with modern computers due to the much larger RAM available for stack storage.
