(* Blah blah blah, because new programmers are taught the magic of recursion and so they think recursively from an early start without being educated much on why recursion can be bad and when and how they should avoid it.) |
Um? How exactly do you figure that? In fact, I'd think most new programmers would tend toward an iterative approach first because they only look at a single instance for each method's implementation (a very bad approach to design patterns and internal architecture).
In addition, requinix, your approach processes everything in order, going from all top-level elements to all second-level elements, and so on; however, it never retains the current level/generation of each element. So if someone wanted to use this algorithm to process every element of a tree except for the items in the second level/generation, your approach would have no way to determine whether or not to process certain elements (unless each element had its own "level" or "generation" property built in, which isn't likely in most data structures).
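To make that concrete, here is a minimal Python sketch (the `Node` class and `process_except_level` name are hypothetical, just for illustration) showing that a level-order traversal can carry each node's level along with it in the queue, so a whole generation can be skipped even though the data structure stores no "level" property:

```python
from collections import deque

class Node:
    def __init__(self, value, children=()):
        self.value = value
        self.children = list(children)

def process_except_level(roots, skip_level):
    """Breadth-first traversal that pairs each queued node with its
    level, so an entire generation can be skipped without the data
    structure itself storing a 'level' property."""
    visited = []
    queue = deque((root, 0) for root in roots)
    while queue:
        node, level = queue.popleft()
        if level != skip_level:
            visited.append(node.value)
        # Children are one generation deeper than their parent.
        queue.extend((child, level + 1) for child in node.children)
    return visited
```

The point is only that the level has to be tracked by the traversal itself; nothing about the elements needs to change.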
In addition, if any sort of fail-safe error handling is introduced to such an algorithm, and an error occurs on the second top-level element, your iterative method will halt all operations while still at the top level, never traversing the children (or the children's children) of the first top-level element.
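A depth-first recursive traversal avoids that: it finishes the entire first subtree before it ever reaches the failing second element, and a per-node handler can contain the failure. A minimal sketch (using a hypothetical `(value, children)` tuple representation, not any particular data structure):

```python
def traverse(node, process, errors):
    """Depth-first recursive traversal with per-node error containment.
    A failure while processing one node is recorded in `errors`, and
    traversal of all remaining nodes continues."""
    value, children = node
    try:
        process(value)
    except Exception as exc:
        errors.append((value, exc))
    for child in children:
        traverse(child, process, errors)
```

Here an error raised on one element cannot prevent its siblings, or their descendants, from being processed.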
but then you have to convert that to something you can actually use with real data and that usually means you have to convert it to an iterative rather than recursive form |
Do you at all see the use of pseudo-code as a giant clue that maybe this isn't directly focused on one application of such an algorithm? There is no reason why using "real data" would affect this design approach. There is no feasible way to account for every piece of data when it is either unknown or ranges over an undefined dimension.
In addition, how could the majority of "real data" (given that it is structured as a tree or nested lists) have to be processed only in an iterative manner?
I have often wished for an optimizer that could automagically generate non-recursive code from a recursive source. |
But yes, I would have to agree with you completely that it would be nice for this to be available; I'd imagine the reason it hasn't become widely available is that recursion cannot be predicted given a variable or undefined length of data. Of course, in something like C#, where MSIL runtime-compilation functionality is present, it may be possible to generate the iterative version of a recursive function based on whatever data has been loaded at runtime, thereby reducing some excess overhead. Again, though, the algorithm here is taking a broader approach to such a design.
In short, recursion, though it may make a mess of stack management, yields a much clearer and more elegant design. So even if you can whip up an iterative version of a recursive function, it is much easier to start off by implementing the recursive function and reducing it as necessary. For tree-based lists (which this entire thread was posted about), it is almost always more advantageous to use recursion as a way of increasing modularity when solving complex problems over tree-structured data.
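As a sketch of that reduction (Python, again over a hypothetical `(value, children)` tuple tree): the recursive form is the natural starting point, and the iterative form is the mechanical rewrite that trades clarity for an explicit stack.

```python
def collect_recursive(node):
    # The clear, direct form: take the value, then recurse on children.
    value, children = node
    values = [value]
    for child in children:
        values.extend(collect_recursive(child))
    return values

def collect_iterative(root):
    # The same depth-first traversal, reduced by managing the call
    # stack explicitly instead of relying on recursion.
    values, stack = [], [root]
    while stack:
        value, children = stack.pop()
        values.append(value)
        stack.extend(reversed(children))  # keep left-to-right child order
    return values
```

Both produce the same ordering; the iterative version just makes the bookkeeping that the language runtime was doing for you visible.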
In addition, this whole thread is meant for conceptual analysis rather than saying that "this will not work for me with my application."