My experience with Electrical Engineers in academia is that they usually view Computer Science in terms of circuits and the details of how things work at the most tangible level. One hazard of teaching with this approach is that students often end up with some wrong core concepts of Computer Science, which in turn limit their abstract-level thinking at large. The worst part is that you don't even know your concepts are incorrect, yet you are very confident about them.

My undergrad CS program at FAST-NUCES was heavily dominated by dedicated and competent Electrical Engineers. That is why my classmates and I have really good C/C++ programming skills, solid concepts about pointers, an in-depth understanding of how microprocessors work and how Operating System implementations take advantage of them at the nitty-gritty level, and strong knowledge of other implementation-specific things. On the downside, I feel such students are left with gaps when it comes to the mathematical face of Computer Science.

Perhaps the most commonly misunderstood such topic is recursion. For me, it was a shock to learn, about two years after my graduation, that what I knew about recursion was quite wrong. I, along with many others, was taught (and students are still being taught) that recursion is about a function calling itself: when a function calls itself, a separate call is made, the parameters and some other info are placed on the stack, and a new 'instance' of the function takes over until it returns. If this process goes on indefinitely, the program ends with an overflowed stack, so there must be a base or terminating condition. The instructors would give us assignments in which recursion had to be "removed" by explicitly storing state information on a stack so that a new call to the same function was not made. The emphasis of these assignments was that although "recursive" code is simple, recursion has a huge overhead which should be removed in most cases.
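Such a "recursion removal" assignment might look like the following sketch (written here in Python for brevity; the actual assignments were in C/C++, and the function name is mine): the call stack is replaced by an explicit stack of pending work.

```python
# A sketch of the kind of "recursion removal" those assignments asked for:
# instead of letting the call stack hold the deferred multiplications,
# we store the pending multiplicands on an explicit stack ourselves.
def fact_explicit_stack(n):
    stack = []
    # Push what each recursive call would have deferred.
    while n > 0:
        stack.append(n)
        n -= 1
    # Unwind the stack, performing the deferred multiplications.
    result = 1
    while stack:
        result *= stack.pop()
    return result
```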

This is not recursion! This way of looking at recursion *might* be OK
for a low-level C/C++ programmer who wants to build his career coding
micro-controllers for the rest of his life. But these concepts of
recursion are certainly disastrous for a Computer Science Major who
wants to truly appreciate the beauty of algorithms.

I was lucky to learn Lisp after my graduation. It gave me enough abstraction from low-level nitty-gritty controls such as memory management and pointers, as well as from complex syntax, to let me focus on the problem itself rather than on the language.

What I learnt then is that recursion is solving a problem by deferring part of the computation until a subproblem is solved, then reconciling the subproblem's solution with a simpler computation. The subproblem itself is solved in the same manner, until it becomes so simple that its solution is trivial. In mathematics, this is the inductive step.

This sounds much like the former view, but its implication is quite different. The second definition implies that a function calling itself is not necessarily forming a recursive solution. For example, the following two implementations of factorial both call themselves, but the first one is recursive while the second one is iterative:


```scheme
; Recursive
(define fact
  (lambda (n)
    (if (= n 0)
        1
        (* n (fact (- n 1))))))

; Iterative
(define facti
  (lambda (n c)
    (if (= n 0)
        c
        (facti (- n 1) (* c n)))))
```

The reason facti is not recursive is that it is not deferring any computation: the running product is carried along in the accumulator c instead of waiting on the stack.
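The difference becomes visible when you trace the two processes by hand (a sketch of the evaluation steps):

```
fact(3) → 3 * fact(2)
        → 3 * (2 * fact(1))
        → 3 * (2 * (1 * fact(0)))
        → 3 * (2 * (1 * 1))
        → 6                       ; deferred multiplications pile up

facti(3, 1) → facti(2, 3)
            → facti(1, 6)
            → facti(0, 6)
            → 6                   ; nothing is deferred; all state is in the accumulator
```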

The equivalent implementations in Python would be:

```python
# Recursive
def fact(n):
    if n == 0:
        return 1
    return n * fact(n - 1)

# Iterative
def facti(n, c=1):
    if n == 0:
        return c
    return facti(n - 1, c * n)
```

Conversely, a function that does not call itself may still be recursive. For example, a version that stores different states on an explicit stack to avoid calling itself is still recursive!
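For contrast, here is a sketch of a genuinely iterative factorial (the function name is mine): no self-call, and, crucially, no stack of deferred work of any kind, only an accumulator, exactly like facti above.

```python
# A genuinely iterative factorial: a plain loop with an accumulator.
# There is no deferred computation here, neither on the call stack
# nor on an explicit one, so this is iteration in the true sense.
def fact_loop(n):
    c = 1
    while n > 0:
        c *= n
        n -= 1
    return c
```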

PS: If you still don't get what recursion is, see "Recursion, Misunderstood" by Sharjeel Ahmed Qureshi.