# Functional Programming: Using Lazy Evaluation to Write More Modular Code

Many articles on Functional Programming claim that it makes code simpler and more maintainable, but the examples provided tend to either be too abstract or too difficult to understand if you don’t already know Functional concepts. This post provides a simple, specific example of how the concept of Lazy Evaluation lets you separate concerns and create modular code in a way that is very hard to do otherwise.

One common task in scientific computing is calculating numerical approximations to quantities that are hard to solve for exactly, such as complex integrals or square roots. This involves two related concerns: generating the next approximation, and figuring out when to stop. A standard imperative solution has to entangle these concerns in one function. For example, one way to approximate an integral is to generate successive Riemann sums until two approximations in a row are sufficiently close to each other:

*Update: I’ve cleaned up the code below, removing a few bugs and adding some tests*

```python
def square(x):
    return x * x

def cube(x):
    return x ** 3

def leftRiemannSum(f, start, end, steps):
    stepLength = float(end - start) / steps
    estimate = 0
    for h in range(steps):
        estimate += f(start + h * stepLength) * stepLength
    return estimate

assert leftRiemannSum(square, -1, 1, 1) == square(-1)*2 == 2
assert leftRiemannSum(square, 3, 7, 5) == square(3)*.8 + square(3.8)*.8 + square(4.6)*.8 + square(5.4)*.8 + square(6.2)*.8 == 89.76
assert leftRiemannSum(cube, -8, -2, 3) == cube(-8)*2 + cube(-6)*2 + cube(-4)*2 == -1584

def integrate(f, start, end, epsilon):
    steps = 1
    difference = abs(epsilon) + 1  # Ensure that difference is greater than epsilon to start
    currentEstimate = leftRiemannSum(f, start, end, steps)
    while abs(difference) > epsilon:
        steps += 1
        previousEstimate = currentEstimate
        currentEstimate = leftRiemannSum(f, start, end, steps)
        difference = currentEstimate - previousEstimate
    return currentEstimate

# Test approximations are reasonably close to some known values
assert integrate(cube, 1, 4, .001) - 255.0/4.0 < 1
assert integrate(square, -6, 3, .001) - 81 < 1

# Test approximations improve as epsilon shrinks
assert abs(integrate(cube, 1, 4, .001) - 255.0/4.0) > abs(integrate(cube, 1, 4, .0001) - 255.0/4.0)
assert abs(integrate(square, -6, 3, .001) - 81) > abs(integrate(square, -6, 3, .0001) - 81)

# Correct answer is 0.3333333333333
integrate(square, 0, 1, .001)
>> 0.31190926275992437
integrate(square, 0, 1, .0001)
>> 0.3263241420353105
```

The integrate function above intertwines the code to generate the next approximation and figure out when to stop, which results in several problems:

- **Testability:** we can't test the stopping criterion and the approximation separately; if either one is broken, the code produces incorrect results, and finding the source of the problem is more difficult.
- **Extensibility:** if we want to change the stopping criterion to look at more elements, we have to rewrite a significant part of the implementation. If we want to allow an arbitrary stopping criterion, we need to keep every estimate in memory until the code finishes executing, since we don't know how many estimates will be needed.
- **Modification:** similarly, changing our integral implementation to a more efficient algorithm such as Simpson's rule would also require changing the stopping criterion and making sure nothing breaks.
- **Reusability:** we can't easily reuse the stopping criterion for other approximation problems, such as solving differential equations or computing square roots.

So how can we solve these problems? Luckily, Python has a feature called generators, which lets you create a list-like object that produces its values only as they're needed (for more on generators, see the Python documentation). This lets us rewrite the **integrate** function to separate concerns and produce more modular code:

```python
def integrationGenerator(f, start, end):
    steps = 1
    while True:
        yield leftRiemannSum(f, start, end, steps)
        steps += 1

def approximateUntilEstimatesWithin(generator, threshold):
    previousEstimate = next(generator)
    currentEstimate = next(generator)
    while abs(currentEstimate - previousEstimate) > threshold:
        previousEstimate = currentEstimate
        currentEstimate = next(generator)
    return currentEstimate

approximateUntilEstimatesWithin(integrationGenerator(square, 0, 1), .001)
>> 0.31190926275992437
approximateUntilEstimatesWithin(integrationGenerator(square, 0, 1), .0001)
>> 0.3263241420353105
```

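To make the testability claim concrete, here is one way each piece could now be exercised on its own (a sketch of my own, not tests from the post; the definitions are repeated so the snippet runs standalone). The generator is checked against direct calls to **leftRiemannSum**, and the stopping criterion is checked against a hand-built sequence with no integration involved at all:

```python
import itertools

def square(x):
    return x * x

def leftRiemannSum(f, start, end, steps):
    stepLength = float(end - start) / steps
    estimate = 0
    for h in range(steps):
        estimate += f(start + h * stepLength) * stepLength
    return estimate

def integrationGenerator(f, start, end):
    steps = 1
    while True:
        yield leftRiemannSum(f, start, end, steps)
        steps += 1

def approximateUntilEstimatesWithin(generator, threshold):
    previousEstimate = next(generator)
    currentEstimate = next(generator)
    while abs(currentEstimate - previousEstimate) > threshold:
        previousEstimate = currentEstimate
        currentEstimate = next(generator)
    return currentEstimate

# Test the generator in isolation: its first few values should match
# direct calls to leftRiemannSum, one per step count.
firstThree = list(itertools.islice(integrationGenerator(square, 0, 1), 3))
assert firstThree == [leftRiemannSum(square, 0, 1, s) for s in (1, 2, 3)]

# Test the stopping criterion in isolation with a hand-built sequence:
# it should stop at the first value within .5 of its predecessor.
assert approximateUntilEstimatesWithin(iter([10.0, 8.0, 7.0, 6.9, 6.89]), .5) == 6.9
```

Because `approximateUntilEstimatesWithin` accepts any iterator, the criterion can be tested on a five-element list just as easily as on an infinite stream of Riemann sums.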
This version solves all the problems listed above:

- We can write separate unit tests for **integrationGenerator** and **approximateUntilEstimatesWithin**, which makes it easier to ensure correctness and debug any issues.
- We can change **approximateUntilEstimatesWithin** to a new criterion without needing to change anything about **integrationGenerator**: whether we want to compare two estimates in a row or fifty, **integrationGenerator** needs no changes.
- The integral implementation can be changed without needing to look at or consider the stopping criterion.
- We can reuse **approximateUntilEstimatesWithin** on other problems. For example, here's Newton's method for square root approximations:

```python
def squareRootApproximations(n):
    estimate = 1.0
    while True:
        estimate = .5 * (n/estimate + estimate)
        yield estimate

approximateUntilEstimatesWithin(squareRootApproximations(2), .001)
>> 1.4142135623746899
math.sqrt(2)
>> 1.4142135623730951
```
- In particular, if we had 5 different approximations and 5 different stopping criteria, the new version lets us write 5 + 5 = 10 functions, while the previous version would have required 5*5 = 25 functions.
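To illustrate that "5 + 5" point, here is one possible alternative stopping criterion (the name **approximateUntilStable** is my own, not from the post): stop once a fixed number of consecutive estimates all agree within the threshold. It pulls values only through `next(...)`, so it works unchanged with any generator of estimates, including the square-root one:

```python
def approximateUntilStable(generator, threshold, run=3):
    # Keep a sliding window of the last `run` estimates; stop when the
    # spread (max - min) of the window falls within the threshold.
    window = [next(generator) for _ in range(run)]
    while max(window) - min(window) > threshold:
        window.pop(0)
        window.append(next(generator))
    return window[-1]

def squareRootApproximations(n):
    # Newton's method for square roots, as an infinite stream of estimates.
    estimate = 1.0
    while True:
        estimate = .5 * (n / estimate + estimate)
        yield estimate

result = approximateUntilStable(squareRootApproximations(2), .001)
```

Neither generator has to know this criterion exists: adding it to the codebase is one new function, not one new function per approximation problem.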

The interesting thing about the separation above is that it is **only** possible with lazy evaluation: without it, Python would attempt to evaluate an infinite number of successive approximations up front and never terminate.
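One way to see the laziness at work, stripped down to a plain counter rather than the integration generator: the generator below describes infinitely many values, but nothing is computed until a consumer asks, so taking a finite slice is perfectly safe.

```python
import itertools

def naturals():
    # An "infinite list": each value is computed only when requested.
    n = 1
    while True:
        yield n
        n += 1

# Building a strict list of all naturals would never finish; lazily,
# we can take just the first five and stop.
firstFive = list(itertools.islice(naturals(), 5))
```

The stopping criterion in this post plays the same role as `islice` here: it is the consumer that decides how much of the infinite sequence ever gets evaluated.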

This post was inspired by John Hughes’ Why Functional Programming Matters. If you liked it, check out the original paper!

**Categories:** Functional Programming, Productivity, Programming
