Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Thursday, March 29, 2012

#23 - Premature Optimization Part 2: Measure Twice Cut Once

Often when considering Knuth's "premature optimization" quote you will be tempted to take it as a warning to never optimize code. Knuth's position wasn't that optimizations are evil, just that doing them before you have any data or analysis to back them up is evil 97% of the time. What we thought was a quick optimization to reduce the number of modulus operations may actually backfire in many situations.

First though, we need to think through some of the numbers in the typical example of FizzBuzz. In that calculation there are potentially four modulus operations performed for each number. Multiply that by one hundred numbers and we get an upper bound of four hundred potential operations. But there are four different places where we may exit the chain of tests, and most of them result in fewer modulus operations. Consider the first exit point, numbers divisible by both 3 and 5: in that case we exit after only half as many operations. Multiply that by one hundred and we have an ideal lower bound of two hundred. So the total number of operations will be between two hundred and four hundred.

Normally for a large program we cannot perfectly count the possible paths through the main calculation loop. Sometimes the choices are based on things external to the loop, like how much disk space we have left or how much data has been broadcast over the network. The next problem is usually that the problem space is so incredibly large there is no way we can possibly enumerate the entire solution. Usually a software engineer would have to resort to statistics to model what a typical scenario may be. For FizzBuzz the loop has no external considerations, and the problem space is extraordinarily small for today's machines: only 100 iterations. For comparison your typical video game image has hundreds of calculations per pixel, and there are nearly a million pixels on a screen. And at nearly 60 images per second, one hundred iterations is tiny.

So for each of the steps of the loop we need to count how many times each is triggered. There are six cases where the number is divisible by both three and five. There are twenty cases where the number is divisible by five, but six have already been counted, leaving us with fourteen. There are thirty-three numbers between one and one hundred divisible by three; discounting the six that are also divisible by five leaves us with twenty-seven. The rest of the numbers account for the remaining fifty-three.
For each step a different number of operations is performed. For the first case there are two in the unoptimized form; for the second, three; and for the third and fourth there are four.

6*2 + 14*3 + 27*4 + 53*4 = 374 operations.

Considering the case where we changed the first modulus test to fifteen, we drop one modulus operation from each number, yielding one hundred fewer operations for a total of 274. That trims about 27% of the operations, but is still 37% above the ideal.
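The arithmetic above can be checked mechanically. This is a hypothetical counting harness, not from the original post, that tallies modulus operations per number under the post's own bookkeeping: branch order FizzBuzz, Buzz, Fizz, with both halves of the combined divisible-by-3-and-5 test always counted.

```java
public class OperationCounter {
    public static void main(String[] args) {
        int naive = 0, fifteenFirst = 0, ideal = 0;
        for (int i = 1; i <= 100; i++) {
            // Naive: combined test (i % 3 and i % 5, both counted),
            // then i % 5, then i % 3, stopping at the first match.
            if (i % 15 == 0)      naive += 2;  // FizzBuzz: combined test only
            else if (i % 5 == 0)  naive += 3;  // Buzz: 2 + 1
            else if (i % 3 == 0)  naive += 4;  // Fizz: 2 + 1 + 1
            else                  naive += 4;  // plain number: all four

            // Fifteen-first: the combined test collapses to one modulus.
            if (i % 15 == 0)      fifteenFirst += 1;
            else if (i % 5 == 0)  fifteenFirst += 2;
            else if (i % 3 == 0)  fifteenFirst += 3;
            else                  fifteenFirst += 3;

            // Ideal: i % 3 and i % 5 computed exactly once each.
            ideal += 2;
        }
        System.out.println(naive + " " + fifteenFirst + " " + ideal);
    }
}
```

Running it prints the three totals from the text: 374, 274, and 200.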

The irony of this premature optimization is that if we let the compiler and the runtime do the optimizations and just code the initial version, the compiler would have found the optimal solution requiring 200 calculations. The Java compiler that initially compiles the code will totally miss the optimization. But Java has two compilers: one that runs directly against the source code, and one that runs as late as possible, or Just In Time (the compiler can run even later than that, but the how and why are topics that are much too advanced for simple FizzBuzz solutions).

- |- -| - +

In this implementation of FizzBuzz we will recreate one possible compiler optimization for FizzBuzz, called "common subexpression elimination." Don't worry, this won't be on the test.

public class TypicalCommonSubexpression {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      boolean fizz = i % 3 == 0;
      boolean buzz = i % 5 == 0;
      if (fizz && buzz) {
        System.out.println("FizzBuzz");
      } else if (fizz) {
        System.out.println("Fizz");
      } else if (buzz) {
        System.out.println("Buzz");
      } else {
        System.out.println(i);
      }
    }
  }
}

When Java's second, "JIT" compiler does run, it looks at every conceivable permutation of the code and tries to remove redundancies and re-order operations to make the microchip happy and zippy. One of the checks it does is looking for things that can never change but are calculated multiple times. When it finds one of those cases it will remember the result of the calculation and use it again later. In this case, had we kept the original program, the compiler would have seen that we calculate the modulus of 3 and the modulus of 5 more than once, stored the results, and reused them where the repeat calculations would have been.

This program always executes both modulus operations per cycle through the loop, resulting in a total of 200 operations, which was my ideal minimum. By taking a clever shortcut early in the coding process and replacing the first test with a modulus of fifteen the developer would have actually prevented the compiler from doing an optimization that would have resulted in an optimal solution. The optimization made at the last possible moment would have been the correct one.

Tuesday, March 27, 2012

#22 - Premature Optimization Part 1: Math in my Head

If you have spent any time programming professionally you have undoubtedly heard the phrase "premature optimization is the root of all evil." The quote in all likelihood originated with a 1974 paper written by Donald Knuth. But that quote is somewhat taken out of context: he isn't saying that all optimizations are evil, just 97% of them. And in those remaining 3% of cases, optimization is essential.

Of course that may need to be taken with a grain of salt, because this was a paper in which he was rehabilitating the use of the "go to" statement. While the goto command was a mainstay of 70's and 80's programming languages (all vestiges of earlier work), it fell out of favor with the then-current fashion called "structured programming," where every operation fits into a neat nested structure. With enough looping control structures like "for," "while," and "do ... while" and the ability to continue or stop loops higher up in the structure, there is no demonstrable need for a goto command except in the most pathological cases. And even those cases can be solved without a single goto, as long as readability, performance, and maintainability are not concerns.

This, however, does not stop programmers from optimizing their code. For every programmer who dutifully writes clean expressive code there is at least one other programmer who is trying to eke out every possible cycle of performance whenever they are at the keyboard.

To put it in Vegas terms 97% sounds like a surefire bet, as long as you are making only one bet. But if instead we said that 97% of all stove tops are at room temperature you wouldn't go out and habitually put your hand on one every time you see one, because eventually you will be burned.

So the typical difference here between the first and the second programmer is that the second programmer got burned badly by a performance issue.  So their response is that they are always looking to improve performance, and they are implementing those findings immediately. What keeps the first programmer in line is the jarring memory of the time they were served up a plate of spaghetti code that was optimized by the second programmer. This isn't to say that the first programmer never optimizes their code.  The first programmer optimizes where it is needed, which is rarely where you think it is.

When approaching FizzBuzz with an eye for optimizations there are several that you can do, and several that you have to do. The first optimization you need to make is to re-order the tests, essentially reversing the order the problem statement gives them in. You don't want to handle the general case of a problem until you are sure that you do not need to handle any of the special cases. The way the problem is stated, they are all special cases.

- |- -| - +

This implementation of FizzBuzz presents a presumptive optimization, collapsing the two modulus operations for the FizzBuzz case into a single modulus operation.

public class ThreeModuluses {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      if (i % 15 == 0) {
        System.out.println("FizzBuzz");
      } else if (i % 5 == 0) {
        System.out.println("Buzz");
      } else if (i % 3 == 0) {
        System.out.println("Fizz");
      } else {
        System.out.println(i);
      }
    }
  }
}

If we do a little bit of math in our heads we can reason that any number that is divisible by both 3 and 5 is also divisible by 15, so rather than checking 3 and 5 twice each, we can check for 15 once. We still have to check for 3 and 5 later on, but now some numbers only need a single modulus operation.
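That bit of head math is easy to confirm over the whole range. This is a quick sanity check, not from the original post, asserting that divisibility by 15 agrees with divisibility by both 3 and 5 for every number we care about.

```java
public class FifteenCheck {
    public static void main(String[] args) {
        // 3 and 5 share no factors, so i % 15 == 0 should agree with
        // (i % 3 == 0 && i % 5 == 0) for every i in the FizzBuzz range.
        boolean allAgree = true;
        for (int i = 1; i <= 100; i++) {
            if ((i % 15 == 0) != (i % 3 == 0 && i % 5 == 0)) {
                allAgree = false;
            }
        }
        System.out.println(allAgree);
    }
}
```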

Where we were doing possibly four modulus operations in the unoptimized code, we are now doing at most three, so we can say we trimmed 25% of the wasted CPU cycles in the worst case. This isn't the best optimization out there, but it is a start.

Thursday, March 22, 2012

Fizz 21 - Duck Duck Mallard

It is notable in and of itself that Kotlin has language level support for tuples.  But that alone doesn't make it interesting.  What is interesting is how they are used in the when expression's pattern matching capabilities.

fun main(args : Array<String>) {
  for (i in 1..100) {
    println (
      when ( #(i % 3, i % 5) ) {
        #(0, 0) -> "FizzBuzz";
        is #(0, *) -> "Fizz";
        is #(*, 0) -> "Buzz";
        else -> i;
      }
    )
  }
}

The beginning of this solution is very pedestrian and has in fact all been done in some fashion before on this blog.  We have a loop, we use when as an expression for println, we use a tuple as the when argument, and a zero-zero tuple is a FizzBuzz.  Nothing special.

Things change at lines 6 and 7.  Intermixed with the standard switch handling are pattern matching branches.  Pattern matching branches start with the keyword is and allow for matching on types as well as tuples (negation with the !is keyword).

Using a pattern matching keyword allows us to use the * wildcard in the tuple in the branch condition.  Using a wildcard tells the compiler that (surprise!) we don't care what the value for that part of the tuple is.  So instead of having to enumerate out the full set of non-matching remainders we can just say * and not care.

This also points out the importance of order of execution.  You will note that the FizzBuzz case also matches the Fizz and Buzz branch patterns.  But since it is listed before those branch conditions it is evaluated first.
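The order dependence is easy to see in a tiny sketch (plain Java here, since the tuple syntax above was experimental): a Fizz test listed ahead of the FizzBuzz test claims every multiple of fifteen for itself.

```java
public class BranchOrder {
    public static void main(String[] args) {
        int i = 15;
        // With the Fizz test first, 15 never reaches the FizzBuzz test...
        String wrong = (i % 3 == 0) ? "Fizz"
                     : (i % 15 == 0) ? "FizzBuzz" : "?";
        // ...so the most specific case must be evaluated first.
        String right = (i % 15 == 0) ? "FizzBuzz"
                     : (i % 3 == 0) ? "Fizz" : "?";
        System.out.println(wrong + " " + right);
    }
}
```

The first expression answers "Fizz" for 15; the second, correctly, "FizzBuzz".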

There are also some more interesting syntaxes planned with pattern matching for things such as Binding Patterns and Decomposer Functions but those are not implemented (yet).

Tuesday, March 20, 2012

Buzz 20 - In Some Particular Order

One of the features that Kotlin has borrowed from other languages is the notion of tuples.  For lack of a more precise and rigorous definition tuples are (sometimes) ordered lists of stuff.  You may say that your language already has ordered lists of stuff.  But this is special stuff... it's got.. specialness.

fun Int.isBuzz() : Boolean {
    return this % 5 == 0;
}

fun main(args : Array<String>) {
    for (i in 1..100) {
        println( when (#(i % 3, i.isBuzz())) {
            #(0, true) -> "FizzBuzz";
            #(0, false) -> "Fizz";
            #(1, true), #(2, true) -> "Buzz";
            else -> i;
        });
    }
}

The types of specialness that tuples have are threefold: syntax, language integration, and generics.

First, Kotlin gives tuples a special language syntax of their own.  A hash followed by a parenthesized list of stuff constitutes a tuple.  Tuples are usually significant in their order, but if you prefix a value with a label then that value gets a name and can be accessed by that name.  It becomes a very lightweight way to define data structures that are only needed locally.

Second, there is a good deal of language integration with tuples.  The void equivalent return type Unit is actually a Tuple0, which is a tuple with zero members.  And Kotlin uses the tuple mechanism to allow a function to return multiple values.  Even though the JVM and JavaScript VMs only allow a single return value, Kotlin does an end run by making the return a tuple.  While you could do this yourself with Object arrays and one-off classes, Kotlin integrates this practice so you can take only your favorite parts of the return in your code rather than swallowing the whole horse pill.
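The "do this yourself" route mentioned above looks something like this hypothetical plain-Java sketch, with a one-off carrier class standing in for the tuple (all names here are my own invention):

```java
public class MultipleReturn {
    // A one-off carrier class standing in for a tuple return.
    static final class FizzBuzzParts {
        final boolean fizz;
        final boolean buzz;
        FizzBuzzParts(boolean fizz, boolean buzz) {
            this.fizz = fizz;
            this.buzz = buzz;
        }
    }

    // "Returns" two values by wrapping them in the carrier.
    static FizzBuzzParts classify(int i) {
        return new FizzBuzzParts(i % 3 == 0, i % 5 == 0);
    }

    public static void main(String[] args) {
        FizzBuzzParts p = classify(15);
        // The caller picks out only the parts it wants.
        System.out.println(p.fizz + " " + p.buzz);
    }
}
```

The difference is that Kotlin builds this boilerplate into the language, so the carrier class never has to be written by hand.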

Finally, there is a good amount of generics integration with the tuple.  Since each position can be a different type Kotlin integrates the type of the position into the type signature of the particular tuple you are using.  This is more important when you consider that Kotlin generics are fully reifiable.

Of course, I haven't come up with a good way to demonstrate reifiable generics in a FizzBuzz solution.  And until someone does we should not expect the typical developer to get it right.

Thursday, March 15, 2012

#19 - But All My Friends Have Trampolines

Let's step away from Kotlin for a minute and revisit Groovy. Let's also revisit an alternate solution to the looping problem: recursion. To really demonstrate this feature, I am going to need to show you two solutions. First, the recursive Groovy solution.

def fb
fb = {i, s = "" ->
    s = ((i%3)
        ? ((i%5)?i:"Buzz")
        : (i%5?"Fizz":"FizzBuzz")) + "\n" + s
    i <= 1 ? s : fb(i - 1, s)
}

fb(100)

This is a very simple recursive solution.  The base case happens on line 6 if i is one or less. Lines 3-5 determine the fizziness and buzziness, and line 9 sets the limit with the first call in. It works, trust me. In fact, try it out! You can download Groovy if you don't already have it, or you can try it out online.

Really. Go try it. Right now. I'll wait. It's important. No, seriously. Do it.

Tried it already? Great, now go change line 9 so that it will calculate FizzBuzz from one to one thousand. Did it work? Great, let's move on. Wait, what? It didn't work? Something about a StackOverflowError? Awesome, you did it right. Let's continue the discussion.

The principal problem with deeply recursive solutions is the sheer amount of bookkeeping needed to keep the recursion straight.  Since this is a deep rather than a broad recursion each and every step in the iteration takes up space on the stack.  Each iterative step costs memory, and memory is limited unlike clock cycles.

So how do we fix this? Install a trampoline, invite the neighbor kids over, and see who you can chain-bounce onto the roof.

def fb
fb = {i, s = "" ->
    s = ((i%3)
        ? ((i%5)?i:"Buzz")
        : (i%5?"Fizz":"FizzBuzz")) + "\n" + s
    i <= 1 ? s : fb.trampoline(i - 1, s)
}.trampoline()

fb(100)

This is the actual solution for this week.  With the simple addition of two calls to the trampoline method on lines 6 and 7, this recursive solution now uses a form of a technique called tail recursion, where instead of layering on another level of memory for the stack we take the existing layer, wipe it clean, and jump back to the beginning of the method without a subroutine call. What was once a gosub is now a goto. And we are no longer beholden to the size of the stack, but instead to the size of a String.  This isn't appropriate in all situations; you need to be sure you are not preserving information from one recursive call to the next.
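What Groovy's trampoline() does behind the curtain can be sketched in plain Java: each step returns either a finished result or a thunk for the next step, and a driver loop bounces until it gets a value. This is a hypothetical sketch of the mechanism, with names of my own choosing, not Groovy's actual implementation.

```java
import java.util.function.Supplier;

public class Trampoline {
    // One bounce: either a finished result or a deferred next step.
    static final class Bounce {
        final String result;          // non-null when the recursion is done
        final Supplier<Bounce> next;  // non-null when another bounce remains
        Bounce(String result, Supplier<Bounce> next) {
            this.result = result;
            this.next = next;
        }
    }

    // One step of the Groovy closure above: prepend this number's line,
    // then hand back a thunk for the next step instead of recursing.
    static Bounce fb(int i, String s) {
        String line = (i % 3 != 0)
                ? ((i % 5 != 0) ? Integer.toString(i) : "Buzz")
                : ((i % 5 != 0) ? "Fizz" : "FizzBuzz");
        String acc = line + "\n" + s;
        return i <= 1
                ? new Bounce(acc, null)
                : new Bounce(null, () -> fb(i - 1, acc));
    }

    public static void main(String[] args) {
        // The driver loop: stack depth stays constant no matter how many
        // bounces happen, which is the whole point of the trampoline.
        Bounce b = fb(100, "");
        while (b.result == null) {
            b = b.next.get();
        }
        System.out.print(b.result);
    }
}
```

Because fb returns a thunk instead of calling itself, the only recursion-shaped cost left is heap allocation for the thunks, not stack frames.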

The thing about trampolines is that trying to chain-jump onto the roof is how bones tend to get broken.  Which is why we don't have a trampoline, even though all of my children's friends do.

[code samples edited for line width]

Tuesday, March 13, 2012

Fizz 18 - Getting the Monkey Off of Your Patch

When talking about functions and methods there really isn't much difference between the two at first.  The real meaning and value don't start to show up until you add esoteric features like class inheritance, subtyping, polymorphism, late binding, multiple dispatch, meta-object protocols, category theory, and on and on and on.  That is the territory where PhDs are minted.  Without all that messy class theory the main differences between a method and a function are access to protected fields and an implicit this parameter.  Methods are also typically declared inside the structure declaring the class.  And methods for a particular class are almost always declared by the original implementer of the class.  If you need a new method in a class, you nearly always have to sub-class it.  Nearly.

fun Int.isFizz() : Boolean {
    return this % 3 == 0
}

fun Int.isBuzz() : Boolean {
    return this % 5 == 0
}

fun main(args : Array<String>) {
    for (i:Int in 1..100) {
        when {
            i.isFizz() && i.isBuzz() -> println("FizzBuzz")
            i.isFizz() -> println("Fizz")
            i.isBuzz() -> println("Buzz")
            else -> println(i)
        }
    }
}

Kotlin lets you, after a fashion, declare methods that look like they belong in a class.

On line 10 we declare the looping variable i to be of type Int. This is a class provided by Kotlin and is one that is not open to extension. Remember, all classes are closed by default in Kotlin. But look at lines 12, 13, and 14. We are calling an isFizz and isBuzz method in the Int class. Surely they are not just juicing up the standard library to look good in a standard demo, are they?

It's not the Kotlin team that is juicing things up. It is the program itself. Lines 1-7 declare extension functions on the Int class. This is how we add what look like methods onto the already closed class Int. Really, these are not all that different from Python methods, except for some syntactic sugar low-carb coding where the explicit self parameter becomes an implicit this parameter, whose type is spelled out in the function's name: <class name>.<method name>.

This addition, however, does not literally add a method to the class it is extending.  The method is not subject to any late binding or multiple dispatch.  The particular method is entirely determined at compile time.  It also must either be present in the package at compile time, or explicitly imported in the file in which it is used. So a developer can write an awesome set of String extension functions, like String.invertEveryThirdWord(), and not have to involuntarily subject the world to their awesomeness by declaring them in the org.example.of.category.theory package, letting only the 1337 few who are worthy of its power bask in its glory.
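Since the extension call is entirely resolved at compile time, what the compiler emits is close to this plain-Java sketch: a static method whose first parameter is the receiver. The class and parameter names here are my own, not what kotlinc actually generates.

```java
public class ExtensionsAsStatics {
    // Roughly what `fun Int.isFizz()` compiles down to:
    // a static method taking the receiver as its first parameter.
    static boolean isFizz(int receiver) {
        return receiver % 3 == 0;
    }

    static boolean isBuzz(int receiver) {
        return receiver % 5 == 0;
    }

    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            // What reads as i.isFizz() in Kotlin becomes isFizz(i) here.
            if (isFizz(i) && isBuzz(i)) {
                System.out.println("FizzBuzz");
            } else if (isFizz(i)) {
                System.out.println("Fizz");
            } else if (isBuzz(i)) {
                System.out.println("Buzz");
            } else {
                System.out.println(i);
            }
        }
    }
}
```

That static-dispatch shape is exactly why there is no late binding to be had: the call site is fixed the moment the file compiles.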

Wednesday, March 7, 2012

#17 - Putting the Meth in Method

Functions and procedures are old hat. Those ideas were conceived of not only before I started earning a paycheck in Software, but before I myself was even conceived. Modern programmers use more current techniques like Object Oriented programming. A technique that was conceived of over a decade after functions and procedures were, and was still, sadly, before I was conceived. It's not modern, it's just the modern fashion.

class FizzBuzzer(max: Int) {
    var i:Int = 0
    val max:Int = max

    fun hasNext() : Boolean {
        return i < max
    }

    fun nextValue() : String {
        i++
        return when (i%15) {
            0 -> "FizzBuzz"
            5, 10 -> "Buzz"
            3, 6, 9, 12 -> "Fizz"
            else -> i.toString()
        }
    }
}

fun main(args : Array<String>) {
    val fizzer : FizzBuzzer = FizzBuzzer(100)
    while (fizzer.hasNext()) {
        println(fizzer.nextValue())
    }
}

The implementation for this solution is that of an iterator. But we won't dwell on that since it isn't what we are looking at in this post. Similarly, to talk about methods we have to embed them in a class, which is something we will just wave our hands over now and come back to later, because the true power of OO actually can be used to solve FizzBuzz. So I will wave my hands over that and move on to what you can't get enough of: methods.
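For what it's worth, the hasNext/nextValue pair above maps almost one-to-one onto Java's standard Iterator protocol. This is a hypothetical plain-Java rendering, not part of the original post:

```java
import java.util.Iterator;

// The FizzBuzzer class above, fitted onto java.util.Iterator.
public class FizzBuzzIterator implements Iterator<String> {
    private int i = 0;
    private final int max;

    FizzBuzzIterator(int max) {
        this.max = max;
    }

    @Override
    public boolean hasNext() {
        return i < max;
    }

    @Override
    public String next() {
        i++;
        // Same remainder table as the Kotlin when expression.
        switch (i % 15) {
            case 0: return "FizzBuzz";
            case 5: case 10: return "Buzz";
            case 3: case 6: case 9: case 12: return "Fizz";
            default: return Integer.toString(i);
        }
    }

    public static void main(String[] args) {
        Iterator<String> fizzer = new FizzBuzzIterator(100);
        while (fizzer.hasNext()) {
            System.out.println(fizzer.next());
        }
    }
}
```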

The first thing you should notice about Kotlin classes is that the constructors are for the most part integrated into the class body. There is a notion of a primary constructor which is in place to discourage the proliferation of constructors. When combined with default parameters it handles nearly all the reasons for constructor proliferation, leaving for the most part only weird stuff like copy and pseudo-copy constructors. It is unclear in the current docs if there is any allowance for other constructors. I am not sure if this is a bad thing.

Next, we look at the member variables. In Kotlin all fields are properties by default. You can change this with a private modifier. Also, because fields are properties by default each property either must be initialized or have a get defined as part of the declaration. What is good practice in C (undefined fields can be practically random and a security risk), and implied in Java (zero, false, or null as appropriate) is compulsory in Kotlin.

When it comes to actually instantiating an instance of the object on line 21 we see that the new operator is not required.  In fact, it is not even allowed! Calls to create objects look just like regular function calls. While you may argue that this limits the namespace of functions, I counter by saying this is a good thing, since functions named like objects usually create bugs. Add in the practice that class names generally get the German Noun treatment, and distinguishing a constructor call from a method call should be trivial.

Finally, an implementation note. It may look like the main function doesn't belong to a class. But because of the structure of the JVM it does belong in a class, as a static method of that class. The class is (currently) named namespace and it is declared in the package of the code it is written in.

Be careful when picking up Object Oriented programming. You may think you can stop anytime you want. But you won't ever want to. And when you try to stop, your language won't let you.  You will be coding methods for the rest of your life.  Even languages like Clojure are fooling you.  It's all methods all the time.