Write a program that prints the numbers from 1 to 100. But for multiples of three print “Fizz” instead of the number and for the multiples of five print “Buzz”. For numbers which are multiples of both three and five print “FizzBuzz”.

Thursday, March 29, 2012

#23 - Premature Optimization Part 2: Measure Twice Cut Once

Often when considering Knuth's "premature optimization" quote you will be tempted to take it as a warning to never optimize code. Knuth's position wasn't that optimizations are evil, just that doing them before you have any data or analysis to back them up is evil 97% of the time. What we thought may have been a quick optimization to reduce the number of modulus operations may actually backfire in many situations.

First though, we need to think through some of the numbers in the typical example of FizzBuzz. In that calculation there are potentially four modulus operations performed for each number. Multiply that by one hundred numbers and we get an upper bound of four hundred potential operations. The problem is that there are four different places where we may exit the decision chain, and most of them result in fewer modulus operations. Let's consider the first exit point, numbers that are divisible by both 3 and 5. In that case we exit the chain having done only half as many operations. Multiply that by one hundred and we have an ideal lower bound of two hundred. So the total number of operations will be between two and four hundred.

Normally for a large program we cannot perfectly count the possible paths through the main calculation loop. Sometimes the choices are based on things external to the loop, like how much disk space we have left or how much data has been broadcast over the network. The next problem is usually that the problem space is so incredibly large there is no way we can possibly enumerate the entire solution space. Usually a software engineer would have to resort to statistics to model what a typical scenario may be. For FizzBuzz the loop has no external considerations, and the problem space is extraordinarily small for today's machines: only 100 iterations. For comparison your typical video game image has hundreds of calculations per pixel, and there are nearly a million pixels on a screen. And at nearly 60 images per second, one hundred iterations is tiny.

So for each of the steps of the loop we need to count how many times each is triggered. There are six different cases where the number is divisible by both three and five. Then there are twenty cases where the number is divisible by five, but six have already been counted, leaving us with fourteen. Then there are thirty-three numbers between one and one hundred divisible by three; discounting the six that are also divisible by five leaves us with twenty-seven cases. The rest of the numbers account for fifty-three cases.
For each step a different number of operations is performed. For the first case there are two modulus operations in the unoptimized form; for the second, three; and for the third and fourth, four.

6*2 + 14*3 + 27*4 + 53*4 = 374 operations.

Considering the case where we changed the first modulus to fifteen, we drop one modulus operation from each case, yielding 100 fewer operations for a total of 274. That trims about 27% of the operations, but is still 37% away from the ideal.
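
If you want to double check that arithmetic, a small throwaway program can tally the per-case counts. This is a sketch of my own, not part of the original posts (the class name CountOps is made up), and it assumes, as the counting above does, that both operands of the && in the first condition are always evaluated.

public class CountOps {

  public static void main(String[] args) {
    int unoptimized = 0;      // counts for: (i % 3 == 0) && (i % 5 == 0), then i % 5, then i % 3
    int threeModuluses = 0;   // counts for: i % 15, then i % 5, then i % 3
    for (int i = 1; i <= 100; i++) {
      if (i % 15 == 0) {             // FizzBuzz: 6 numbers
        unoptimized += 2;
        threeModuluses += 1;
      } else if (i % 5 == 0) {       // Buzz: 14 numbers
        unoptimized += 3;
        threeModuluses += 2;
      } else if (i % 3 == 0) {       // Fizz: 27 numbers
        unoptimized += 4;
        threeModuluses += 3;
      } else {                       // plain numbers: 53 of them
        unoptimized += 4;
        threeModuluses += 3;
      }
    }
    System.out.println(unoptimized);     // prints 374
    System.out.println(threeModuluses);  // prints 274
  }
}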

The irony of this premature optimization is that if we let the compiler and the runtime do the optimizations and just code the initial version, the compiler would have found the optimal solution requiring 200 calculations. The Java compiler that initially compiles the code will totally miss the optimization. But Java has two compilers, one that runs directly against the source code and one that runs as late as possible, or Just In Time (the compiler can run even later than that, but the how and why are topics that are much too advanced for simple FizzBuzz solutions).

- |- -| - +

In this implementation of FizzBuzz we will recreate one possible compiler optimization for FizzBuzz called "Common Subexpression Elimination." Don't worry, this won't be on the test.

public class TypicalCommonSubexpression {

public static void main(String[] args) {
  for (int i = 1; i <= 100; i++) {
    boolean fizz = i % 3 == 0;
    boolean buzz = i % 5 == 0;
    if (fizz && buzz) {
      System.out.println("FizzBuzz");
    } else if (fizz) {
      System.out.println("Fizz");
    } else if (buzz) {
      System.out.println("Buzz");
    } else {
      System.out.println(i);
    }
  }
}

}

When Java's second "JIT" compiler does run it looks at every conceivable permutation of the code and tries to remove redundancies and re-order operations to make the microchip happy and zippy. One of the checks it does is looking for things that can never change, but are calculated multiple times. When it finds one of those cases it will remember the result of the calculation and use it again later. In this case, if we had kept the original program, the compiler would have seen that we calculate the modulus by 3 and by 5 more than once, stored the results, and reused them where the repeated calculations had been.

This program always executes both modulus operations per cycle through the loop, resulting in a total of 200 operations, which was my ideal minimum. By taking a clever shortcut early in the coding process and replacing the first test with a modulus of fifteen the developer would have actually prevented the compiler from doing an optimization that would have resulted in an optimal solution. The optimization made at the last possible moment would have been the correct one.

Tuesday, March 27, 2012

#22 - Premature Optimization Part 1: Math in my Head

If you have spent any time programming professionally you have undoubtedly heard the phrase "premature optimization is the root of all evil." The quote in all likelihood originated with a 1974 paper written by Donald Knuth. But that quote is often taken somewhat out of context: he isn't saying that all optimizations are evil, just 97% of them. And in those remaining 3% of cases, optimization is essential.

Of course that may need to be taken with a grain of salt, because this was a paper in which he was rehabilitating the use of the "go to" statement. While the goto command was a mainstay in 70's and 80's programming languages (all vestiges of earlier work) the command fell out of favor with the rise of the then-current fashion called "structured programming," where every operation fit into a neat nested structure. With enough looping control structures like "for," "while," and "do ... while" and the ability to continue or stop loops higher up in the structure there is no demonstrable need for a goto command except for the most pathological cases. And even those cases can be solved without a single goto as long as readability, performance, and maintainability are not concerns.

This, however, does not stop programmers from optimizing their code. For every programmer that dutifully writes clean expressive code there is at least one other programmer who is trying to eke out every possible cycle of performance whenever they are at the keyboard.

To put it in Vegas terms 97% sounds like a surefire bet, as long as you are making only one bet. But if instead we said that 97% of all stove tops are at room temperature you wouldn't go out and habitually put your hand on one every time you see one, because eventually you will be burned.

So the typical difference here between the first and the second programmer is that the second programmer got burned badly by a performance issue.  So their response is that they are always looking to improve performance, and they are implementing those findings immediately. What keeps the first programmer in line is the jarring memory of the time they were served up a plate of spaghetti code that was optimized by the second programmer. This isn't to say that the first programmer never optimizes their code.  The first programmer optimizes where it is needed, which is rarely where you think it is.

When approaching FizzBuzz with an eye for optimizations there are several that you can do, and several that you have to do. The first optimization you need to do is to re-order the consideration of the output, basically in reverse order of how the problem states it. You don't want to handle the general case of a problem until you are sure that you do not need to handle any of the special cases. The way the problem is stated, they are all special cases.

- |- -| - +

This implementation of FizzBuzz presents a presumptive optimization, collapsing the two modulus operations for the FizzBuzz case into a single modulus operation.

public class ThreeModuluses {

public static void main(String[] args) {
  for (int i = 1; i <= 100; i++) {
    if (i % 15 == 0) {
      System.out.println("FizzBuzz");
    } else if (i % 5 == 0) {
      System.out.println("Buzz");
    } else if (i % 3 == 0) {
      System.out.println("Fizz");
    } else {
      System.out.println(i);
    }
  }
}

}

If we do a little bit of math in our heads we can reason that any number divisible by both 3 and 5 is also divisible by 15, so rather than checking both 3 and 5 in the first test, we can make a single check against 15. We still have to check for 3 and 5 later on, but now some numbers only have to go through long division once.

Where we were doing possibly four modulus operations in what was considered unoptimized code, we are now doing at most 3 of those operations, so we can say we removed 25% of the wasted CPU cycles.  This isn't the best optimization out there, but it is a start.

Thursday, March 22, 2012

Fizz 21 - Duck Duck Mallard

It is notable in and of itself that Kotlin has language level support for tuples.  But that alone doesn't make it interesting.  What is interesting is how it is used in the when expression's pattern matching capabilities.

fun main(args : Array<String>) {
  for (i in 1..100) {
    println (
      when ( #(i % 3, i % 5) ) {
        #(0, 0) -> "FizzBuzz";
        is #(0, *) -> "Fizz";
        is #(*, 0) -> "Buzz";
        else -> i;
      }
    )
  }
}

The beginning of this solution is very pedestrian and has in fact all been done in some fashion before in this blog.  We have a loop, we use when as an expression for println, we use a tuple as the when argument, a zero-zero tuple is a FizzBuzz.  Nothing special.

Things change at lines 6 and 7.  Intermixed with the standard switch handling is a pattern matching branch.  Pattern matching branches start out with the keyword is and allow for matching on types as well as tuples (negation with the !is keyword).

Using a pattern matching keyword allows us to use the * wildcard in the tuple in the branch condition.  Using a wildcard tells the compiler that (surprise!) we don't care what the value for that part of the tuple is.  So instead of having to enumerate out the full set of non-matching remainders we can just say * and not care.

This also points out the importance of order of execution.  You will note that the FizzBuzz case matches both the Fizz and the Buzz branches.  But since it is listed before those branch conditions it is evaluated first.

There are also some more interesting syntaxes planned with pattern matching for things such as Binding Patterns and Decomposer Functions but those are not implemented (yet).

Tuesday, March 20, 2012

Buzz 20 - In Some Particular Order

One of the features that Kotlin has borrowed from other languages is the notion of tuples.  For lack of a more precise and rigorous definition tuples are (sometimes) ordered lists of stuff.  You may say that your language already has ordered lists of stuff.  But this is special stuff... it's got.. specialness.

fun Int.isBuzz() : Boolean {
    return this % 5 == 0;
}

fun main(args : Array<String>) {
    for (i in 1..100) {
        println( when (#(i % 3, i.isBuzz())) {
            #(0, true) -> "FizzBuzz";
            #(0 ,false) -> "Fizz";
            #(1 ,true), #(2 ,true) -> "Buzz";
            else -> i;
        });
    }
}

The specialness that tuples have is threefold.  Syntax, language integration, and generics.

First, Kotlin gives Tuples a special language syntax of their own.  A hash followed by a parenthesized list of stuff constitutes a tuple.  This is a shorthand notation for a tuple.  Tuples are usually significant in their order, but if you prefix a value with a label then that value gets a name and it can be accessed by that name.  It becomes a very lightweight way to define data structures that are only needed locally.

Second, there is a good deal of language integration with tuples.  The void equivalent return type Unit is actually a Tuple0 as well, which is a tuple with zero members.  And Kotlin uses the tuple mechanism to allow a function to return multiple values.  Even though the JVM and JavaScript VMs only allow a single return value, Kotlin does an end run by making the return a tuple.  While you could do this yourself with Object arrays and one off classes, Kotlin integrates this practice so you can take only your favorite parts of the return in your code rather than swallowing the whole horse pill.
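
For contrast, here is roughly what the "one off classes" route looks like in plain Java. This is a sketch of my own (the class name DivisibilityResult is made up, not an existing API): a whole named type whose only job is to hand two values back to the caller, which is exactly the ceremony the tuple return is meant to spare you.

// A throwaway class that exists only so a method can return two booleans at once.
public class DivisibilityResult {
  public final boolean fizz;
  public final boolean buzz;

  public DivisibilityResult(boolean fizz, boolean buzz) {
    this.fizz = fizz;
    this.buzz = buzz;
  }

  public static DivisibilityResult of(int i) {
    return new DivisibilityResult(i % 3 == 0, i % 5 == 0);
  }
}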

Finally, there is a good amount of generics integration with the tuple.  Since each position can be a different type Kotlin integrates the type of the position into the type signature of the particular tuple you are using.  This is more important when you consider that Kotlin generics are fully reifiable.

Of course, I haven't come up with a good way to demonstrate reifiable generics in a FizzBuzz solution.  And until someone does we should not expect the typical developer to get it right.

Thursday, March 15, 2012

#19 - But All My Friends Have Trampolines

Let's step away from Kotlin for a minute and revisit Groovy. Let's also revisit an alternate solution to the looping problem: recursion. To really demonstrate this feature, I am going to need to show you two solutions. First the recursive Groovy solution.

def fb
fb = {i, s = "" ->
    s = ((i%3)
        ? ((i%5)?i:"Buzz")
        : (i%5?"Fizz":"FizzBuzz")) + "\n" + s
    i <= 1 ? s : fb(i - 1, s)
}

fb(100)

This is a very simple recursive solution.  The base case happens on line 6 if i is one or less. Lines 3-5 determine the fizziness and buzziness, and line 9 sets the limit with the first call in. It works, trust me. In fact, try it out! You can download Groovy if you don't already have it, or you can try it out online.

Really. Go try it. Right now. I'll wait. It's important. No, seriously. Do it.

Tried it already? Great, now go change line 9 so that it will calculate FizzBuzz from one to one thousand. Did it work? Great, let's move on. Wait, what? It didn't work? Something about a StackOverflowError? Awesome, you did it right. Let's continue the discussion.

The principal problem with deeply recursive solutions is the sheer amount of bookkeeping needed to keep the recursion straight.  Since this is a deep rather than a broad recursion each and every step in the iteration takes up space on the stack.  Each iterative step costs memory, and memory is limited unlike clock cycles.

So how do we fix this? Install a trampoline, invite the neighbor kids over, and see who you can chain-bounce onto the roof.

def fb
fb = {i, s = "" ->
    s = ((i%3)
        ? ((i%5)?i:"Buzz")
        : (i%5?"Fizz":"FizzBuzz")) + "\n" + s
    i <= 1 ? s : fb.trampoline(i - 1, s)
}.trampoline()

fb(100)

This is the actual solution for this week.  With the simple addition of two calls to the trampoline method on lines 6 and 7 this recursive solution now uses a form of a technique called tail recursion, where instead of layering on another level of memory for the stack we take the existing layer, wipe it clean, and just jump back to the beginning of the method without a subroutine call. What was once a gosub is now a goto. And we are no longer beholden to the size of the stack but instead to the size of a String.  This isn't appropriate in all situations; you need to be sure you are not preserving information from one recursive call to the next.

The thing about trampolines is that trying to chain-jump onto the roof is how bones tend to get broken.  Which is why we don't have a trampoline, even though all of my children's friends do.

[code samples edited for line width]

Tuesday, March 13, 2012

Fizz 18 - Getting the Monkey Off of Your Patch

When talking about functions and methods there really isn't much difference between the two at first.  The real meaning and value doesn't start to show up until other esoteric features come into play: class inheritance, subtyping, polymorphism, late binding, multiple dispatch, meta-object protocols, category theory, and on and on and on.  That is the territory where PhDs are minted.  Without all that messy class theory the main difference between a method and a function is access to protected fields and an implicit this parameter.  Methods are also typically declared inside the structure declaring the class.  And methods for a particular class are almost always declared by the original implementer of the class.  If you need a new method in a class, you nearly always have to sub-class it.  Nearly.

fun Int.isFizz() : Boolean {
    return this % 3 == 0
}

fun Int.isBuzz() : Boolean {
    return this % 5 == 0
}

fun main(args : Array<String>) {
    for (i:Int in 1..100) {
        when {
            i.isFizz() && i.isBuzz() -> println("FizzBuzz")
            i.isFizz() -> println("Fizz")
            i.isBuzz() -> println("Buzz")
            else -> println (i)
        }
    }
}

Kotlin lets you, after a fashion, declare methods that look like they belong in a class.

On line 10 we declare the looping variable i to be of type Int. This is a class provided by Kotlin and is one that is not open to extension. Remember, all classes are closed by default in Kotlin. But look at lines 12, 13, and 14. We are calling an isFizz and isBuzz method in the Int class. Surely they are not just juicing up the standard library to look good in a standard demo, are they?

It's not the Kotlin team that is juicing things up. It is the program itself. Lines 1-7 are declaring extension functions on the Int class. This is how we are adding what look like methods onto the already closed class Int. Really, these are not all that different from Python methods except for some syntactic sugar low-carb coding where the explicit self parameter becomes an implicit this parameter, whose type is explicitly determined by convention of the function name: <class name>.<method name>.

This addition, however, does not literally add a method to the class it is extending.  The method is not subject to any late binding or multiple dispatch.  The particular method is entirely determined at compile time.  It also must either be present in the package at compile time, or explicitly imported in the file in which it is used. So a developer can write an awesome set of String extension functions, like String.invertEveryThirdWord(), and not have to involuntarily subject the world to their awesomeness by declaring them in the org.example.of.category.theory package, and only let the 1337 few who are worthy of its power bask in its glory.

Wednesday, March 7, 2012

#17 - Putting the Meth in Method

Functions and procedures are old hat. Those ideas were conceived of not only before I started earning a paycheck in Software, but before I myself was even conceived. Modern programmers use more current techniques like Object Oriented programming. A technique that was conceived of over a decade after functions and procedures were, and was still, sadly, before I was conceived. It's not modern, it's just the modern fashion.

class FizzBuzzer(max: Int) {
    var i:Int = 0
    val max:Int = max

    fun hasNext() : Boolean {
        return i < max
    }

    fun nextValue() : String {
        i++
        return when (i%15) {
            0 -> "FizzBuzz"
            5, 10 -> "Buzz"
            3, 6, 9, 12 -> "Fizz"
            else -> i.toString()
        }
    }
}

fun main(args : Array<String>) {
    val fizzer : FizzBuzzer = FizzBuzzer(100)
    while (fizzer.hasNext()) {
        println(fizzer.nextValue())
    }
}

The implementation for this solution is that of an iterator. But we won't dwell on that since it isn't what we are looking at in this post. Similarly, to talk about methods we have to embed them in a class, which is something we will just wave our hands over now and come back to later, because the true power of OO actually can be used to solve FizzBuzz. So I will wave my hands over that and move on to what you can't get enough of: methods.

The first thing you should notice about Kotlin classes is that the constructors are for the most part integrated into the class body. There is a notion of a primary constructor which is in place to discourage the proliferation of constructors. When combined with default parameters it handles nearly all the reasons for constructor proliferation, leaving for the most part only weird stuff like copy and pseudo-copy constructors. It is unclear in the current docs if there is any allowance for other constructors. I am not sure if this is a bad thing.

Next, we look at the member variables. In Kotlin all fields are properties by default. You can change this with a private modifier. Also, because fields are properties by default each property either must be initialized or have a get defined as part of the declaration. What is good practice in C (undefined fields can be practically random and a security risk), and implied in Java (zero, false, or null as appropriate) is compulsory in Kotlin.

When it comes to actually instantiating an instance of the object on line 21 we see that the new operator is not required.  In fact, it is not even allowed! Calls to create objects look just like regular function calls. While you may argue that this limits the namespace of functions I counter by saying this is a good thing, since functions named like objects usually create bugs. Add in the practice that class names generally get the German Noun treatment and distinguishing a constructor call from a method call should be trivial.

Finally, an implementation note. It may look like the main function doesn't belong to a class. But because of the structure of the JVM it does belong in a class, as a static method of that class. The class is (currently) named namespace and it is declared in the package of the code it is written in.
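
In plain Java terms, and going only by the description above, the top level main ends up with roughly this shape on the JVM. This is a sketch rather than decompiled output; the class name just follows the "namespace" naming described in the previous paragraph.

// Approximately what the JVM sees: main as a static method on a holder class.
public final class namespace {
  public static void main(String[] args) {
    // ... body of the Kotlin main function ...
  }
}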

Be careful when picking up Object Oriented programming. You may think you can stop anytime you want. But you won't ever want to. And when you try to stop, your language won't let you.  You will be coding methods for the rest of your life.  Even languages like Clojure are fooling you.  It's all methods all the time.

Tuesday, March 6, 2012

#16 - Putting the Fun in Function

Kotlin has functions. Whodathunk? You've been seeing them since the first code samples but perhaps we should put it to real use.

fun fizzBuzz(i : Int) : String {
    return when (i%15) {
      0 -> "FizzBuzz";
      5,10 -> "Buzz";
      3,6,9,12 -> "Fizz";
      else -> i.toString();
    }
}

fun main(args : Array<String>) {
  for (i in 1..100) {
    println (fizzBuzz(i));
  }
}

The function declaration syntax is very reminiscent of Pascal, the third programming language I ever learned (various BASICs and Action! were the first two). These differ from C[#/++] and Java in several ways.

One difference is that there are only three letters for fun, rather than all 8 for function. Even with IDE code completion, f [ctrl+space] [Enter] is still three keystrokes. I guess they could put in an extra space to save one, but then I'd be compelled to count [ctrl+space] as two keystrokes (even though it is one chord).

The next difference is that the type follows the symbol name. This is more of a western approach to surnames than the typical braces and semicolons approach which places the type before the symbol, if you consider the type of the symbol to be the family name of the symbol. In a sense, it places the identity of the symbol above that of its role.

One other quirk is that there are no primitive types in Kotlin, or there are no object types of primitives (via the Int type declaration on the first line). Actually there are object primitive types, but they are the nullable primitive types. Except that you use a capitalized name, which implies they are Object types, which they are unless they happen to have a nullable modifier on the object type, which makes it a different type rather than a modification of the original type. So to express this in a particle physics metaphor this is an anti-quirk to a Java quirk, which means they annihilate each other. Except they are quirks not quarks, so a strange quark, or quirk, is left over. Which only matters when comparing it to a particular implementation platform. Charming.

Finally, unlike Pascal there are no procedures. Everything is a function, even if it returns Nothing. The closest analog to a procedure would be a function with no declared return type, which is then implied to mean Tuple0, a tuple with zero members, which appears as of the writing of this post to be distinct from Nothing. Remember, {ε} ≠ ∅. Isn't math fun?!

Tuesday, February 28, 2012

FizzBuzz 15 - Technically Correct (But Oh So Wrong)

I am going to keep with my theme idea for "FizzBuzz" counts.  These solutions are the ones that make your eyes bleed or make you brace for impact.  Technically this solution is correct, but it is oh so wrong.  This solution is in Text.  Not TeX, Text.

1
2
Fizz
4
Buzz
Fizz
7
8
Fizz
Buzz
11
Fizz
13
14
FizzBuzz
16
17
Fizz
19
Buzz
Fizz
22
23
Fizz
Buzz
26
Fizz
28
29
FizzBuzz
31
32
Fizz
34
Buzz
Fizz
37
38
Fizz
Buzz
41
Fizz
43
44
FizzBuzz
46
47
Fizz
49
Buzz
Fizz
52
53
Fizz
Buzz
56
Fizz
58
59
FizzBuzz
61
62
Fizz
64
Buzz
Fizz
67
68
Fizz
Buzz
71
Fizz
73
74
FizzBuzz
76
77
Fizz
79
Buzz
Fizz
82
83
Fizz
Buzz
86
Fizz
88
89
FizzBuzz
91
92
Fizz
94
Buzz
Fizz
97
98
Fizz
Buzz

If you ever interview with me and attempt this answer, be forewarned: I will make you present a complete solution.  Even if it takes the whole time I have allotted.  The only recovery would be to change your answer to a one liner.

However, if you use a specific "interpreter" this is, indeed, source code that will solve the problem. On Linux and Mac one popular solution is cat. More advanced candidates would use more or less. Those candidates I would send to the IT department to vet (since this is evidence of command line aptitude) but I keep them off of the programming team. The interpreter on Windows is type, and I am not sure where I would send candidates who knew that answer.

Friday, February 24, 2012

#14 - Express Yourself

When creating a new language the designers really need to look at the cool features from current languages and steal the best parts. But you need to be cautious about not just picking up fashions. Remember: fashion is what you look at 20 years later and say "That looks so ugly."

But the reality is that only time will tell what is simply fashion and what is not. One of the cooler features of Ruby is that everything is an expression. Control structures included. That may be why Ruby lacks the three expression for loop. That and a mild aversion to semi-colons. I'd post this code in Ruby, except that the new kid on the block is Kotlin, and everyone loves to play with the new kid.

fun main(args : Array<String>) {
  for (i:Int in 1..100) {
    println (
      when (i%15) {
        0 -> "FizzBuzz" 
        5, 10 -> "Buzz"
        3, 6, 9, 12 -> "Fizz"
        else -> i
      })
  }
}

This is only a mild variation from the previous solution. Instead of printing out the results inside the when branches, the whole result of the when block is printed out. Structures like this figure into some of the design decisions, like making the case statements only one expression. Every expression in Kotlin returns a value. Even if it returns Nothing (and there is a language construct for Nothing).

Hopefully the "everything is an expression" thing isn't just fashion.  Ideally when you come back in 20 years (or 20 weeks) and see yourself using control structures as expressions you won't ask yourself  "What was I thinking?"

Tuesday, February 21, 2012

#13 - When in St. Petersburg

That was a short break. Basically I wasn't liking the structure I was forcing myself into. Once I get a large corpus of solutions I can look into formatting, but in the meantime I will focus on stuff that entertains me, free of schedule constraints or themes.

But last Tuesday JetBrains publicly released the beta of Kotlin, their entry into the alternative not-just-JVM languages. There are several very interesting features of this language. And best of all it runs in an IDE I use on a daily basis: IntelliJ Idea. Since this will be the first of several posts, I will first present the solution that most of them will build on.

fun main(args : Array<String>) {
    for (i in 1..100) {
        when (i%15) {
            0 -> println("FizzBuzz");
            5,10 -> println("Buzz");
            3,6,9,12 -> println("Fizz");
            else -> println(i);
        }
    }
}

Instead of a switch statement Kotlin has a when statement (lines 3-8). This is for the most part syntactic sugar, but real sugar not the high fructose code syrup sold to public school programming students. At first blush it looks like changing the word when for switch and some other changes in the case blocks. And in this example, it is all the same. But it is denser and hence reads better.

First, you will note that there is no implicit fall-through. Each case statement matches up to exactly one expression (and that expression can be a multi-expression block). To fall through the rest of the switch you must explicitly use the continue statement. And even then it will simply continue to evaluate itself against the conditions rather than jump to the next block. So the old-school switch drop-throughs which were the source of some mysterious bugs are no longer allowed implicitly.

Second, because of the single use structure of the blocks, multiple cases are combined into one comma-separated expression. This is possible because there is no possibility of fall-through between each label.

Finally, the default case (denoted by the else case) is required except in instances when "the compiler can prove that all possible cases are covered with branch conditions".

For this solution, I took advantage of the 15 number pattern in FizzBuzz and simply switched on the mod of 15. There is one case for a FizzBuzz, two for a Buzz, and four for a Fizz. Everything else falls into the else block. This actually reads fairly close to the text description of the problem statement.

The IntelliJ team writing Kotlin is based mostly in Russia. The name Kotlin is from the name of an island near St. Petersburg, which is not really that close to Rome.

Monday, February 13, 2012

This isn't Dr Scholl's, going on hiatus.

This blog project isn't quite "gelling" like I expected it to mentally, so I am going to take a hiatus of indeterminate length. Too many different ideas that don't come down to a focal point just yet. I think the real problem is that I really need to plan all 100 entries into themed blocks, kind of like what I was doing with repetition.

The whole Fizz for languages, Buzz for longer elaborations, and FizzBuzz for esoteric languages was a novel setup, but the content of the number entries was letting me down.  So if and when I come back to this, I will at least go at it with an entire roadmap (if not text and worked solutions) rather than a rough idea.  However I see this more as a fast paced conference presentation about "Everything you need to know about programming I can show you in FizzBuzz" with 100 variations in 50 minutes, across multiple languages.  But that may be because I prefer speaking to writing.

I do have some code in the hopper that abuses Java language features.  I will post some of those in April as it keeps with the American theme of April Fools Day.

And the title is a reference to a really annoying shoe inserts ad.

Thursday, February 9, 2012

Fizz 12 - A Snake, Hiding in the Whitespace (Python)

For this Fizz, I have decided to tackle one of the other preferred languages at Google: Python. Since there is enough to talk about with Python I decided to do a relatively straight port of the first FizzBuzz I posted.

for i in range(1, 101) :
    if i%15 == 0 :
     print "FizzBuzz"
    elif i%5 == 0 :
     print "Buzz"
    elif i%3 == 0 :
     print "Fizz"
    else :
     print i


Like most "scripting" languages there is not a whole lot of overhead and ceremony in this code sample.  Lots of items are implied via their absence.  The namespace has a nice default, the main procedure to call is simply the statements at the root of the parse tree.  And there is a large set of functions available in the default namespace.

Another unique feature of Python is that white space is significant.  Very significant.  There are no opening and closing braces to group the control structures.  You must indent deeper than the parent control structure and maintain that indentation for multiple statements to be a part of that structure.  In essence, the "good practice" of indenting nested control structures is hard wired as a requirement into the grammar of the language.

But the most relevant part of Python in relation to the current subject of discussion is how it loops.  The calculations and decisions in this example are merely syntactical variations of what we have seen before.  Python lacks the three expression loop seen in braces-and-semi-colon languages.  Instead it relies on an "iterator" based loop where the decisions about what the next item is and when to stop are encapsulated inside an object called an Iterator.  In this case, the range function returns an iterator that returns the numbers from one to one hundred in order.  Ignore the fact that the second argument in the function is actually greater than one hundred.  Software is rife with off by one errors.

One last point.  The indentation must be consistent within the control structure.  In other words if you start indenting four more spaces than you had before then each statement in the control structure must have the same amount of indentation.  Whether they be tabs or any number of spaces.  And you can even mix tabs and spaces, if you are consistent.  This example mixes tabs and spaces for the print statements.  So, applying the FizzBuzz hypothesis we can conclude that typical programmers can do indentation consistently.  The real question is if they will (and they won't, because everyone has seen enough evidence to the contrary).

Tuesday, February 7, 2012

#11 - Define Recursion: See Recursion

There is one last principal form of repetition remaining to be discussed. This form of repetition is the bane of many first year Computer Science majors and dreaded even more by the professors and TAs who must answer questions that are best paraphrased as "I just don't get it." That principle? Recursion. I have found no more precise and snarky answer than found in this stackoverflow answer:
If you understand recursion, you're done. Otherwise go read this StackOverflow question...
The link, of course, is back to the very same question.
public class FizzBuzz11 {

  public static void main(String[] arg) {
    recurse(100);
  }

  public static void recurse(int num) {
    if (num > 0) {
      recurse(num - 1);
      if (num % 15 == 0) {
        System.out.println("FizzBuzz");
      } else if (num % 5 == 0) {
        System.out.println("Buzz");
      } else if (num % 3 == 0) {
        System.out.println("Fizz");
      } else {
        System.out.println(num);
      }
    }
  }
}

There are no control structures traditionally associated with repetition in this sample.  No for loops of any stripe, no while or do/while.  Not even a GOTO statement (that would be harmful). What does do the repetition is the call to the recurse method at line 9. That line, of course, is in the very method that it is calling. A snake eating its own tail.

Of course, a snake will never be able to fully consume itself.  This is assuming it could get over the pain of biting its own tail, which is about as bad as the anguish felt by many students experiencing their first semester of programming.   At some point things will either hurt too much or the snake's stomach will get too full.  This is called the "base case" where a simple answer is known and we can unwind the stack.

There are a lot of really neat intricacies that can be used with recursion, but this task is much too simple to demonstrate them in this form.  The form used for this recursion is to count backwards (line 9) from the last answer needed (line 4), stop when we hit zero (line 8), and when unwinding the stack calculate the answer for the number at hand (lines 10 to 18).  There is no work done both before and after the recursion, only after.  There is no multiple winding and unwinding of the stack, just one dive in and out.  And there is no use of the returned values from the recursion, in fact there is no return value at all.

All of this aligns with my experience with recursion seen 'in the field.'  Simple recursion can be handled by anyone deserving to be paid market rates.  But throw in the more interesting cases (as a computer scientist would consider interesting) and you are better off shipping it in a binary compiled format.  Because if the typical developer is given a chance to mess with the source code, they will break something.  Until the point they suddenly understand it.

Thursday, February 2, 2012

Buzz 10 - How Many Times?

I'm going to be changing up my plan for Buzz entries.  I had intended on writing 500-1500 word essays tangentially related to the FizzBuzz at hand (much like my tiny e-book on Amazon) but I see several problems with it. First, it can ruin the natural progression of solutions if I hold out one solution for a particular post I have planned.  Second, it will ruin the general feel of this blog, going from technical to random life experiences and then back.  Third, writing that much text on a deadline is kinda hard with my current life balance.  Really, should 1st graders have homework?  Even if it is reading with a parent 20 minutes a night?

So here is the new Buzz plan... for each Buzz I will revisit a language introduced in a previous Fizz entry, and re-work a number entry not in that language using some of the unique features of the language.  For this first entry the selection is limited, but actually relevant to the current numeric theme, especially since Groovy has so many ways to do repetition.

100 times {
  int i = it+1
  if ((i % 3 == 0) && (i % 5 == 0)) {
    println "FizzBuzz"
  } else if (i % 3 == 0) {
    println "Fizz"
  } else if (i % 5 == 0) {
    println "Buzz"
  } else {
    println i
  }
}

Lines one and two are the focus for this post, because the rest of it is a simple transliteration to standard Groovy.

On first reading, it does what it says. Repeat the enclosed block 100 times. But in the syntax sugar and details of the API are some common assumptions. First, what the current count is matters, but there is no explicit variable for that. Each closure without an argument block gets a single variable it that you don't need to declare and just exists. Kind of like this in methods. (And when is some programming language going to introduce that?)

The next assumption is what number to count from. You would think that counting from 1 would be sensible. But Groovy uses zero based arrays, like nearly all popular languages, so instead it starts all counting from zero. Line 2 is an adjustment so we don't have to extend our math into the rest of the method.

The rest is pure syntactic sugar. When a symbol that could be a method trails an object with no punctuation it is presumed to be a method call in Groovy. And when a single closure follows a method call, whether or not it is in the argument list, it is added to the argument list of the method. The integer 100 is boxed up into a java.lang.Integer object. And via the Meta-Object-Protocol we can graft in a times method onto the Integer class even though it doesn't exist in the plain JDK.

The end result is something that looks a little bit more like a spoken sentence, but not too close.

Tuesday, January 31, 2012

Fizz 9 - Fire Ready Aim (Dart)

For this Fizz I chose one of the most recent languages: Google's Dart.  While trolling Google Plus I found this solution by Yosuke Sato.

// fizz buzz
main() {
  for(int i=0;i++<100;)
    print((i%3==0?"Fizz":"")+(i%5==0?"Buzz":"")||i);
}

This is a fairly dense rendering of FizzBuzz and gets right to the point without much ceremony: one loop, one single print statement, and a minimum of braces and parentheses based on operator order of precedence.

Things get interesting when we encounter the short-circuit or operator.  On the left we have a string type, and on the right we have an integer type.  Java would reject this out of hand because short-circuit logical operations can be applied only to boolean primitive types.  Dart, however, has more in common with JavaScript, and uses an entirely different approach.  First the left hand side is evaluated, and what matters is whether it is a false-equivalent value:  false, null, zero, or the empty string.  The && operator will return the left side if it is false-equivalent, and the || operator will return the right hand side if the left is false-equivalent.  With boolean values you get simple solutions.  With string and integer types you get more interesting uses.  Such as using || to resolve defaults and the && operator to null-check complex objects.

As an aside, the dartlang.org sample compiler will refuse to run this FizzBuzz in checked mode.  In unchecked mode it will complain about the types.  Perhaps it does have more in common with Java than I realized at first.

Thursday, January 26, 2012

#8 - Do while I'm doing

Now that we have examined the while loop in C style languages, it's time to consider the while loop's delinquent little brother: the do-while loop.  This is a fairly unique control structure that requires a keyword both at the beginning and the end of its structure.  Kind of like the try-catch clause, except that you only try once but you do the loop at least once, and then consider whether you continue or not.  This does have one slight advantage in readability:  if you decide not to continue you are right at the code block you will evaluate next.

public class FizzBuzz08 { 
  public static void main(String[] args) { 
    int i = 1; 
    do {
      if ((i % 3 == 0) && (i % 5 == 0)) { 
        System.out.println("FizzBuzz"); 
      } else if (i % 5 == 0) { 
        System.out.println("Buzz"); 
      } else if (i % 3 == 0) { 
        System.out.println("Fizz"); 
      } else { 
        System.out.println(i); 
      } 
    } while (i++ < 100); 
  } 
}

This code isn't dramatically different from the previous entry where the while loop was considered.  The body is still on lines 5 through 13, and the first expression is on line 3.

The differences come in the second and third expression.  I combined both of them on line 14 for stylistic reasons.  First, the order of the expressions would not have been maintained since the increment would occur before the test.  I then moved the post increment into the while test expression so we can use the less than operator instead of the less-than-or-equal.  One less keystroke.  Although since I am programming in Java I don't know why I worry about minimizing keystrokes.

This loop is not a direct decomposition of a for loop.  The loop body will always be evaluated once regardless of the truth of the expression, so this example is not a good showcase of that always-evaluate-once behavior.  If a do-while smells like a simple while, typical developers can code it correctly.  However, whether they can use the always-evaluate aspect correctly is up for debate.
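
To see the always-evaluate-once behavior in isolation, here is a minimal sketch of my own (not part of the solution above): the condition is false from the very first test, yet the body still runs once.

public class DoWhileOnce {
  public static void main(String[] args) {
    int i = 200;
    do {
      System.out.println(i);   // prints 200 exactly once
    } while (i++ < 100);       // 200 < 100 is false, so the loop ends after one pass
  }
}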

Tuesday, January 24, 2012

#7 - Decomposed Loop

It's hard to take a look at a programming solution to FizzBuzz from an absolute ground state.  There are at least three concepts that need to be working at a basic level for a solution to be sensible: repetition, decisions, and calculation.  To analyze any of those three aspects you need two of them standing fairly unchanged while one of them is decomposed.

First I shall decompose the nature of the solution by changing the basic ways a loop can be constructed.    Given the prevalence of C and Java as arguably the most popular programming languages for over a quarter century, the three expression for loop is something that any professional or accomplished amateur should understand instinctively.  But what does that three expression loop look like when decomposed in a semantically similar fashion?


public class FizzBuzz07 {
  public static void main(String[] args) {
    int i = 1;
    while (i <= 100) {
      if ((i % 3 == 0) && (i % 5 == 0)) {
        System.out.println("FizzBuzz");
      } else if (i % 5 == 0) {
        System.out.println("Buzz");
      } else if (i % 3 == 0) {
        System.out.println("Fizz");
      } else {
        System.out.println(i);
      }
      i++;
    }
  }
}

The three expression for loop actually has four parts.  The part that often gets ignored is the loop body, expressed by lines 5 through 13 and is not counted in the three expressions.  Since C and Java have zero based arrays, we could call this the zeroth expression.

The first expression sets initial values and variables and is seen on line 3.  Strictly speaking this is not a perfect decomposition, since the scope of the variable could extend beyond the loop body if there were statements other than the loop.  The second expression is a test, and is found inside the while expression on line 4.  The last expression is found on line 14, executed after the content of the body.
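
One way to keep the decomposition faithful on that point, sketched here as my own aside rather than as part of the solution above, is to wrap the while loop in a block of its own so the variable cannot leak past it.

public class ScopedDecomposition {
  public static void main(String[] args) {
    {
      int i = 1;                 // first expression, now confined to the block
      while (i <= 100) {         // second expression
        System.out.println(i);   // stand-in for the FizzBuzz body
        i++;                     // third expression
      }
    }
    // i is out of scope here, just as it would be after a for loop
  }
}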

As an aside, the last entry on Ceylon didn't feature a three expression loop because Ceylon doesn't support the syntax.  That was the position Groovy once took, but it later relented when it added the "classic for loop" in its 1.5 release.  However, Ceylon and Groovy both have a while loop construct.  So by applying the FizzBuzz Hypothesis we can see that programmers don't need a nice neat for loop expression.  But it sure looks prettier with one.

Thursday, January 19, 2012

Fizz 6 - This or That (Ceylon)

Ceylon is a new JVM language being created by Red Hat and Gavin King.  You may recognize that last name. Gavin also created Hibernate, a data library that underlies many e-commerce sites and is integrated (as the default option) into Grails.  Ceylon puts a lot of emphasis into how it handles types, so it seemed like a natural language feature to abuse.

interface Fizz {}
interface Buzz {}

class CFizz() satisfies Fizz {
  shared actual String string = "Fizz";
}

class CBuzz() satisfies Buzz {
  shared actual String string = "Buzz";
}

class CFizzBuzz() satisfies Fizz & Buzz {
  shared actual String string = "FizzBuzz";
}

Fizz|Buzz|Natural fizzIt(Natural i) {
  if ((i % 3 == 0) && (i % 5 == 0)) {
    return CFizzBuzz();
  } else if (i % 5 == 0) {
    return CBuzz();
  } else if (i % 3 == 0) {
    return CFizz();
  } else {
    return i;
  }
}

shared void FizzBuzz06() {
  for (i in 1..100) {
    print(fizzIt(i).string);
  }
}

The particular typing feature I am using is called "Union Types."  When you declare a type as TypeA|TypeB (note the pipe between the two types) you are telling the Ceylon compiler that you will have either TypeA or TypeB, it doesn't matter.  And you won't have TypeC (for this example TypeC is not a TypeA or TypeB).  Without this construct I would be forced to declare the return type of fizzIt to be an Object and hope no stray classes like Klang show up.

There also exist Intersection Types (TypeA&TypeB), which means that the type is both a TypeA and a TypeB without having to create some other interface that joins the two. Instead of creating a new interface RunnableFuture<V> you would just have Runnable&Future<V> without the need for a third class. Throw in stuff like RunnableScheduledFuture<V>, which could have been declared as Runnable&Delayed&Future<V>, and the code savings start piling up.

Tuesday, January 17, 2012

Buzz 5 - The FizzBuzz Hypothesis

For the Buzz entries I intend to write something more involved. These entries will have a FizzBuzz solution, but the text will only be about FizzBuzz. And if you check various dictionaries the act of talking 'about' FizzBuzz gives me a lot of latitude to talk about practically anything tangentially related.

Today, however, I intend to stay fairly close to FizzBuzz itself. What I want to talk about is something I will call "The FizzBuzz Hypothesis:"
Given a particular language feature or programming technique, if it cannot be shown as an integral part of a FizzBuzz solution then the typical programmers will not be able to use it correctly.
This statement is worth deconstructing from multiple angles. First there is the logical structure of the hypothesis:
¬r → ¬q
where r is the ability to express the programming language feature or technique in the form of FizzBuzz and q is typical programmers using it correctly. This also implies something by contraposition. If typical programmers are using a language feature correctly then there must be some way to make it integral to a FizzBuzz solution, and if it can be proven that there is no such FizzBuzz solution then clearly the typical programmer isn't using it correctly, all other evidence notwithstanding. Also note the word choice here. I did not say properly or effectively, I said correctly. There are proper and effective ways to do things that are not correct. I am reminded of this every day I watch my children do chores around the house.

There is also the question of what I mean by the typical programmer. Since there is a high amount of variability on the high end of the "mean average programmer" the mean pulls up the average such that more than half of the programmers are less than average. There is also the possibility of a "median" programmer, where half the programmers are better or worse. This too has its faults, because that means that half of the programmers may not be able to even program a FizzBuzz solution. Given a large enough sample set that may be true, but I don't think we are talking about anyone who has sat through a LOGO class in elementary school or hand coded some HTML. When I say "typical programmer" I mean someone within one or two standard deviation of the mean. I am also assuming that programming ability is a Poisson distribution. (I think I've just demonstrated everything I can remember from one semester of college statistics for technical majors.)

Now why do I express the law as a double negative implication? I blame the self-esteem driven coddling I received in public schooling. When you are failing you are "not succeeding" and when you are rebelling you are "not conforming." Everything is expressed as the desired outcome, and rather than using an antonym the desired word is used, with a negation in front of it. As a good Computer Science major I spent a few weeks in propositional logic. This math subset has a lot of theorems. And one of them is Transposition, where we can swap the order of the implication as long as we negate both sides (q → r becomes ¬r → ¬q). Taking a desired statement, adding two negations, and still having it mean something is really cool.

This then begs the question: if a typical programmer using a language feature or pattern correctly means it can be expressed as a fundamental part of a FizzBuzz solution, why don't I say it like that? The problem is we start with the hypothetical typical programmer. Next, we would need to see them use the language feature or pattern correctly. And only then can we conclude an integral FizzBuzz solution exists. I don't like this formulation because we start with end results looking backwards rather than a prediction looking forward. We would start with real programmers correctly using real technologies and concluding something arbitrary and contrived. But if we start with something arbitrary and contrived we can project onto real programmers and real situations without waiting for those real situations to occur. Much like concluding that a ball will hit the ground without dropping it. This is especially useful for technologies that have not seen widespread use. By starting with the technology we can then use this hypothesis as a lens to examine potential technologies a priori rather than as a post mortem.

The reason I came to formulating this hypothesis is because of the depths of thinking I went to while contemplating the various parts of this project. Part of that was reflecting on why it was a good interview question for testing programming skills. Not design, engineering, or architecture skills, but programming skills in the raw. It's a bit like the NFL combine. Before players are drafted by professional teams the teams get all of the top candidates together and test basic skills. How fast can you run, how much can you bench press. Let's see you catch balls, see you throw balls, and see how smart you are. (No kidding, the NFL sits them down for an IQ test. And offensive tackles tend to be smarter than the quarterbacks, and way smarter than the wide receivers.)

But what makes FizzBuzz a good proxy for programming aptitude? FizzBuzz has three key aspects to most solutions: repetition, calculation, and decision. And these three aspects pretty much sum up most of what programming is about. There are some exceptions, and most of those interactions are hidden behind API calls. So if a programming language feature or pattern doesn't help you repeat, calculate, or decide, what good is it for programming?

For this entry I am going to use the FizzBuzz Hypothesis to answer a question that has bothered me since the Java programming language was released: why is there both a short circuited and a fully evaluated boolean operator in the Java Programming Language? And can you use both meaningfully?

public class FizzBuzz05 {

  static boolean lessThan100(int val) {
    return val < 100;
  }

  static boolean printNumber(int val) {
    System.out.print(val);
    return true;
  }

  static boolean maybeFizz(int val) {
    if (val % 3 == 0) {
      System.out.print("Fizz");
      return true;
    } else {
      return false;
    }
  }

  static boolean maybeBuzz(int val) {
    if (val % 5 == 0) {
      System.out.print("Buzz");
      return true;
    } else {
      return false;
    }
  }

  public static void main(String[] a) {
    int num = 0;
    while (lessThan100(num++) &&
      (maybeFizz(num) | maybeBuzz(num) || printNumber(num))) {
      System.out.println();
    }
  }

}

Most of this solution is spent writing helper methods that make the key point more readable. There are four different methods that do four separate things required by the solution. One method tests if the number is less than one hundred. One method prints out the number that is passed in. One method, when passed a number divisible by three, prints out "Fizz." And there is one last helper method that does the same as the last, except testing for 5 and printing "Buzz." If the Fizz or Buzz methods print out a value, then they return true. If no value was printed, false is returned. None of these helper methods print a new line; that will be the task of the main method. You may be wondering where the helper method is that prints out "FizzBuzz". The reason we don't need one is the point of this particular exercise.

The magic happens in line 33, a group of three of the helper methods joined by two different boolean or operators. The first one is the fully evaluated or "bitwise" version and the other is the short circuited or "logical" version. According to operator precedence the bitwise operator binds tighter, so it is evaluated first. When going through the loop both sides of this operator will be evaluated, regardless of the results. So in the cases where we have a number evenly divisible by three and five we will get both sides of the expression evaluated, even though we know after Fizz that the result will be true.

The second operator, of lower precedence, is the logical or short circuit operator. If either of the methods on the left hand side had printed out their Fizz or Buzz value then we would get a true result on the left side. By the short circuit rule we would not evaluate the right hand side and no number would have been printed. This is a good thing, because printNumber will always print the number regardless of what it is. If we print out Fizz or Buzz or FizzBuzz we are not supposed to print out a number at all. But if neither of the methods returns true and no other values have been printed out then we need to print out the number. Since the left hand side is false in those cases the right hand side is evaluated. And as a side effect of the evaluation the number is printed.
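
To make that concrete, here is a quick trace of the loop condition for three representative values, using the helper methods defined above (my own summary, not from the original post):

num = 3:  maybeFizz(3) prints "Fizz" and returns true; maybeBuzz(3) still runs because | evaluates both sides, and returns false; true || printNumber(3) short circuits, so no number is printed.
num = 15: maybeFizz(15) prints "Fizz" and maybeBuzz(15) prints "Buzz"; both run thanks to |, and printNumber(15) is skipped.
num = 7:  both helpers return false without printing anything, so printNumber(7) runs and prints 7.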

So there, in one single expression, we have both a fully evaluated and a short-circuited or operator, and each one is doing meaningful work. It looks like the two can be used together, correctly and meaningfully.

Thursday, January 12, 2012

#4 - Wrap the Math in a Method

Sometimes I think I was born to program; other times I think it's a giant cosmic coincidence. When I was in fifth grade my school got lucky and someone donated 15 VIC-20 computers, complete with cheapo CRTs. Because of this, all of the fifth graders learned how to program BASIC that year. All of the fifth graders, not just the nerds. In junior high school there was a computer elective as well, which I took at the earliest possible time. Then when I got into high school I was in for a rude shock: you had to be a junior to take a computer programming class. Two whole years without official credit for goofing off on a computer. What made the wait harder was that it was an honest-to-goodness programming class instead of the survey class I had in junior high.

So I did what any other socially awkward nerd would do: I asked the teacher if I could get an exception. I got that exception and took the class the second semester. The class was well within my capabilities; I had to endure complaints from other students that I was a curve buster, since my score was 105% once the extra credit was added in. It was the only time I have ever been accused of "busting the curve." But it wasn't all ice cream and lollipops. The teacher didn't always like the way I coded the solutions, particularly the fact that I never used procedures. My programs were all one long stream of instructions, just like a VIC-20 BASIC program, except there were no line numbers. My thinking was that if you are only going to call a procedure from one point in the code, why type all those extra keystrokes? Mr. Shepard was not impressed.

public class FizzBuzz04 {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      System.out.println(theValue(i));
    }
  }

  public static String theValue(int i) {
    if ((i % 3 == 0) && (i % 5 == 0)) {
      return "FizzBuzz";
    } else if (i % 5 == 0) {
      return "Buzz";
    } else if (i % 3 == 0) {
      return "Fizz";
    } else {
      return Integer.toString(i);
    }
  }
  
}

I hope Mr. Shepard is impressed now.  This actually does, believe it or not, improve the readability and maintainability of the code.  It pulls out the biggest third of the problem (doing ugly math) and keeps the other two thirds (counting to one hundred and printing out stuff) in a nice neat package.

From the main method's point of view, the function it calls is a wondrous and magical black box. Data goes in, answers come out. This is a fundamental abstraction that allows a lot of genuinely magical code to work. Consider encryption. Do you know about congruence in number theory? What? Numbers have theories?! And what's this about Chinese remainders? Don't know, and since I have a procedure I can call, I don't care.

It's magic.
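Checkable magic, too. Since theValue is its own method, it can be spot-checked without running the whole loop or scraping console output. Here is a rough sketch of that idea (the FizzBuzz04Check class and its check helper are my own invention, not part of the original post; it assumes FizzBuzz04 is on the classpath):

public class FizzBuzz04Check {

  public static void main(String[] args) {
    // A few hand-picked inputs and the answers we expect from theValue.
    check(1, "1");
    check(3, "Fizz");
    check(5, "Buzz");
    check(15, "FizzBuzz");
    check(98, "98");
    System.out.println("All checks passed.");
  }

  static void check(int input, String expected) {
    String actual = FizzBuzz04.theValue(input);
    if (!expected.equals(actual)) {
      throw new AssertionError(
          "theValue(" + input + ") returned " + actual + ", expected " + expected);
    }
  }
}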

Tuesday, January 10, 2012

Fizz 3 - Elvis Has Left the Building (Groovy)

Whenever I count "Fizz" in this series I will examine a language that has not been examined before. There is no shortage of candidates, but the trick will be to keep the language choices relevant and readily verifiable (this could be tricky). Ideally I will be able to consider a feature that is relatively unique to that language or its family of languages.

For this post I will examine the Groovy programming language. Picking a relatively unique feature for Groovy is a fairly hard prospect since it tries to pull the best parts from many other languages. It is based on Java, so it is a C-like curly brace language. It compiles to JVM bytecode. It is object oriented, but it delays its method dispatch as late as possible, enabling multiple-dispatch method invocations like those seen in Common Lisp. It uses closure-like blocks much like you would see in Ruby.

There are a lot of neat tricks in Groovy, so I see myself coming back to this language later in the series. But if I had to pick the most distinctive feature of Groovy, or at least the one it has done the most to popularize, it would be the "Elvis operator": ?:, a C-style ternary operator with the middle operand removed.

for (int i = 1; i <= 100; i++) {
  String answer = ""

  if (i%3 == 0) {
    answer += "Fizz"
  }

  if (i%5 == 0) {
    answer += "Buzz"
  }
  
  println answer ?: i
}

This isn't the most idiomatic Groovy code for this solution; it is intended to demonstrate just the Elvis operator with all other awesomeness removed. I have a three-statement solution that shows off other Groovy features such as blocks and ranges, but for the purpose of this post we should focus on just the one feature. There, in the println at the end of the loop, is the Elvis operator. Why is it called Elvis? Rotate it clockwise and the question mark forms Elvis Presley's hair curled just above his eyes.

But what does it do? It clearly has no hips to shake suggestively, since he is only being expressed from the nose up. Elvis picks the first "true" item it sees and returns only that item. More technically, the left hand side of the operator is evaluated for "Groovy truth," and if it comes up false (null, false, zero, or an empty string, in most cases) then the right hand side of the operator is returned. If the left hand side evaluates to anything else, the left hand side itself is returned.

This roughly equates to the C-style expression x ? x : y. If the middle operand leaves the building, along with the whitespace, you are left with the Elvis operator: x ?: y. Even though Elvis has left the building, his legacy remains.
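For comparison, here is roughly how that println line has to be spelled out in plain Java, which has neither an Elvis operator nor Groovy truth. This is a sketch of my own (the ElvisByHand class name is made up), not code from the Groovy post:

public class ElvisByHand {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      String answer = "";
      if (i % 3 == 0) {
        answer += "Fizz";
      }
      if (i % 5 == 0) {
        answer += "Buzz";
      }
      // Groovy: println answer ?: i
      // Java has no Elvis operator and no Groovy truth, so the "is answer
      // empty?" test has to be written out in the ternary by hand.
      System.out.println(answer.isEmpty() ? Integer.toString(i) : answer);
    }
  }
}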

Thursday, January 5, 2012

#2 - Is That Your Final Answer?

When it comes to minor tweaks in the presentation of the problem, there is more than one way to make "the smallest change." Rather than printing out the result immediately we can defer the printing and keep an answer in a variable, changing it as needed.

public class FizzBuzz02 {
  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      String answer = Integer.toString(i);
      if (i % 3 == 0) {
        answer = "Fizz";
      }
      if (i % 5 == 0) {
        answer = "Buzz";
      }
      if ((i % 3 == 0) && (i % 5 == 0)) {
        answer = "FizzBuzz";
      }
      System.out.println(answer);
    }
  }
}

This example actually introduces two important changes. The first is the introduction of a variable that we change as we go. Once we are done deciding what the result should be, we print out the contents of the variable.

The second important change is that we can consider each condition on its own. The most general case is considered first and the more specific cases are considered later, and because the most specific one comes last it can overwrite any previous answer. Put these two changes together and you can see that the variable will be set once, twice, or four times. Never three.
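If you don't want to take the once-twice-or-four claim on faith, here is a quick sketch (a separate checking program of my own, with a made-up class name, not part of the solution above) that counts the assignments for each number from one to one hundred:

public class AssignmentCount {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      int assignments = 1;                            // the initial Integer.toString(i)
      if (i % 3 == 0) assignments++;                  // "Fizz"
      if (i % 5 == 0) assignments++;                  // "Buzz"
      if (i % 3 == 0 && i % 5 == 0) assignments++;    // "FizzBuzz"
      if (assignments == 3) {
        throw new AssertionError(i + " was assigned exactly three times");
      }
      System.out.println(i + " -> " + assignments + " assignment(s)");
    }
  }
}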

Another way to think of this is a game show.  Rather than playing Jeopardy, where only the first answer given is accepted, we are playing Who Wants to Be a Millionaire, where the last answer given is accepted.  Contestants routinely change their answers but the only answer that matters is the last answer.

Tuesday, January 3, 2012

#1 - Common But Boring

Over the next year or so I intend to blog 100 solutions to the FizzBuzz problem. Each solution will vary in technique or language, sometimes both. The posts will also have different themes based on where the number lands; the plain "number" posts will simply be a solution, with the prose around it focusing on the solution itself.

To start things off I thought it was best to focus on one of the more common answers. This solution is written in Java, which as of this writing was the #1 language on the TIOBE index for December 2011. That is merely evidence that it is one of the most common languages.

public class FizzBuzz01 {
  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      if ((i % 3 == 0) && (i % 5 == 0)) {
        System.out.println("FizzBuzz");
      } else if (i % 5 == 0) {
        System.out.println("Buzz");
      } else if (i % 3 == 0) {
        System.out.println("Fizz");
      } else {
        System.out.println(i);
      }
    }
  }
}

This is not a perfectly straightforward translation of the problem statement into a solution. Without a fair amount of preparatory code you cannot translate the problem statement directly into code in the order it is presented.

This sample is about as simple as it can be done in Java. The only portion worth commenting on is how it transcribes the problem. Rather than testing the clauses in the order they are stated in the problem, we consider the conditions starting from the last one. This changes the problem from a set of "except" statements into a series of simpler if-then-else statements.
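To see why the reordering helps, here is a sketch of my own (not from the post) of what keeping the clauses in the problem's original order costs: every earlier clause has to re-check the exceptions that come after it.

public class FizzBuzzInOrder {

  public static void main(String[] args) {
    for (int i = 1; i <= 100; i++) {
      // Clause 1: "prints the numbers from 1 to 100" -- but only when no
      // later clause is going to apply, which we have to check up front.
      if (i % 3 != 0 && i % 5 != 0) {
        System.out.println(i);
      }
      // Clause 2: "for multiples of three print Fizz" -- unless the
      // later FizzBuzz clause applies.
      if (i % 3 == 0 && i % 5 != 0) {
        System.out.println("Fizz");
      }
      // Clause 3: "for the multiples of five print Buzz" -- same caveat.
      if (i % 5 == 0 && i % 3 != 0) {
        System.out.println("Buzz");
      }
      // Clause 4: "for multiples of both three and five print FizzBuzz".
      if (i % 3 == 0 && i % 5 == 0) {
        System.out.println("FizzBuzz");
      }
    }
  }
}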

This is one of the more common solutions (considering only correct solutions) presented by job candidates when asked this question in an interview. It is also one of the most boring.