By Jan Reher, Senior Developer, Systematic Software Engineering A/S | October 1, 2008
Monday, September 29, 2008
Anders Hejlsberg: C# and LINQ
Anders is the chief designer of C#. 25 years ago he implemented the famous Turbo Pascal development environment and compiler. To commemorate this jubilee, he began his talk by writing the proverbial “Hello World” program in Turbo Pascal, and then comparing the program text with the equivalent C#. C# came out as an obvious loser on qualities like succinctness and clarity. Oh well. The conclusion is that programming languages evolve very slowly, and despite all the progress that has been made in other areas, we still mainly program computers by writing text at them.
Languages evolve slowly, but frameworks, libraries and tools have evolved a lot over the past 25 years. Indeed, learning a new language is not the hardest part. It’s learning to use all the libraries, frameworks, tools, etc., that come with it.
Anders told us about three trends he sees today, and of course about how .NET, C#, and Microsoft’s offerings in general fit into this. The trends are outlined by these keywords: Declarative, Concurrent, and Dynamic.
The trends require us to introduce a new taxonomy of programming languages. The old taxonomy is breaking up.
Declarative is about moving from stating “How” to stating “What”. Domain specific languages (DSLs) are very important here, and we need to distinguish between an external DSL, which is a true language in and by itself, and an internal DSL, which is really just a fancy word for an API expressed in a “real” language with an associated usage pattern and perhaps a nice abstract syntax added on top. Languages like SQL, Unix shell scripts, and XSLT are examples of the first category, while LINQ is Microsoft’s currently most-profiled example of the latter.
This is a good place for me to pedantically point out that SQL has been living a productive life as an internal DSL inside COBOL ever since COBOL’s inception (1960-ish), and that it was only when most of us abandoned COBOL that accessing a relational database from a programming language suddenly got difficult. Now, LINQ does somewhat the same job inside C#. And it’s a Good Thing. Only, it’s not a New Thing.
Anders then did a demo of LINQ. It’s very nice. Using LINQ you can abandon a lot of tedious explicit looping and accumulating, and state what you want the program to do instead of spelling out how you want it to do it. So if you are using C#, you should start using LINQ. It will also serve as a gentle introduction to things like closures and lambda expressions, which will do you no end of good.
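LINQ itself lives in C#, but the flavor of that shift from “how” to “what” can be sketched in Java’s stream API (a later addition to the JVM; the names and data here are my own, invented for illustration):

```java
import java.util.List;
import java.util.stream.Collectors;

public class DeclarativeSketch {
    public static void main(String[] args) {
        List<String> names = List.of("Anders", "Ben", "Bill", "Patrick", "Sam");

        // Imperative: spell out the looping and accumulating by hand.
        StringBuilder shortNamesLoop = new StringBuilder();
        for (String n : names) {
            if (n.length() <= 4) {
                if (shortNamesLoop.length() > 0) shortNamesLoop.append(", ");
                shortNamesLoop.append(n.toUpperCase());
            }
        }

        // Declarative: state what you want, not how to compute it.
        String shortNames = names.stream()
                .filter(n -> n.length() <= 4)
                .map(String::toUpperCase)
                .collect(Collectors.joining(", "));

        System.out.println(shortNamesLoop);  // BEN, BILL, SAM
        System.out.println(shortNames);      // BEN, BILL, SAM
    }
}
```

The declarative pipeline also happens to be built from exactly the lambda-and-closure vocabulary Anders recommends learning.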
Functional programming is at long last becoming mainstream. We still need some amount of side effects to interface the functional program with the external world. Microsoft’s contender in the arena is F#, which is a mix of many Good Old Things presented as a nice package. It’s at heart a functional programming language in the ML tradition but with object orientation added on top (or on the side?). F# provides full access to the .NET CLR and the .NET libraries, and it’s fully supported by Visual Studio. Here, we got another nice demo. I got the impression from Anders that F# can do all the things that C# can, and then some. And using it will make the well-educated programmer more productive than C# will. So do we need C# anymore? I would be happy to abandon it; to me it has always seemed like C++ with water in it.
Dynamic as opposed to static is about deferring type checking until runtime instead of having the compiler verify everything up front. A related comfort, available even in statically typed C#, is type inference: the compiler deduces the types of expressions in your code instead of you tediously telling it e.g. that when you add two integers, you get an integer. Duh.
But we probably need both, depending on the task at hand. Metaprogramming is perhaps the greatest advantage of dynamically typed code, and it is sometimes nigh indispensable. But it is also a dangerous tool. Anders showed us how to do dynamic programming in today’s C#. It is just as unpleasant and ugly as doing it in Java is, but people are working on ways of improving the programming experience.
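To make the unpleasantness concrete, here is a minimal sketch of dynamic-style invocation in plain Java, using reflection (this is my example, not one from the talk):

```java
import java.lang.reflect.Method;

public class ReflectionSketch {
    public static void main(String[] args) throws Exception {
        // Statically typed: the compiler checks the call for us.
        String greeting = "hello".toUpperCase();

        // "Dynamic": look the method up by name at runtime. One typo
        // in the string and we find out when the program runs, not
        // when it compiles.
        Object receiver = "hello";
        Method m = receiver.getClass().getMethod("toUpperCase");
        Object result = m.invoke(receiver);

        System.out.println(greeting);  // HELLO
        System.out.println(result);    // HELLO
    }
}
```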
Concurrency is about the imminent multi-core revolution. Some people are beginning to take notice. I can afford to remain ignorant and lazy for a few more years, but concurrency is coming to my run-of-the-mill project too some day. And to yours.
Concurrency used to mean time-sharing several processes on a single CPU, and today we have several ways of coping with this. But now concurrency means spreading one process onto several CPUs, and that is something else entirely. Another way of expressing this is that we are moving from coarse-grained concurrency to fine-grained concurrency.
People at Microsoft are working on parallel extensions to .NET. One of these is PLINQ, of which Anders gave another nice demo. The point here is that since (P)LINQ is declarative and does not tell the computer how to compute a result, it is possible for the runtime to exploit latent parallelism in the code, and execute it concurrently, with a significant performance gain. That exploitation does not occur all by itself, but I, as an ordinary programmer, can rely on work done by people more brilliant than myself, and reap some of the benefits of concurrency just by adding a few incantations to my code. Anders called this “snake oil”, and it doesn’t always solve your real world problems. But trying it is easy and there are no ill side effects (pun intended).
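PLINQ is .NET-only, but the idea — flip a declarative query into a parallel one without restating the algorithm — can be sketched with Java’s parallel streams (my analogy, not Anders’s, and again using an API that arrived on the JVM later):

```java
import java.util.stream.LongStream;

public class ParallelSketch {
    public static void main(String[] args) {
        // Sequential: a declarative pipeline over a range of numbers.
        long evens = LongStream.rangeClosed(1, 10_000_000)
                .filter(n -> n % 2 == 0)
                .count();

        // Parallel: one added incantation. The runtime can partition
        // the work across cores because the pipeline never said *how*
        // to iterate.
        long evensParallel = LongStream.rangeClosed(1, 10_000_000)
                .parallel()
                .filter(n -> n % 2 == 0)
                .count();

        System.out.println(evens);          // 5000000
        System.out.println(evensParallel);  // 5000000
    }
}
```

As with PLINQ, the incantation is only safe because the pipeline stages are free of side effects.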
So, these are interesting times for programming languages. Lots of new things are going on. Perhaps it is time my company considered introducing new paradigms and new languages for our next project.
And thanks for all the demos. I do not understand why most presenters prefer to show code stale and frozen into Powerpoint when they can show it alive and sizzling in an IDE. Anders did this very well.
Patrick Linskey: Designing for Scalability
Every architecture scales nicely at the beginning. Sooner or later, every architecture hits a bottleneck.
Trains and airplanes are a good analogy here. Both are intended to move things from place to place but they do it in very different ways, and their scalability bottlenecks are not the same. (Think about it for a bit.)
Our job is to design systems without important bottlenecks, or, failing to predict these, to remove the bottlenecks when we hit them.
There are two kinds of bottlenecks:
Artificial bottlenecks are not really bottlenecks, and they can be removed simply by work. A very common one is inefficiently written code. There is a lot of that out there and we should start by removing that. But this is cheating, really, since we are not removing real problems; we are just cleaning up the room so the real problems become visible. This is where the hard work begins.
Intrinsic bottlenecks are truly part of the domain, and there are some patterns that can be applied to solve them.
One is Divide and Conquer. Old hat, but tried and true.
Another one is to change the rules of the game so that success in terms of scalability gets gauged in a new manner. This is also called “adapting your requirements” and has more to do with social engineering than with computer science.
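A hypothetical Java example of an artificial bottleneck of the first kind: building a large string by repeated concatenation is quadratic, and the fix is pure housekeeping, not architecture.

```java
public class BottleneckSketch {
    public static void main(String[] args) {
        int n = 10_000;

        // Artificial bottleneck: each += copies the whole string
        // accumulated so far, O(n^2) work overall.
        String slow = "";
        for (int i = 0; i < n; i++) {
            slow += "x";
        }

        // Removed "simply by work": an O(n) StringBuilder.
        StringBuilder fast = new StringBuilder();
        for (int i = 0; i < n; i++) {
            fast.append("x");
        }

        System.out.println(slow.equals(fast.toString()));  // true
    }
}
```

Only once this kind of cheap win is taken do the intrinsic, domain-level bottlenecks become visible.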
Language-wise, you need to choose a language that has parallelism built in. Patrick’s talk was scoped exclusively within the Java Virtual Machine, and his message was: Do not use Java if you want your enterprise application to scale. Java has been patched over the years to deal with scalability, but the result is incoherent and difficult to use. Use a language designed with concurrency in mind, such as Scala on the JVM, or Erlang elsewhere.
Ben Goodger: Google Chrome
The talk was about the design of the user experience of Google’s new browser. It was almost not at all about the technical stuff employed to achieve that user experience. This was nice.
Google wanted to produce a browser with a minimal user interface. Not a totally invisible user interface but something very little in-your-face.
To achieve that, they set up some principles and applied a lot of care. The principles are these:
- Let people do what they want to do
- Don’t force people to learn new things
- Make actions feel instant
- Reduce the number of popups
With these principles in place to act as a value set for designs to be evaluated against, the rest of the effort was apparently about care, about attention to details, and about making a lot of little things work. Some highlights:
- One text box is used for both URL entry and searching. That way I don’t need to know which is which.
- Providing a very loose, fuzzy and forgiving language for data entry. That way it feels as if the browser is really trying to understand me.
- Provide many paths to achieving any given goal. That way I won’t need to hunt around.
- Be helpful but don’t get in the user’s way. I’m here to browse content, I’m not here to run Chrome.
- Provide fault tolerance: The web is a harsh place.
I think this talk has a bearing on all of us who try to produce solutions that provide a good user experience (and don’t we all?). Like Google Earth and the Google website itself, Google Chrome sets a standard that I would be proud to copy.
Sam Aaron: Aesthetic Programming with Ruby
I am almost totally ignorant when it comes to Ruby but I have strong opinions about aesthetics in general, and I care for the look and feel of code. So I chose this presentation over another one that may have taught me something more concrete and practical, but probably would not have changed or challenged me as a human being.
The word “aesthetics” has many definitions. Sam presented a few before settling for Thomas Aquinas’ triangle of proportion, clarity and integrity. This definition is really more about beauty than aesthetics, and I will contend that the two concepts are not the same. But proportion, clarity, and integrity will serve us nicely as three metrics for this quality we are trying to achieve—whatever we call it.
The problem is that there is a lot of unaesthetic code out there. Code that has neither proportion, clarity, nor integrity. This is a problem because many stakeholders need to look at code. One of these stakeholders is the compiler, which doesn’t care about aesthetics. But most other stakeholders are human, and this is where an aesthetic approach to writing code becomes important. In the words of Donald Knuth: code is a form of literature, addressed mainly to people. Like all other literature, it is therefore important that the message of the code be expressed so that readers can understand it.
Much to my delight Sam then dragged Douglas Hofstadter’s book “Gödel, Escher, Bach” onto the scene, and using the pq-system as an example, talked a bit about formal systems and isomorphisms. The point (Hofstadter’s point) is that formal systems have no intrinsic meaning but that isomorphisms introduce meaning to people. And the better isomorphism we apply, the better people will understand the intended meaning. “Gödel, Escher, Bach” is a wonderful book. Especially for computer people. Hofstadter states that the best classical music, both when performed and when expressed in musical notation, forms a Beautiful Aperiodic Crystal of Harmony (note the capitals). I wish someone would say that about my code.
So, back to Earth: writing aesthetic code is about creating isomorphisms in the code, thereby expressing meaning.
By the way, Sam lamented that word, “code”. Why do we call it “code”? It makes it sound as if it were somehow secret or esoteric. Other professions use formal notations too (musicians, chemists, trampoline gymnasts) but they don’t call it code.
Back to Ruby: Sam claimed that the use of intrinsic Domain Specific Languages supports the creation of elegant solutions, because by using a DSL, you can express a solution that is isomorphic to the problem.
Any DSL comes with a managerial overhead, and the size of this varies with the language. Sam’s claim was that Ruby minimises this overhead.
It is my experience that most good solutions are expressed in ways that are isomorphic to the problem domain. Some call this model-driven programming. Thinking in terms of DSLs can be a help but it is not a requirement. Any time I design a class intended to model a phenomenon in the problem domain, I introduce an isomorphism, and thereby bestow meaning to my program. And this is Good.
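A trivial sketch of what I mean, in Java rather than Ruby (the names are mine, invented for illustration): the class mirrors one phenomenon in the problem domain, so the code that uses it reads almost like a statement about the domain.

```java
public class DomainSketch {
    // The class is the isomorphism: one domain concept, one type.
    record Flight(String from, String to, int seatsLeft) {
        boolean hasRoomFor(int passengers) {
            return seatsLeft >= passengers;
        }
    }

    public static void main(String[] args) {
        Flight flight = new Flight("CPH", "AAR", 2);
        // Reads almost like the domain statement it models.
        System.out.println(flight.hasRoomFor(3));  // false
        System.out.println(flight.hasRoomFor(2));  // true
    }
}
```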
Back to the three “metrics”. Using a real system as an example, Sam argued that different parts of a system should frequently exhibit different amounts of the three metrics. And possibly be expressed in different languages. He suggested a 3+1 layered architecture: At the bottom is the stable layer, which must indeed be stable. On top of that we build the dynamic layer whose job is to provide flexibility despite the unchanging and stable layer it is built upon. Further on top goes the domain layer where the DSL lives and provides the solution. At the pinnacle, perhaps, lies the poetry layer.
It occurred to me later that I should have started a discussion with Sam about design patterns. He did not mention patterns at all. I think patterns have much to contribute when it comes to achieving qualities like beauty and meaning in code.
This was a good talk. It was inspiring and it had a message. I don’t think it had much to do with Ruby, really—Sam’s points should be applied to code no matter what language you use. But, as I said, I am almost totally ignorant when it comes to Ruby.
Bill Venners: Scala
I did not like this presentation. Bill treated Java as if it were an intrinsic and venerable component of the human condition, and as if the JVM were the only platform in the world. The Scala programming language was explained and evaluated against this single baseline.
It left me with the impression that Scala has nothing to offer that other languages have not offered for years. Smalltalk, C, and ML come to mind.
So Scala may be able to provide new insights to some people but only if these people know no other programming language than Java.
Or maybe I got it all wrong. In which case I apologise.