Can Java Be Saved?

Posted on November 09, 2009 by Scott Leberknight

Java and Evolution

The Java language has been around for a pretty long time, and in my view is now a stagnant language. I don't consider it dead, because I believe it will be around for decades if not longer. But it appears to have reached its evolutionary peak, and it doesn't look like it's going to evolve any further. This is not due to problems inherent in the language itself. Instead the problem seems to lie with Java's stewards (Sun and the JCP) and their unwillingness to evolve the language to keep it current and modern, and more importantly their goal of keeping backward compatibility at all costs. And it's not just Sun: the large corporations with correspondingly large investments in Java, like IBM and Oracle, aren't exactly champing at the bit to improve Java either. I don't even know if they think it needs improvement at all. So really, from my admittedly limited view of things, the ultra-conservative attitude toward change and evolution is the problem with Java.

That's why I don't hate Java. But I do hate the way it has been treated by the people charged with improving it. It is clear many in the Java community want things like closures and a native property syntax, but instead we got Project Coin. This, to me, is really sad. It is a shame that things like closures and native properties were not addressed in Java/JDK/whatever-it-is-called 7.

Why Not?

I want to know why Java can't be improved. We have concrete examples showing it is possible to change a major language in major ways, even in ways that break backward compatibility, in order to evolve and improve. Out with the old, in with the new. Microsoft showed with C# that you can successfully evolve a language over time in major ways. For example, C# has always had a property syntax, but it now also has many features found in dynamically typed and functional languages, such as type inference and, effectively, closures. With LINQ it introduced functional concepts. When C# added generics it did so correctly, retaining the type information in the compiled IL, whereas Java used type erasure and simply dropped the types from the compiled bytecode. There is a great irony here: though C# began life about five or six years after Java, it has not only caught up with but surpassed Java in most if not all ways, and has continued to evolve while Java has stagnated.

C# is not the only example. Python 3 is a major overhaul of the Python language, and it introduced breaking changes that are not backwards compatible. I believe they provide a migration tool to assist you should you want to move from the 2.x series to version 3 and beyond. Microsoft has done this kind of thing as well. I remember when they made Visual Basic conform to the .NET platform and introduced some rather gut-wrenching (for VB developers anyway) changes, and they also provided a tool to aid the transition. A more recent example is Objective-C, which has experienced a resurgence in importance mainly because of the iPhone. Objective-C has been around since the 1980s, longer than Java, C#, Ruby, and Python. Apple has made improvements to Objective-C: it now sports a way to define and synthesize properties, and most recently added blocks (effectively closures). If a language that pre-dates Java (Python also pre-dates Java, by the way) can evolve, I just don't get why Java can't.

While it is certainly possible to remain on older versions of software, forcing yourself to upgrade can be a Good Thing, because it ensures you don't get the "COBOL Syndrome," where you end up with nothing but binaries that must run on a specific hardware platform forever, and you are trapped until you rewrite or go out of business. The other side of this, of course, is that organizations don't have infinite time, money, and resources to update every single application. Sometimes this too can be good, because it forces you to triage older systems, and possibly consolidate or outright eliminate them if they have outlived their usefulness. To facilitate large transitions, I believe it is very important to provide tools that help automate the upgrade process, e.g. tools that analyze code and fix it where possible (reporting all changes in a log) and that provide warnings and guidance when a simple fix isn't possible.

The JVM Platform

Before I get into the changes I'd make to Java so that developing in it doesn't feel like typing masses of unnecessary boilerplate while wearing a straitjacket, I want to say that I think the JVM is a great place to be. The JVM itself clearly facilitates developing all kinds of languages, as evidenced by the huge number of languages that run on it. The most popular and most interesting ones these days are probably JRuby, Scala, Groovy, and Clojure, though there are probably hundreds more. So I suppose you could argue that Java doesn't need to evolve any more because we can simply use a more modern language that runs on the JVM.

The main problem I have with that argument is simply that there is already a ton of Java code out there, and many organizations are simply not going to allow other JVM-based languages; they're going to stick with Java for the long haul, right or wrong. This means that even if you can manage to convince someone to try writing that shiny new web app using Scala and its Lift framework, JRuby on Rails, Grails, or Clojure, chances are at some point you'll also need to maintain or enhance existing large Java codebases. Wouldn't you like to first be able to upgrade to a version of Java that has closures, native property syntax, method/property handles, etc.?

Next, here are my top three choices for making Java much better immediately.

Top Three Java Improvements

If given the chance to change just three things about Java to make it better, I would choose these:

  • Remove checked exceptions
  • Add closures
  • Add formal property support

I think these three changes alone would make coding in Java much, much better. Let's see how.

Remove Checked Exceptions

By removing checked exceptions you eliminate a ton of boilerplate try/catch clauses that do nothing except log a message, wrap and re-throw as a RuntimeException, pollute the API with throws clauses all over the place, or, worst of all, the empty catch blocks that can cause very subtle and evil bugs. With unchecked exceptions, developers still have the option to catch the exceptions they can actually handle. It would be interesting to see how many times in a typical Java codebase people actually handle exceptions and do something at the point of the exception, versus simply punting it away for the caller to handle, who in turn also punts, and so forth all the way up the call stack until some global handler catches it or the program crashes. If I were a betting man, I'd bet a lot of money that in most applications, developers punt the vast majority of the time. So why force people to handle something they cannot possibly handle?
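To make the complaint concrete, here is the wrap-and-rethrow dance in miniature. This is a minimal sketch; the loadConfig method and its behavior are made up purely for illustration:

```java
import java.io.IOException;

public class ConfigLoader {

    // Typical checked-exception boilerplate: we can't actually recover from
    // an IOException here, so we log-wrap-punt to the caller as a RuntimeException.
    public static String loadConfig(boolean simulateFailure) {
        try {
            if (simulateFailure) {
                throw new IOException("config file not found");
            }
            return "config-data";
        } catch (IOException e) {
            // wrap and re-throw: the pattern repeated all over Java codebases
            throw new RuntimeException("Unable to load config", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(loadConfig(false));
        try {
            loadConfig(true);
        } catch (RuntimeException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

With unchecked exceptions, the try/catch in loadConfig simply disappears and only callers that can genuinely recover bother to catch anything.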

Add Closures

I specifically listed removing checked exceptions first because, to me, it is the first step toward a closure/block syntax that isn't totally horrendous. If you remove checked exceptions, adding closures becomes much easier, since you don't need to worry about what exceptions could possibly be thrown and there is obviously no need to declare them. Closures/blocks would lead to a better ability to work with collections, for example as in Groovy, except that in Java you would still have types (note I'm also using a literal property syntax here):

// Find all people whose last name is "Smith"
List<Person> peeps = people.findAll { Person person -> person.lastName.equals("Smith"); }

// Create a list of names by projecting the name property of a bunch of Person objects
List<String> names = people.collect { Person person -> person.name; }

Not quite as clean as Groovy, but still much better than the for loops that would traditionally be required (or than trying to shoehorn a functional style into Java using the Jakarta Commons Collections or Google Collections). Removing checked exceptions would, as mentioned earlier, free the block syntax from having to declare exceptions all over the place. Having to declare checked exceptions in blocks makes the syntax worse instead of better, at least judging by the various closure proposals for Java/JDK/whatever 7 that did not get included. Requiring types in the blocks is still annoying, especially once you get used to Ruby and Groovy, but it would be passable.
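For comparison, here is roughly what findAll costs in today's Java: a hand-rolled single-method interface plus an anonymous inner class. All the names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class FindAllExample {

    // The kind of single-method interface you must define (or pull from a
    // library) to fake closures in current Java
    interface Predicate<T> {
        boolean apply(T item);
    }

    static <T> List<T> findAll(List<T> items, Predicate<T> predicate) {
        List<T> result = new ArrayList<T>();
        for (T item : items) {
            if (predicate.apply(item)) {
                result.add(item);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("Smith", "Jones", "Smith");
        // An anonymous inner class: five lines of ceremony for one expression
        List<String> smiths = findAll(names, new Predicate<String>() {
            public boolean apply(String name) {
                return name.equals("Smith");
            }
        });
        System.out.println(smiths);  // [Smith, Smith]
    }
}
```

The block proposals above collapse those five lines of anonymous-class ceremony into a single expression.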

Native Property Syntax

The third change should do essentially what Groovy does for properties, but should introduce a "property" keyword (i.e. don't rely on whether someone accidentally put an access modifier in there, as Groovy does). The syntax could be very clean:

property String firstName;
property String lastName;
property Date dateOfBirth;

The compiler could automatically generate the appropriate getter/setter for you, like Groovy does, which obviates the need to code them manually. As in Groovy, you should be able to override either or both. This de-clutters code enormously and removes a ton of lines of silly getter/setter code (plus the JavaDocs, if you are actually still writing them for every get/set method). Then you could reference properties as you would expect: person.name is the "getter" and person.name = "Fred" is the "setter." Much cleaner syntax, way less boilerplate code. By the way, if someone used the word "property" in their code, e.g. as a variable name, it is just not that difficult to rename-refactor, especially with all the advanced IDEs in the Java community that do this kind of thing in their sleep.
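For reference, this is the boilerplate the property keyword would eliminate; two one-line property declarations replace all of this (a stripped-down sketch, not the full three-property example):

```java
public class Person {

    // Each logical property balloons into a field plus a getter/setter pair
    private String firstName;
    private String lastName;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public static void main(String[] args) {
        Person person = new Person();
        person.setLastName("Fred");  // versus the proposed person.lastName = "Fred"
        System.out.println(person.getLastName());
    }
}
```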

Lots of other things could certainly be done, but if just these three were done I think Java would be much better off, and maybe it would even come into the 21st century like Objective-C. (See the very long but very good Ars Technica Snow Leopard review for information on Objective-C's new blocks feature.)

Dessert Improvements

If (as I suspect they certainly will :-) ) Sun/Oracle/whoever takes my suggestions and makes these changes and improves Java, then I'm sure they'll want to add in a few more for dessert. After the main course which removes checked exceptions, adds closures, and adds native property support, dessert includes the following:

  • Remove type-erasure and clean up generics
  • Add property/method handles
  • String interpolation
  • Type inference
  • Remove "new" keyword

Clean Up Generics

Generics should simply not remove type information when compiled. If you're going to have generics in the first place, do it correctly and stop worrying about backward compatibility. Keep type information in the bytecode, allow reflection on it, and allow me to instantiate a "new T()" where T is some type passed into a factory method, for example. I think an improved generics implementation could basically copy the way C# does it and be done.
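As a sketch of the pain erasure causes: because T is gone at runtime, a generic factory cannot say "new T()" and has to be handed the Class object explicitly. The Factory class here is made up for illustration:

```java
public class Factory {

    // You can't write "new T()" because T is erased at compile time; the
    // standard workaround is to smuggle the Class object in at runtime
    static <T> T create(Class<T> type) {
        try {
            return type.newInstance();
        } catch (Exception e) {
            throw new RuntimeException("Could not create " + type.getName(), e);
        }
    }

    public static void main(String[] args) {
        StringBuilder sb = Factory.create(StringBuilder.class);
        System.out.println(sb.getClass().getName());  // java.lang.StringBuilder
    }
}
```

With reified generics, as in C#'s IL, the Class parameter would be unnecessary because T itself would carry the type at runtime.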

Property/Method Handles

Property/method handles would allow you to reference a property or method directly. Code that today must use strings would become strongly typed and refactoring-safe (IDEs like IntelliJ already know how to search in text and strings, but they can never be perfect). For example, a particular pet peeve of mine, and I'm sure of a lot of other developers, is writing Criteria queries in Hibernate. You are forced to reference properties as simple strings. If the lastName property is changed to surname, then you'd better make sure to catch all the places the String "lastName" is referenced. So you could replace code like this:

session.createCriteria(Person.class)
	.add(Restrictions.eq("lastName", "Smith"))
	.addOrder(Order.asc("firstName"))
	.list();

with this using method/property handles:

session.createCriteria(Person.class)
	.add(Restrictions.eq(Person.lastName, "Smith"))
	.addOrder(Order.asc(Person.firstName))
	.list();

Now the code is strongly typed and refactoring-safe. JPA 2.0 tries mightily to overcome having strings in its new criteria query API with its metamodel. But I find the result pretty much appalling to even look at, what with having to create or code-generate a separate "metamodel" class, which you reference like "Person_.lastName" or some similar awful way. This metamodel class exists only to represent the properties of your real model object, for the sole purpose of making JPA 2.0 criteria queries strongly typed. It just isn't worth it and is total overkill. In fact, it reminds me of the bad old days of rampant over-engineering in Java (which apparently is still alive and well in many circles, though I try to avoid it as best I can). The right thing is to fix the language, not to invent something that adds yet more boilerplate and more complexity to an already overcomplicated ecosystem.

Method handles could also be used to make calling methods using reflection much cleaner than it currently is, among other things. Similarly it would make accessing properties via reflection easier and cleaner. And with only unchecked exceptions you would not need to catch the four or five kinds of exceptions reflective code can throw.
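Here is the current state of affairs that handles would improve on: even a trivial reflective call drags in several distinct checked exceptions. A small self-contained sketch:

```java
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class ReflectionExample {

    // Reflectively call length() on a String; note how many distinct
    // checked exceptions the reflection API forces us to catch
    static int invokeLength(String s) {
        try {
            Method length = String.class.getMethod("length");
            return (Integer) length.invoke(s);
        } catch (NoSuchMethodException e) {
            throw new RuntimeException(e);
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        } catch (InvocationTargetException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(invokeLength("hello"));  // 5
    }
}
```

A first-class method handle plus unchecked exceptions would reduce this whole dance to a single expression.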

String Interpolation

String interpolation is like the sorbet that you get at fancy restaurants to cleanse your palate. This would seem to be a no-brainer to add. You could make code like:

log.error("The object of type ["
    + foo.getClass().getName()
    + "] and identifier ["
    + foo.getId()
    + "] does not exist.", cause);

turn into this much more palatable version (using the native property syntax I mentioned earlier):

log.error("The object of type [${foo.class.name}] and identifier [${foo.id}] does not exist.", cause);
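Until something like that arrives, the closest built-in approximation is String.format, which at least beats raw concatenation even though its placeholders are positional rather than named. The foo object here is a stand-in for the domain object in the example above:

```java
public class InterpolationExample {

    public static void main(String[] args) {
        Object foo = new Object();
        Long id = 42L;
        // String.format: closer to interpolation than + concatenation,
        // though %s/%d placeholders are positional, not named
        String message = String.format(
                "The object of type [%s] and identifier [%d] does not exist.",
                foo.getClass().getName(), id);
        System.out.println(message);
    }
}
```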

Type Inference

I'd also suggest adding type inference, if only for local variables like C# does. Why do we have to repeat ourselves? Instead of writing:

Person person = new Person();

why can't we just write:

var person = new Person();

I have to believe the compiler and all the tools are smart enough to infer the type from "new Person()", especially since other strongly-typed JVM languages like Scala do exactly this kind of thing.

Eliminate "new"

Last but not least, and actually not the last thing I can think of but definitely the last I'm writing about here, let's get rid of the "new" keyword and either go with Ruby's new method or Python's constructor syntax, like so:

// Ruby-like new method
var person = Person.new()

// or Python-like construction
var person = Person()

This one came to me recently after hearing Bruce Eckel give an excellent talk on language evolution and archaeology. He had a ton of really interesting examples of why things are the way they are, and how Java and other languages like C++ evolved from C. One example was the reason for "new" in Java. In C++ you can allocate objects on the stack or the heap, so there is a stack-based constructor syntax that does not use "new," while the heap-based constructor syntax uses the "new" operator. Even though Java only has heap-based object allocation, it retained the "new" keyword, which is not only boilerplate but also makes the entire process of object construction pretty much fixed: you cannot change anything about it, nor can you easily add hooks into the object creation process.

I am not an expert at all in the low-level details, and Bruce obviously knows what he is talking about way more than I do, but I can say that I believe the Ruby and Python syntaxes are not only nicer but more internally consistent, especially in the Ruby case, because there is no special magic or sauce going on. In Ruby, new is just a method on a class, like everything else.
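A small sketch of what you gain when construction is an ordinary method: a static factory (the closest Java analog to Ruby's new) gives you a natural place to hook into the creation process, which the new operator never allows. The Connection class and its instance counting are made up for illustration:

```java
public class Connection {

    private static int created = 0;
    private final int id;

    private Connection(int id) {
        this.id = id;
    }

    // Ruby-style: construction is just a method, so we can hook into it.
    // Here we number instances; we could as easily cache them, pool them,
    // or return a subclass, none of which "new" permits.
    public static Connection create() {
        created++;
        return new Connection(created);
    }

    public int getId() { return id; }

    public static void main(String[] args) {
        Connection first = Connection.create();
        Connection second = Connection.create();
        System.out.println(first.getId() + ", " + second.getId());  // 1, 2
    }
}
```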

Conclusion to this Way Too Long Blog Entry

I did not actually set out to write a blog entry whose length is worthy of a Ted Neward post. It just turned out that way. (And I do in fact like reading Ted's long posts!) Plus, I found out that speculative fiction can be pretty fun to write, since I don't think pretty much any of these things are going to make it into Java anytime soon, if ever, and I'm sure the many people in the Java world who hate things like Ruby won't agree anyway.

Several Must Have Firebug-Related Firefox Extensions

Posted on September 28, 2009 by Scott Leberknight

Last week while doing the usual (web development stuff) I discovered a few Firefox extensions I didn't even know I was missing until I found them by accident. The "accident" happened while adding Firebug to a copy of Firefox running in a VMware Fusion virtual machine in which I was testing on, gasp, Windows. I went to find add-ons and searched for Firebug. Up came not only Firebug but also results for Firecookie, Firefinder, Inline Code Finder for Firebug, and CodeBurner for Firebug.

Of course everyone doing web development uses Firebug (or really should anyway) since it rules. But these other extensions provide some really nice functionality and complement Firebug perfectly. Here's a quick rundown:

Firecookie

Firecookie lets you see all the cookies for a site, add new ones, remove existing cookies, etc. It gives useful information about each cookie like the name, value, raw value (if URI-encoded), domain, size, path, expiration, and security. Very cool.

Firecookie Firefox Add-On

Firefinder

Firefinder for Firebug lets you search for elements on a page using either CSS expressions or an XPath query. In the list of matching elements, you can expand each result, inspect the element by clicking the "Inspect" link, or click "FriendlyFire," which copies the content you're looking at and posts it up to JS Bin. (Be careful with this one if you have code you'd rather not send over the wire to a different web site.) Firefinder also puts a dashed border around each matching element it found, and as you hover over search results, it highlights the matching element in the page. This is really useful when you want to find all elements matching a CSS expression or would like to use XPath to find specific elements. Nice.

Firefinder Firefox Add-On

Inline Code Finder for Firebug

The Inline Code Finder does just that: it finds inline CSS styles, JavaScript links, and inline events, and reports the number of each in its results pane. Even better, it highlights each of these problems on the page you are viewing with a thick red border, and as you hover over them it shows you what the problem is in a nice tooltip. This is a real help in writing more unobtrusive JavaScript and avoiding inline styles. For older sites, or sites that weren't designed with unobtrusiveness in mind, though, be warned that there might be a lot of red on the page!

Inline Code Finder Firefox Add-On

CodeBurner for Firebug

CodeBurner for Firebug provides an inline HTML and CSS reference within Firebug. It allows you to search for HTML elements or CSS styles and shows a definition and an example. It also provides links to the awesome SitePoint reference, and even to the SitePoint live demos of the feature you are learning about. Having HTML and CSS references directly within Firebug is so unbelievably useful it isn't even funny. Thanks, SitePoint.

CodeBurner Firefox Add-On

Migrating Macs When You Use FileVault

Posted on September 21, 2009 by Scott Leberknight

Recently I got a brand spankin' new MacBook Pro (MBP) to replace my three-year-old one. One of the things I did not want to deal with was setting up a new computer from scratch, reconfiguring and reinstalling everything I already had on my old MBP. I have things pretty well organized, I know where things are, and I remember that when I rebuilt from scratch going from Tiger to Leopard, I lost a bunch of settings and such because I hadn't used any kind of migration software. Not this time. This time I wanted my new computer to essentially be my old computer, but with more memory and faster, with no hassles.

There are a few options I considered. The first was to simply take a SuperDuper backup and restore it directly to the new MBP. I researched this a lot, talked to people who had done it both successfully and unsuccessfully, and talked to a Genius at the Apple Store. The second option was to use Migration Assistant to transfer from old to new. After all the research, I decided against restoring from SuperDuper for several reasons. First, it seemed to be hit or miss: some people reported success and some did not. Second, and more importantly, I was transferring from a circa August 2006 MBP, in fact one of the first revs with Intel chips inside, to a circa June 2009 MBP with the new trackpad and all the other updated jazz. I was mostly concerned that there could be enough difference between the old and new computers' hardware and software drivers that a direct restore would not contain up-to-date drivers and could screw things up. So I chose to explore Migration Assistant, which promised a very easy transfer of all files and settings.

Then I ran into the first snag: Migration Assistant does not transfer FileVault-ed user accounts, and I happen to use FileVault. Doh. So I thought, "OK, no problem, I'll simply turn off FileVault on the old machine first." Except that I only had about 7GB of space left on my hard drive, and turning FileVault off needs about as much free space as your home directory currently takes up. So if my home directory is currently 50GB, I need about 50GB free. Double doh, because I didn't have that kind of space! To solve all these problems, here is what I did, for anyone else who might be trying to do the same thing:

Step 1 - Backup Original Mac

Make a SuperDuper backup of the old MBP onto a portable external hard drive. (I actually made two copies on two different external hard drives, just in case something got screwed up later.) The external drive should contain plenty of space, enough so you can turn off FileVault on your home directory.

Step 2 - Turn off FileVault on External Drive

Restart the old Mac and boot from the SuperDuper backup. (Hold down the option key during startup to get the screen where you can choose to boot from the Mac hard drive or from the SuperDuper backup on the external drive.) Note that at this point you are running Mac OS X from the external hard drive. Log into the FileVault-ed account, go to System Preferences, and turn FileVault off. (This is why you needed to make sure the external drive has enough space to turn off FileVault: you are turning it off there, not on your Mac's hard drive!) Now go grab something to eat and/or watch a movie, as this step can take a few hours depending on how much data you have.

Once complete, log out and shut down the external drive. You are now ready to transfer.

Step 3 - Prepare for Migration Assistant on New Mac

Log into your new Mac and connect the external drive containing the freshly un-FileVault-ed SuperDuper bootable backup of the original Mac. If you or someone else already set up a user account on the new Mac with the same username as your old account, create a new account with Administrator privileges (e.g. "MigrationAccount") and then delete the account that you will be transferring to. For example, if your old Mac account username was "bsmith" and the new Mac has an account with the same username, remove the "bsmith" account on the new Mac. Otherwise, when you attempt the migration you might receive the message I got the first time I tried: "There is an existing user account with the same name as an account you are transferring." That wouldn't have been an issue, except that the option to replace the account was disabled, so Migration Assistant was refusing to overwrite it. Thus you should delete it first.

Step 4 - Run Migration Assistant

On the new Mac, run Migration Assistant and answer the questions. You'll be transferring from the external SuperDuper backup to the new Mac. The questions are easy and straightforward. Go grab something to drink or watch some TV, as it'll take a while to transfer all your old files and settings from the external drive to the new Mac. I'll just assume everything went well, because it did for me. If something went wrong, well, I don't have answers other than maybe to try, try, try again from scratch.

Step 5 - Turn FileVault Back On

On the new Mac, turn FileVault on for the newly migrated account; e.g. if the "bsmith" account previously had FileVault on, then log in as "bsmith" and turn on FileVault in System Preferences. Tick tock. More waiting, as Mac OS X encrypts all the data in your home directory. Once this process completes, your new Mac should be pretty much the same as your old Mac, with all the same files and settings (Desktop, Screen Saver, etc.) and with all your applications transferred successfully. And you are back up and running with FileVault enabled.

Step 6 - Secure Erase External Hard Drive

At this point you have the new Mac set up with FileVault, and the old Mac still has FileVault on as well, since you migrated from the external drive. But the external drive now has unencrypted data sitting on it, since you turned FileVault off there. Open up Disk Utility and do a secure erase of the un-FileVault-ed external drive. Depending on which option you choose, e.g. "Zero Out Data," "7-Pass Erase," or "35-Pass Erase," the erase process can take a long time, as in days. A 7-pass erase overwrites every part of the disk seven times to sanitize it and make recovery of the unencrypted information much harder, and it takes seven times longer than just zeroing out the data, which writes zeroes over the disk once.

I only did a zero-out of the data, because I knew I was going to immediately overwrite that external drive with a new SuperDuper backup once I was done. If you need more insurance than that, a 7-pass erase conforms to the DoD 5220.22-M specification, which is probably good enough. (Actually, I started out using the 7-pass erase until I saw it was going to take a day or two, and then I got a tad lazy and just did the zero-out. Perhaps that is bad, but I didn't feel like waiting that long, and it's not like I have data for 100,000 employees in an Excel spreadsheet on my hard drive anyway. Just a lot of code and presentations and such, really.)

Step 7 - Backup New Mac

Make a fresh SuperDuper backup of the new Mac.

Coda

Although all the above sounds like it took a long time, most of the waiting was due to turning FileVault off and back on and doing the secure erase. Migration Assistant itself takes a fair amount of time but is totally worth it. Overall, the entire process from start to finish, on an old MBP with a 100GB hard drive containing only 7GB of free space, took between five and six hours, which is way less than if I had tried to start over from scratch on the new Mac. I do wonder why Apple cannot just allow Migration Assistant to transfer accounts with FileVault enabled, because then pretty much all you'd need to do is run Migration Assistant directly, and you wouldn't need to go through all this drama.

Sorting Collections in Hibernate Using SQL in @OrderBy

Posted on September 15, 2009 by Scott Leberknight

When you have collections of associated objects in domain objects, you generally want to specify some kind of default sort order. For example, suppose I have domain objects Timeline and Event:

@Entity
class Timeline {

    @Required 
    String description

    @OneToMany(mappedBy = "timeline")
    @javax.persistence.OrderBy("startYear, endYear")
    Set<Event> events
}

@Entity
class Event {

    @Required
    Integer startYear

    Integer endYear

    @Required
    String description

    @ManyToOne
    Timeline timeline
}

In the above example I've used the standard JPA (Java Persistence API) @OrderBy annotation, which allows you to specify the order of a collection of objects via object properties, in this example on a @OneToMany association. I'm ordering first by startYear in ascending order and then by endYear, also in ascending order. This is all well and good, but note that only the start year is required. (The @Required annotation is a custom Hibernate Validator annotation which does exactly what you would expect.) How are the events ordered when several events start in the same year but some of them have no end year? The answer is that it depends on how your database sorts null values by default. Under Oracle 10g, nulls come last. For example, if two events both start in 2001 and one of them has no end year, here is how they are ordered:

2001 2002  Some event
2001 2003  Other event
2001       Event with no end year

What if you want to control how null values are ordered so they come first rather than last? In Hibernate there are several ways you could do this. First, you could use the Hibernate-specific @Sort annotation to perform in-memory (i.e. not in the database) sorting, using natural ordering or a Comparator you supply. For example, assume I have an EventComparator helper class that implements Comparator. I could change Timeline's collection of events to look like this:

@OneToMany(mappedBy = "timeline")
@org.hibernate.annotations.Sort(type = SortType.COMPARATOR, comparator = EventComparator)
Set<Event> events
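A sketch of what such an EventComparator might look like, sorting by start year and then end year with null end years first. The Event class here is a simplified stand-in for the real domain object (start years are assumed non-null, since startYear is @Required):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class EventComparatorExample {

    public static class Event {
        public final Integer startYear;
        public final Integer endYear;  // may be null

        public Event(Integer startYear, Integer endYear) {
            this.startYear = startYear;
            this.endYear = endYear;
        }
    }

    // Order by start year, then end year, with null end years sorting first
    public static class EventComparator implements Comparator<Event> {
        public int compare(Event a, Event b) {
            int result = a.startYear.compareTo(b.startYear);
            if (result != 0) {
                return result;
            }
            if (a.endYear == null) {
                return (b.endYear == null) ? 0 : -1;
            }
            if (b.endYear == null) {
                return 1;
            }
            return a.endYear.compareTo(b.endYear);
        }
    }

    public static void main(String[] args) {
        List<Event> events = new ArrayList<Event>();
        events.add(new Event(2001, 2003));
        events.add(new Event(2001, null));
        events.add(new Event(2001, 2002));
        Collections.sort(events, new EventComparator());
        for (Event e : events) {
            System.out.println(e.startYear + " " + e.endYear);
        }
        // 2001 null, then 2001 2002, then 2001 2003
    }
}
```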

Using @Sort will perform the sorting in memory once the collection has been retrieved from the database. While you can certainly do this and implement arbitrarily complex sorting logic, it's probably better to sort in the database when you can. So we now turn to Hibernate's @OrderBy annotation, which lets you specify a SQL fragment describing how to perform the sort. For example, you can change the events mapping to:

@OneToMany(mappedBy = "timeline")
@org.hibernate.annotations.OrderBy("start_year, end_year")
Set<Event> events

This sort order is the same as using the JPA @OrderBy with the "startYear, endYear" sort order. But since you write actual SQL in Hibernate's @OrderBy, you can take advantage of whatever features your database has, at the possible expense of portability across databases. For example, Oracle 10g supports a syntax like "order by start_year, end_year nulls first" to order null end years before non-null end years. You could also say "order by start_year, end_year nulls last," which sorts null end years last, as you would expect. This syntax is probably not portable, so another trick you can use is the NVL function, which is supported in a bunch of databases. You can rewrite Timeline's collection of events like so:

@OneToMany(mappedBy = "timeline")
@org.hibernate.annotations.OrderBy("start_year, nvl(end_year, start_year)")
Set<Event> events

The expression "nvl(end_year, start_year)" simply says to use end_year as the sort value if it is not null, and start_year if it is null. So for sorting purposes, a null end_year is treated the same as the start_year. Applying this nvl-based sort to the contrived example earlier, using Hibernate's @OrderBy to specify the SQL sorting criteria, you now end up with the events sorted like this:

2001       Event with no end year
2001 2002  Some event
2001 2003  Other event

Which is what you wanted in the first place. So if you need more complex sorting logic than what you can get out of the standard JPA @javax.persistence.OrderBy, try one of the Hibernate sorting options, either @org.hibernate.annotations.Sort or @org.hibernate.annotations.OrderBy. Adding a SQL fragment into your domain class isn't necessarily the most elegant thing in the world, but it might be the most pragmatic thing.

Groovification

Posted on May 04, 2009 by Scott Leberknight

Last week I tweeted about groovification, which is defined thusly:

groovification. noun. the process of converting java source code into groovy source code (usually done to make development more fun)

On my main day-to-day project, we've been writing unit tests in Groovy for quite a while now, and recently we decided to start implementing new code in Groovy rather than Java. The reasons for doing this are to gain more flexibility in development, to make testing easier (i.e. the ability to mock dependencies in a trivial fashion), to eliminate a lot of Java boilerplate code and thus write less code, and of course to make developing more fun. It's not that I hate Java; it's that Java simply isn't innovating anymore and hasn't for a while. It isn't adding features I no longer want to live without, such as closures and the ability to do metaprogramming when I need to. In addition, it isn't removing features that I don't want, such as checked exceptions. If I know, for a fact, that I can handle an exception, I'll handle it appropriately. Otherwise, when there's nothing I can do anyway, I want to let the damn thing propagate up and just show a generic error message to the user, log the error, and send the admin team an email with the problem details.

This being, for better or worse, a Maven project, we've had some interesting issues with mixed compilation of Java and Groovy code. The GMaven plugin is easy to install and works well, but it currently has some outstanding issues related to Groovy stub generation; specifically, it cannot handle generics or enums properly right now. (Maybe someone will be less lazy than me and help them fix it instead of complaining about it.) Since many of our classes use generics, e.g. service classes that return domain objects, we are not currently generating stubs. We'll convert existing classes and any other necessary dependencies to Groovy as we make updates to Java classes, and we are implementing new code in Groovy. This is especially easy in the web controller code, since the controllers generally depend on other Java and/or Groovy code, but no other classes depend on the controllers. So starting in the web tier seems to be a good choice. Groovy, combined with implementing controllers using the Spring @MVC annotation-based controller configuration style (i.e. no XML configuration), is making the controllers really thin, lightweight, and easy to read, implement, and test.

I estimate it will take a while to fully convert all the existing Java code to Groovy code. The point here is that we are doing it piecemeal rather than trying to do it all at once. Also, whenever we convert a Java file to a Groovy one, there are a few basic steps to make the classes Groovier without going totally overboard and spending loads of time. Once you've used IntelliJ's move refactoring to move the .java file to the Groovy source tree (since we have src/main/java and src/main/groovy), you can use IntelliJ's handy-dandy "Rename to Groovy" refactoring. In IntelliJ 8.1 you need to use the "Search - Find Action" menu option or keystroke, type "Rename to...", and select "Rename to Groovy", since they goofed in version 8 and that option was left off a menu somehow. Once that's done you can do a few simple things to make the class a bit more Groovy. First, get rid of all the semicolons. Next, replace getter/setter code with direct property access. Third, replace for loops with "each"-style internal iterators when you don't need the loop index, and "eachWithIndex" when you do. You can also get rid of redundant modifiers like the "public" on "public class", since that is the Groovy default. That's not too much at once, doesn't take long, and makes your code Groovier. Over time you can do more groovification if you like.
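As a minimal illustration of those basic steps (class and property names here are made up for the example), a freshly renamed class might end up looking something like this after the cleanup pass:

```groovy
// After "Rename to Groovy" plus the basics: no semicolons, direct
// property access, each/eachWithIndex instead of for loops, and the
// redundant "public" modifier dropped.
class EventPrinter {
    List<String> titles = []

    void printAll() {
        titles.each { title ->
            println title
        }
    }

    void printNumbered() {
        titles.eachWithIndex { title, i ->
            println "${i + 1}. $title"
        }
    }
}
```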

The most common gotchas I've found have to do with code that uses anonymous or inner classes, since Groovy doesn't support those Java language features. In that case you can either create a non-public named class (which, unlike in Java, you can put in the same Groovy file as your other classes) or refactor the code some other way (using your creativity and expertise, since we are not monkeys, right?). This can sometimes be a pain, especially if you are using a lot of them. So it goes. (And yes, that is a Slaughterhouse Five reference.)

Happy groovification!

Thinking Matters

Posted on April 30, 2009 by Scott Leberknight

Aside from the fact that Oracle's Java Problem contains all kinds of factual and other errors (see the comments on the post) this sentence caught my eye in particular when referring to Java being "quite hard to work with" - "Then, as now, you needed to be a highly trained programmer to make heads or tails of the language."

What's the issue here? That Java is hard to work with? Perhaps more specifically, not just Java but perhaps the artificial complexity in developing "Enterprise" applications in Java? Nope. The problem is that this type of thinking epitomizes the attitude that business people and other "professionals" tend to have about software development in general, in that they believe it is or should be easy and that it is always the tools and rogue programmers that are the problem. Thus, with more and better tools, they reason, there won't be a need for skilled developers and the monkey-work of actually programming could be done by, well, monkeys.

I believe software development is one of the hardest activities humans currently do, and yes, I suppose I do have some bias since I am a developer. Also, contrary to what many people think, there is both art and engineering involved, and any given problem can be solved in an almost infinite variety of ways. Unlike more established disciplines that have literally been around for hundreds or thousands of years (law, medicine, accounting, architecture, certain branches of engineering like civil, etc.), the software industry hasn't even reached the century mark yet! As a result there isn't any kind of consensus whatsoever about a completely standardized "body of knowledge", and thus there isn't an industry-recognized set of standard exams and boards like you find in the medical and law professions, for example. (That topic is for a future post.)

One thing that is certain is that software development involves logic, and thus people who can solve problems using logic will always be needed, whether the primary medium stays in textual format (source code) or whether it evolves into some different representation like Intentional Software is trying to do. So the statement from the article that "you needed to be a highly trained programmer to make heads or tails of the language" is always going to be true in software development. More generally, highly skilled people are needed in any complex endeavor, and attempts to dumb down complex things will likely not succeed in any area, not just software development. Would you trust someone to perform surgery on you so long as they have a "Dummies Guide to Surgery" book? Or someone to represent you in court who stayed at a Holiday Inn Express last night?

I hypothesize that things are becoming more complex as time moves on, not less. I also propose that unless we actually succeed in building Cylons who end up wiping us all out or enslaving us, we will never reach a point where we don't need people to actually think and use logic to solve problems. So even though many business-types would love to be able to hire a bunch of monkeys and pay them $0.01 per day to develop software, those who actually realize that highly skilled people are an asset and help their bottom line, and treat them as such, are the ones who will come out on top, because they will smash their competitors who think of software/IT purely as a cost center and not a profit center.

Running VisualVM on a 32-bit MacBook Pro

Posted on April 01, 2009 by Scott Leberknight

If you want/need to run VisualVM on a 32-bit MacBook Pro, you'll need to do a couple of things. First, download and install Soy Latte, using these instructions - this gets you a Java 6 JDK/JRE on your 32-bit MacBook Pro. Second, download VisualVM and extract it wherever you like, e.g. /usr/local/visualvm. If you now try to run VisualVM you'll get the following error message:

$ ./visualvm
./..//platform9/lib/nbexec: line 489: /System/Library/Frameworks/JavaVM.framework/
Versions/1.6/Home/bin/java: Bad CPU type in executable

Oops. After looking at the bin/visualvm script I noticed it is looking for an environment variable named "jdkhome." So the third step is to export that variable, pointing it at wherever you installed Soy Latte:

export jdkhome=/usr/local/soylatte16-i386-1.0.3

Now run the bin/visualvm script from the command line. Oh, I almost forgot to mention that you should also have X11 installed, which it is by default on Mac OS X Leopard. If all went well, you should have VisualVM up and running!
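Putting the steps together, the whole fix is just a couple of commands (the install paths below are assumptions from my setup; adjust them to wherever you put Soy Latte and VisualVM):

```shell
# Point VisualVM's launcher at the Soy Latte JDK.
# Path is an assumption; use your actual Soy Latte install location.
export jdkhome=/usr/local/soylatte16-i386-1.0.3

# Sanity-check the variable before launching:
echo "jdkhome=$jdkhome"

# Then launch VisualVM from wherever you extracted it:
# /usr/local/visualvm/bin/visualvm
```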

Missing aop 'target' packages in Spring 3.0.0.M1 zip file

Posted on January 15, 2009 by Scott Leberknight

Today I was mucking around with the Spring 3.0.0.M1 source release I downloaded as a ZIP file. I wanted to simply get the sample PetClinic up and running and be able to load Spring as a project in IntelliJ. Note Spring now requires Java 6 to build, so if you're using an older 32-bit MacBook Pro you'll need to install JDK 6. I used these instructions, generously provided by Landon Fuller, to install Soy Latte, which is a Java 6 port for Mac OS X (Tiger and Leopard). So I went to run the "ant jar package" command (after first setting up Ivy, since that is how Spring now manages dependencies) and everything went well until I got a compilation exception. There unfortunately wasn't any nice error message about why the compile failed.

So next I loaded up the Spring project in IntelliJ and tried to compile from there. Aha! It tells me that the org.springframework.aop.target package is missing as well as the org.springframework.aop.framework.autoproxy.target package, and of course all the classes in those packages were also missing. I was fairly sure I didn't accidentally delete those two packages in the source code, so I checked the spring-framework-3.0.0.M1.zip file to be sure. Sure enough those two 'target' packages are not present in the source code in the zip file. The resolution is to go grab the missing files from the Spring 3.0.0.M1 subversion repository and put them in the correct place in the source tree. The better resolution is to do an export of the 3.0.0.M1 tag from the Subversion repo directly, rather than be lazy like I was and download the zip file.

I still am wondering why the 'target' packages were missing, however. My guess is that whatever build process builds the zip file for distribution excluded directories named 'target' since 'target' is a common output directory name in build systems like Ant and Maven and usually should be excluded since it contains generated artifacts. If that assumption is correct and all directories named 'target' were excluded, then unfortunately the two aop subpackages named 'target' got mistakenly excluded which caused a bit of head-scratching as to why Spring wouldn't compile.

Groovy + Spring = Groovier Spring

Posted on January 06, 2009 by Scott Leberknight

If you're into Groovy and Spring, check out my two-part series on IBM developerWorks on using Groovy together with Spring's dynamic language support for potentially more flexible (and interesting) applications. In Part 1 I show how to easily integrate Groovy scripts (i.e. .groovy files containing one or more classes) into Spring-based applications. In Part 2 I show how to use the "refreshable beans" feature in Spring to automatically and transparently reload Spring beans implemented in Groovy from pretty much anywhere, including a relational database, and why you might actually want to do something like that!
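For a taste of what the articles cover, the refreshable-beans wiring uses Spring's lang namespace and looks roughly like this (the bean id, delay, and script location below are placeholders, not taken from the articles):

```xml
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:lang="http://www.springframework.org/schema/lang"
       xsi:schemaLocation="
           http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans.xsd
           http://www.springframework.org/schema/lang
           http://www.springframework.org/schema/lang/spring-lang.xsd">

    <!-- Spring re-checks the Groovy script every 5 seconds and
         transparently reloads the bean if the script changed. -->
    <lang:groovy id="greetingService"
                 refresh-check-delay="5000"
                 script-source="classpath:GreetingService.groovy"/>

</beans>
```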

iPhone Bootcamp Summary

Posted on December 05, 2008 by Scott Leberknight

So, after having actually written a blog entry covering each day of the iPhone bootcamp at Big Nerd Ranch, I thought a broader summary would be in order. (That, and I'm sitting in the airport waiting for my flight this evening.) Anyway, the iPhone bootcamp was my second BNR class (I took the Cocoa bootcamp last April and wrote a summary blog about it here.)

As with the Cocoa bootcamp, I had a great time and learned a ton about iPhone development. I met a lot of really cool and interesting people with a wide range of backgrounds and experiences. This seems to be a trend at BNR, that the people who attend are people who have a variety of knowledge and experience, and bring totally different perspectives to the class. The students who attend are also highly motivated people in general, which, when combined with excellent instruction and great lab coding exercises all week, makes for a great learning environment.

Another interesting thing that happens at BNR is that in this environment, you somehow don't burn out and can basically write code all day every day and many people keep at it into the night hours. I think this is due to the way the BNR classes combine short, targeted lecture with lots and lots and lots of hands-on coding. In addition, taking an afternoon hike through untouched nature really helps to refresh you and keep energy levels up. (Maybe if more companies, and the USA for that matter, encouraged this kind of thing people would actually be more productive rather than less.) And because of the diversity of the students, every meal combines good food with interesting conversation.

So, thanks to our instructors Joe and Brian for a great week of learning and to all the students for making it a great experience. Can't wait to take the OpenGL bootcamp sometime in the future.