Do You Unit Test Getters and Setters?

Posted on May 31, 2007 by Scott Leberknight

Since Java has no native syntax for declaring properties, you have to write explicit getter and setter methods - or, hopefully, you never write them yourself but have your IDE generate them. The question is whether you should write explicit tests for these methods. If you measure your unit test coverage, getters and setters are normally exercised during the course of all your other tests; for example, if you use an ORM tool like Hibernate, it will call the setters when populating objects and the getters when reading object state to persist changes. But most people don't assert every single property in the course of testing their data access tier, so much of the time the actual behavior is not being tested.

Since getters and setters are explicit code in your application, I think they should be tested. But there is no way I am going to write individual tests for every single getter and setter method. So the other day I wrote a test utility that uses reflection to verify that the values passed to setter methods are the values returned by the corresponding getters. Some people think this is cheating or just playing around with test coverage numbers. :-) Tools like FindBugs flag getters that return a reference to a mutable object as a code smell, as does Josh Bloch in Effective Java. While true to an extent, I'd like to know how many Java developers really create defensive copies in their getter and setter methods. Not many, I bet. As a matter of practicality, it just doesn't matter most of the time. I've never once had a problem where a mutable Date returned by a getter was modified and caused weird issues.

Most Java developers know that objects returned by getters are meant for reading; if you want to change the value, explicitly call the setter. There are exceptions, of course: when a collection, map, or array is returned, sometimes you are expected (or required) to manipulate the returned mutable object directly. One of those cases is when using Hibernate, as I described in an earlier post. In any case, since someone could manually mess up a getter or setter, I think they should be tested. In languages that provide explicit support for properties this whole discussion is moot, but this is Java, so let's get over that and just test them. (As the mantra goes, "test everything that could possibly break.")

Ok, some code. The following is all we have to do to test basic getter/setter behavior of an object:

@Test
public void testProperties() {
    assertBasicGetterSetterBehavior(new MyBean());
}

This tests all the getter/setter pairs in the MyBean class. The assertBasicGetterSetterBehavior method is statically imported from a class we created named PropertyAsserter, so we can just call it. This is the simplest usage, in which we use the java.beans classes like Introspector and PropertyDescriptor to find all the read/write properties and test their behavior, automatically creating argument objects for the setter methods.

The core method is this one:

/**
 * See {@link #assertBasicGetterSetterBehavior(Object,String)} method. Only difference is that here we accept an
 * explicit argument for the setter method.
 *
 * @param target   the object on which to invoke the getter and setter
 * @param property the property name, e.g. "firstName"
 * @param argument the property value, i.e. the value the setter will be invoked with
 */
public static void assertBasicGetterSetterBehavior(Object target, String property, Object argument) {
    try {
        PropertyDescriptor descriptor = new PropertyDescriptor(property, target.getClass());
        Object arg = argument;
        if (arg == null) {
            Class<?> type = descriptor.getPropertyType();
            if (DEFAULT_TYPES.contains(type)) {
                arg = DEFAULT_ARGUMENTS.get(DEFAULT_TYPES.indexOf(type));
            }
            else {
                arg = ReflectionUtils.invokeDefaultConstructorEvenIfPrivate(type);
            }
        }

        Method writeMethod = descriptor.getWriteMethod();
        Method readMethod = descriptor.getReadMethod();

        writeMethod.invoke(target, arg);
        Object propertyValue = readMethod.invoke(target);
        assertSame(property + " getter/setter failed test", arg, propertyValue);
    }
    catch (IntrospectionException e) {
        String msg = "Error creating PropertyDescriptor for property [" + property +
                "]. Do you have a getter and a setter?";
        log.error(msg, e);
        fail(msg);
    }
    catch (IllegalAccessException e) {
        String msg = "Error accessing property. Are the getter and setter both accessible?";
        log.error(msg, e);
        fail(msg);
    }
    catch (InvocationTargetException e) {
        String msg = "Error invoking method on target";
        // log before failing; fail() throws, so anything after it never runs
        log.error(msg, e);
        fail(msg);
    }
}

This method accepts a target object, the name of the property to test, and optionally an argument. If the argument is null and the property type is one of a list of default types, things like Set or List, we'll just use a default object like an empty Set or List as the argument. Otherwise we'll use the type's default constructor, which means one must exist. Then we invoke the setter with the supplied argument (or the one we created) and assert that the object returned by the getter is the exact same object, i.e. comparing with ==. (Making this assertion is what makes tools like FindBugs unhappy, which is why I disable the rule that checks for this "problem.") As you can see, we have a nice little subversive method named invokeDefaultConstructorEvenIfPrivate in our ReflectionUtils class that allows you to call private constructors. In Java you need this kind of thing for unit testing so you can keep private things private, as opposed to elevating things to default access just for unit tests.
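The ReflectionUtils class isn't shown in this post, but a minimal sketch of what invokeDefaultConstructorEvenIfPrivate might look like is below (the method name comes from above; the body is a guess):

import java.lang.reflect.Constructor;

// Sketch only: invoke a class's no-arg constructor even if it is private.
public static Object invokeDefaultConstructorEvenIfPrivate(Class<?> type) {
    try {
        Constructor<?> ctor = type.getDeclaredConstructor();
        ctor.setAccessible(true);  // bypass private access, for testing purposes only
        return ctor.newInstance();
    }
    catch (Exception e) {
        throw new IllegalStateException("Could not invoke no-arg constructor on " + type, e);
    }
}

But what about the one-liner in our test example above? The following method is the one we saw earlier: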

/**
 * See {@link #assertBasicGetterSetterBehavior(Object,String)} method. Big difference here is that we try to
 * automatically introspect the target object, finding read/write properties, and automatically testing the getter
 * and setter. Note specifically that read-only properties are ignored, as there is no way for us to know how to set
 * the value (since there isn't a public setter).
 * <p/>
 * Note also that array properties are ignored.
 *
 * @param target the object on which to invoke the getter and setter
 */
public static void assertBasicGetterSetterBehavior(Object target) {
    try {
        BeanInfo beanInfo = Introspector.getBeanInfo(target.getClass());
        PropertyDescriptor[] descriptors = beanInfo.getPropertyDescriptors();
        for (PropertyDescriptor descriptor : descriptors) {
            if (descriptor.getWriteMethod() == null) {
                continue;
            }
            if (descriptor.getPropertyType().isArray()) {
                continue;
            }
            assertBasicGetterSetterBehavior(target, descriptor.getDisplayName());
        }
    }
    catch (IntrospectionException e) {
        fail("Failed while introspecting target " + target.getClass());
    }
}

This is pretty simple, albeit verbose. We use the nifty java.beans.Introspector class to find the "properties," get the property descriptors, and then call another overloaded assertBasicGetterSetterBehavior method, which calls our original method with a null argument, meaning we'll create the argument automatically. Using this technique it is simple to test generic getters and setters by adding one test method containing one line of code. We have a bunch of other overloaded methods in PropertyAsserter, so you can choose to test only some of your properties automatically, for example if a getter and/or setter has side effects like validation and you need to test those cases explicitly.
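That two-argument overload isn't shown in this post, but presumably it just delegates with a null argument, along these lines:

public static void assertBasicGetterSetterBehavior(Object target, String property) {
    // Delegate with a null argument so the three-argument version
    // creates a suitable argument automatically.
    assertBasicGetterSetterBehavior(target, property, null);
}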

Dirt Simple Mock Objects In Groovy

Posted on May 30, 2007 by Scott Leberknight

I've just started on a new project and of course we want to ensure all, and I really mean all, our code is tested. This being a Java project, many people assume 80% coverage is pretty good and that getting upwards of 90% and beyond takes a superhuman effort. In the past I would have agreed. Now that I am learning Groovy, however, I no longer agree: I think it is not only possible to get above 90% coverage and close to 100%, but quite easy.

First, some background on why 80% is normally considered acceptable. It is simply that Java is not very flexible when it comes to mocking certain types of objects. For example, try mocking a standard JDK class that is marked final and making it do your bidding. Not easy. It is easy to mock classes that implement an interface, and even non-final classes, using tools like jMock and EasyMock.

Let's take an example. We have some reflective code that uses the java.lang.reflect.Field class' get() method. This method throws IllegalArgumentException and IllegalAccessException, the latter of which is a checked exception. Since there is nothing meaningful we can do about an IllegalAccessException, we catch it and rethrow it as an IllegalStateException, which is a runtime (unchecked) exception. The problem is that covering the case when an IllegalAccessException is thrown is not something you can arrange with a mocking tool like EasyMock, even the cool EasyMock Class Extension, since Field is marked final.
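The kind of code under test looks something like this sketch (the method name is made up, not the project's actual code):

import java.lang.reflect.Field;

// Illustrative sketch of the reflective code described above.
public static Object getFieldValue(Field field, Object target) {
    try {
        return field.get(target);
    }
    catch (IllegalAccessException e) {
        // Nothing meaningful we can do, so rethrow as unchecked.
        throw new IllegalStateException(e);
    }
}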

Last week I found something called JMockit, which is very cool and essentially uses some Java 5 class instrumentation magic (you must specify a javaagent to the JVM when running your tests) to replace byte code at runtime. This allows you to mock literally anything at all, except for the minor issue that if you want to mock methods in a JDK class like Field, all the test classes must be loaded by the boot class loader, which means you have to use the -Xbootclasspath startup option. This starts to become really intrusive, complicates your build process, and generally makes things a lot more complex. Even needing to run the JVM with -javaagent:jmockit.jar is intrusive and somewhat complicated.

Enter Groovy. I am a total beginner at Groovy, having only attended some sessions at various conferences like No Fluff Just Stuff and JavaOne, tinkered a little with the Groovy shell, and purchased Groovy in Action. Within an hour or so of tinkering I was able to produce an IllegalAccessException from a mocked Field using Groovy. The following code is probably crap (again, I'm a Groovy newbie) but it is ridiculously simple and readable nonetheless:

import groovy.mock.interceptor.MockFor
import java.lang.reflect.Field

class SimpleUnitTest extends GroovyTestCase {

    void testMockingField() {
        def fieldMock = new MockFor(Field)
        def mb = new MyBean()
        fieldMock.demand.get(mb) { throw new IllegalAccessException("haha") }
        fieldMock.use {
            Field f = MyBean.class.getDeclaredField("name")
            shouldFail(IllegalAccessException) {
                f.get(mb)
            }
        }
    }

}

class MyBean {
    def name = "hello, groovy"
}

That's it. It is ridiculous how easy Groovy makes mocking any object. The steps are basically:

  1. Create mock using MockFor
  2. Tell mock what to do using demand
  3. Execute code under test in a closure passed to use

Developers used to the power of dynamic languages like Ruby won't think much of this, of course, since creating mocks there is even simpler, but this is something new for Java developers. Now there is really almost no reason why you can't test literally all of your Java code, and more importantly, do it very easily.

Using jps to Find JVM Process Information

Posted on April 25, 2007 by Scott Leberknight

Recently I wrote a little Rails app that allows people to remotely start/stop/restart CruiseControl and view the output logs on my development workstation, since people don't have direct access to my workstation and from time to time CruiseControl has hung for one reason or another. (The shared resource that was available for running CruiseControl is so overloaded with processes that it was taking an hour to run a complete build with tests and code analysis reports, versus about nine minutes on my machine, so we've decided to keep it on my machine!) Anyway, to control the process I first needed to find the CruiseControl process.

The first iteration used the ntprocinfo tool combined with a simple grep to find all "java.exe" processes running on the machine. This is obviously crap since it only works consistently and correctly if CruiseControl is the only Java process running. Then I remembered a tool Matt had shown me a while back that lists all the JVMs running on a machine: jps, the Java Virtual Machine Process Status Tool.

This tool was introduced in JDK 1.5/5.0 (or whatever it's called) and lists all the instrumented JVMs currently running on the local host or on a remote machine. By default (no command line options) the output shows the local VM identifier (lvmid) and a short form of the class name or JAR file the VM was started from. The lvmid is normally the operating system's process identifier for the JVM. For example:

C:\dev>jps
624 startup.jar
3508 startup.jar
3348 Jps
2444 CruiseControlWithJetty

This is just what I want since now I can do a simple grep on this output and find which process is the CruiseControl process, and then I can use ntprocinfo to kill the process. Easy.
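For example, something like this (assuming a Unix-style grep is available on Windows, e.g. via Cygwin):

C:\dev>jps | grep CruiseControl
2444 CruiseControlWithJetty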

There are command line switches that provide more process detail such as the fully qualified class name (FQCN) or full path to the application JAR file, the arguments to the main method, and JVM arguments. For example, the "l" switch (a lowercase L) gives the FQCN or full path to the class or JAR:

C:\dev>jps -l
624 C:\dev\eclipse\startup.jar
3508 C:\dev\radrails\startup.jar
1948 sun.tools.jps.Jps
2444 CruiseControlWithJetty

Or, the "l" option combined with the "v" option lists FQCN/JAR file and the VM arguments:

C:\dev>jps -lv
624 C:\dev\eclipse\startup.jar -Xms256M -Xmx512M
3508 C:\dev\radrails\startup.jar
1316 sun.tools.jps.Jps -Dapplication.home=C:\dev\java\jdk1.5.0_10 -Xms8m
2444 CruiseControlWithJetty -Xms128m -Xmx256m -Djavax.management.builder.initial=mx4j.server.MX4JMBeanServerBuilder

You can also list the java processes on a remote host, though that machine must have a jstatd process running.

Last but not least, I saw the following warnings in the jps documentation.

  • "This utility is unsupported and may not be available in future versions of the JDK."
  • "You are advised not to write scripts to parse jps output since the format may change in future releases. If you choose to write scripts that parse jps output, expect to modify them for future releases of this tool."

Oh well, nothing is forever, right? By the time they get around to changing the output format or removing this very useful tool, we'll hopefully have obtained a decent shared build machine where CruiseControl doesn't hang and which everyone can access to fix any problems that arise. As to why CruiseControl hangs sometimes, I have no idea.

Rails Without a Database

Posted on April 07, 2007 by Scott Leberknight

Last week I needed to write a simple web application that would allow others on my team to control a process on my workstation when I wasn't around. Essentially I wanted to allow people to start, stop, restart, get process information about, and view the log of, a process running on my workstation. Since I wanted it to be simple, and because I didn't want it to take more than a few hours, I chose to write it in Rails. This turned out to be ridiculously simple, as I ended up with two controllers - an authentication controller and the process-controlling controller - along with a few views. But since all I was doing was calling system commands from the process-controlling controller, there is no database in this application. (The "authentication" was just a hardcoded user name and password that I shared with my team, so no database stuff there!)

So what happens when you have a Rails application with no database? Well, it works functionally, i.e. when you are hacking on your controllers and such, but the tests all fail! This is because Rails assumes you always have a database. A quick Google search led to an excellent resource, Rails Recipes by the Pragmatic Programmers, for writing and testing Rails apps with no database. One of the recipes is, conveniently enough, Rails without a Database! Basically, it describes how you can modify the Rails test_helper.rb file to remove all the database-specific setup and teardown code, so you can run your tests with no database to speak of. Cool.

Let's Play "Who Owns That Collection?" With Hibernate

Posted on March 28, 2007 by Scott Leberknight

If you have used Hibernate and mapped a one-to-many relationship you've probably come across the "delete orphan" feature. This feature offers cascade delete of objects orphaned by code like the following:

Preference pref = getSomePreference();
user.getPreferences().remove(pref);

In the above code, a specific Preference is removed from a User. With the delete orphan feature, and assuming there is an active transaction associated with the session, the preference that was removed from the user is automatically deleted from the database when the transaction commits. This feature is pretty handy, but it can be tricky if you try to write clever code in your setter methods, e.g. something like this:

// Do not do this!
public void setPreferences(Set<Preference> newPreferences) {
    this.preferences = newPreferences == null ? new HashSet<Preference>() : newPreferences;
}

Code like the above results in a HibernateException with the following message if you pass null into setPreferences and try to save the user object:

A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance

What is happening here is that Hibernate requires complete ownership of the preferences collection in the User object. If you simply set it to a new object as in the above code sample, Hibernate is unable to track changes to that collection and thus has no idea how to apply the cascading persistence to your objects! The same error will occur if you passed in a different collection, e.g.:

user.setPreferences(new HashSet<Preference>());

So the point is that Hibernate's delete orphan implementation leaks into your domain model object. This is pretty much unavoidable, but it is a leaky abstraction nonetheless, and developers need to be aware of it lest they run into the error mentioned above.

So how can you avoid this problem? The only sure way that I know of is to make the setter method private, since passing in any new collection (or null) results in the "owning entity" error. This way only Hibernate will use the setter method to load up user objects (it invokes the method reflectively after making it accessible via the Reflection API). Then you add a method addPreference to your class as the public API for adding preferences. Anyone could of course use reflection to do the same thing Hibernate is doing, but then all bets are off, as they are subverting your public API. For example:

public void addPreference(Preference p) {
    getPreferences().add(p);
    p.setUser(this);
}

This has the nice side effect of establishing the bi-directional relationship between user and preference, assuming your model allows bi-directional navigation. You could also add a null check if you are paranoid. Removing a preference from a user is equally simple. You can write a helper method removePreference or you could call the getter and then call remove as shown here:

user.getPreferences().remove(aPref);
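The removePreference helper would just be the mirror image of addPreference; a sketch (not shown in this post):

public void removePreference(Preference p) {
    getPreferences().remove(p);
    p.setUser(null);  // also break the preference's link back to the user
}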

Essentially, you can operate on the collection returned by getPreferences as a normal Java collection and Hibernate will do the right thing. Since you are operating on the collection maintained and observed by Hibernate, it is able to track your changes, whereas replacing the collection wholesale makes Hibernate really mad, since it believes it is the sole proprietor of the collection, not you! For example, if you want to remove all the user's preferences you could write the following:

user.getPreferences().clear();

Note that all the above discussion refers to one-to-many relationships that specify the delete orphan cascade type; usually you will specify both "all" and "delete-orphan." In cases where you are only using the "all" cascade option, the semantics are quite different. Assuming the normal case where the "many" side owns the relationship -- i.e. you used inverse="true" in your mapping file, or @OneToMany(mappedBy = "user") if using annotations -- then you must explicitly delete the child objects, as Hibernate only tracks that side of the relationship. For example, if cascade is "all" and you remove a preference from a user and then save the user, nothing happens! You need to explicitly delete the preference object, as shown here:

// Assume only using cascade="all" and an inverse="true" mapping in User.hbm.xml
user.getPreferences().remove(aPref);  // does not cause database delete
session.delete(aPref);                // this causes the database deletion

One last thing to note in the above is that you must remove aPref from the user, or else Hibernate will throw an exception stating that the preference would be re-saved upon cascade! So if the User object is in the Session, remember you need to undo both sides of the relationship for things to work properly.

LoggerIsNotStaticFinal

Posted on March 27, 2007 by Scott Leberknight

For anyone who uses PMD, the title of this post shows up in their list of PMD errors whenever they don't declare their loggers static and final. Specifically, the LoggerIsNotStaticFinal rule simply says that a logger should be declared static and final. I like to make sure they are private as well. For example:

// Jakarta Commons Logging
private static final Log log = LogFactory.getLog(MyClass.class);

The above code also shows another good practice, which is to pass the Class object to the getLog() method, instead of a string. Why the java.util.logging.Logger class doesn't even provide a method accepting a Class object is simply beyond me. Why did the people who developed the java.util.logging package base their API on Log4j yet omit some of the most useful parts of it? Oh well.

Now to the point. Why is it good practice to declare loggers private, static, and final? A logger is an internal implementation detail, so it should be private. You only need one logger for all instances of a class, hence static. And a logger should not be able to be replaced, thus final. So if this is good, what's not so good (at least in my opinion)? Simple - any logger that is not private, static, and final, and which doesn't pass a Class object to getLog()! For example, consider this common bit of code, declared in some base class:

// Not so good logger declaration
protected final Log log = LogFactory.getLog(getClass());

Why is this bad? Well, for one thing it isn't static. For another, it uses getClass() to obtain the log. At first this seems convenient, since all subclasses automatically inherit a ready-made log of the correct runtime type. So what's the issue? The biggest problem with loggers declared in this manner is that the superclass's logging gets mixed in with the subclass's logging, and it is impossible to discern in the log output which messages came from which class unless you look at the source. This is really annoying if the superclass has a lot of logging that you don't want to see, since you cannot filter it out.

Another problem is that your ability to set log levels differently goes away, for example if a subclass resides in a different package than the superclass. In that case, if you try to filter out logging from the superclass, you can't because the actual runtime class was used to obtain the logger.
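To make this concrete, here is a hypothetical pair of classes (the class names and packages are made up):

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

// Imagine this superclass lives in a framework package, e.g. com.acme.framework.
abstract class AbstractService {
    protected final Log log = LogFactory.getLog(getClass());  // logger named after the runtime class

    public final void execute() {
        log.debug("superclass chatter");  // attributed to the subclass's logger name
        doExecute();
    }

    protected abstract void doExecute();
}

// And imagine this subclass lives in an application package, e.g. com.other.app.
// Messages from both classes are logged under the BillingService logger, so the
// superclass chatter cannot be filtered out without also losing the subclass logging.
class BillingService extends AbstractService {
    protected void doExecute() {
        log.debug("subclass message");
    }
}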

Last, having a protected logger just seems to violate basic object-oriented principles. Why in the world should subclasses know about an internal implementation detail from a superclass that is a cross-cutting concern, no less? Anyway, though this is a silly little rant it really is annoying when you extend a superclass that declares a protected logger like this.

JSP 2.0 Expressions Should Escape XML By Default

Posted on July 20, 2006 by Scott Leberknight

Jeff has posted a nice blog entry about cross-site scripting (XSS) vulnerabilities in JSP 2.0 expressions. With JSP 2.0 you can use the following to emit the description of a "todo" item:

${todo.description}

That's pretty nice. What happens when someone has entered a description like this?

<script type="text/javascript">alert('F#$@ you!');</script>

Well, it executes the JavaScript and pops up a nice little message to you. Of course more malicious things could be injected there, but you get the idea. JSTL's c:out tag, by default, escapes XML content, so the following code will not execute the embedded JavaScript but will simply display it as part of the web page.

<c:out value="${todo.description}"/>

The nice thing here is that the default behavior of c:out is to escape XML content. If you need to override this and not escape XML content, you can simply write the following.

<c:out value="${todo.description}" escapeXml="false"/>

My question is this: why in the world did the expert group on the JSP 2.0 JSR decide to make not escaping XML content the default for EL expressions, when they made the opposite decision for c:out? As Jeff alluded to in his post, it is too much of a hassle to try to determine where it is safe to use the JSP 2.0 expression syntax and where you need to ensure potential XML content is escaped. So the safest bet is to use c:out or the JSTL 1.1 function escapeXml, which looks like this.

${fn:escapeXml(todo.description)}

Given the choice between c:out and fn:escapeXml(), I would probably prefer the latter as it seems a tad cleaner and more in the spirit of JSP 2.0 expressions. But what I would really prefer is that the JSP expression language escaped XML content by default, so there would be no need to choose an XML-escaping syntax at all.

Limiting Results in SQL Queries When Using Oracle

Posted on May 22, 2006 by Scott Leberknight

A long time ago, in a galaxy far away, I posted about limiting the number of query results when performing SQL queries with ORDER BY clauses. For example, you might want to run a SQL query that sorts people by last then first name and returns rows 41-50. You then might want to go to the next "page" of results, still sorting by last then first name, which is rows 51-60. That post talked about how easy this is in MySQL, because MySQL provides the lovely LIMIT clause. Oracle, on the other hand, seemingly makes it very difficult to do what is conceptually quite simple. I am pretty sure that's on purpose, so you have to spend $300 per hour for one of their service consultants. Alternatively, you could use something like Hibernate to figure out how to write queries like this in Oracle, without spending a dime.

So basically I give all the credit to Hibernate and its developers as they obviously figured this out and incorporated it nicely into their product, abstracting away all the pain. So in Hibernate you could write the following Java code:

Criteria criteria = session.createCriteria(Person.class);
criteria.addOrder(Order.asc("lastName"));
criteria.addOrder(Order.asc("firstName"));
criteria.setFirstResult(40);  // rows are zero-based
criteria.setMaxResults(10);
List results = criteria.list();

So now all we have to do is set the hibernate.show_sql property to true and look at the SQL that Hibernate generates. I ran the query and figured out the gist of what Hibernate is doing. Here is an example of a SQL query you could run in Oracle, formatted for readability - that is, assuming you consider this query readable at all!

select * from
    ( select row_.*, rownum rownum_ from
        (
            select * from people p
            where lower(p.last_name) like 's%'
            order by p.last_name, p.first_name
        )
      row_ where rownum <= 50
    )
where rownum_ > 40

Isn't that intuitive? How nice of Oracle to provide such a succinct, easily understandable way of limiting a query to a specific set of rows when using ORDER BY. Note that the entire reason we have to jump through these hoops is that Oracle applies the special rownum pseudo-column before the ORDER BY clause, which means that once the results have been ordered according to your criteria (e.g. last name then first name), the rownum values are in some random order and thus cannot be used to select rows N through M.

So what exactly is going on in the above query? First, the actual query is the innermost query, which returns all the possible results, ordered the way you want them. At this point the results are in the correct order, but the rownum for this query is in some random order, which we cannot use for filtering. The query immediately wrapping the actual query selects its results, aliases rownum as rownum_, and keeps only the rows where rownum is less than or equal to 50. Note that the values of rownum_ will be in the correct order, since the rownum pseudo-column has been applied to the results of the already-sorted innermost query.

If we stopped now we'd have rows one through 50 of the actual query. But we only want rows 41-50, so we need to wrap our current results in one more outer query. This outermost query uses the aliased rownum_ column and only includes rows where rownum_ is greater than 40. So that's it: a generic solution you can use for paging result sets in Oracle databases.

One caveat: I've not done any performance testing on this, so I don't really know whether selecting all results before filtering them in the database causes performance problems for really large result sets. It certainly seems that it could, but then again, how else could you sort the results and grab a set of rows in the "middle" without a technique similar to this? I suppose if I were really enterprising I could look under the covers of some other database whose code is open source, and which has a simple syntax for limiting rows, to find out how databases internally handle and perhaps optimize this kind of thing, but it is late and I'm already going to hate myself for staying up this late as it is.
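If you're not using Hibernate, the same pattern works from plain JDBC with bind variables. Here's a quick sketch against the example table above (the helper method and names are illustrative, not from Hibernate):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Returns rows first+1 through last (e.g. first=40, last=50 yields rows 41-50)
// of people matching the last-name pattern, using the nested-rownum query above.
public static List<String> findPage(Connection conn, String pattern, int first, int last)
        throws SQLException {
    String sql =
        "select * from" +
        " ( select row_.*, rownum rownum_ from" +
        "     ( select * from people p" +
        "       where lower(p.last_name) like ?" +
        "       order by p.last_name, p.first_name ) row_" +
        "   where rownum <= ? )" +
        " where rownum_ > ?";
    try (PreparedStatement ps = conn.prepareStatement(sql)) {
        ps.setString(1, pattern);
        ps.setInt(2, last);   // upper bound, inclusive
        ps.setInt(3, first);  // lower bound, exclusive
        try (ResultSet rs = ps.executeQuery()) {
            List<String> names = new ArrayList<String>();
            while (rs.next()) {
                names.add(rs.getString("last_name") + ", " + rs.getString("first_name"));
            }
            return names;
        }
    }
}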

Display Error Messages For Multiple Models in Rails

Posted on May 22, 2006 by Scott Leberknight

Normally in Rails when you want to display validation error messages for a model object, you simply use the error_messages_for() helper method. For simple sites this is usually just fine, as it displays a message stating that something went wrong along with a list of validation error messages. If you use the Rails helper methods to generate your HTML controls, then fields with problems are also wrapped in a div element that can be styled to indicate the errors. So you have some explanation of the validation errors, generally at the top of the page, and the fields where validation errors occurred can be styled as such, for example with a red outline or a red background or whatever makes it clear to users there is a problem with the field.

This works very well when you are validating a single model object. But what if you are validating multiple models? You could have a separate error_messages_for() for each model object. That will work, but it is ugly, since you'll have a separate list of error messages for every model object you are validating. I searched the Rails API and could not find a method to display errors for multiple models, so I wrote one that is based on the error_messages_for() method. Basically I copied Rails' error_messages_for() method and then modified it to display messages for multiple models.

It works like this. You pass in an array of object names for which to collect and display errors, instead of a single object name. The first object name is assumed to be the "main" object (e.g. the "master" in a master/detail relationship) and is used in the error explanation should any validation errors occur. For example, assume you have a page that allows you to create a stock watch list and also add stock ticker symbols to that watch list. The "main" object here should be the watch list object, and if validation errors occur then the message that gets printed is "2 errors prohibited this watch list from being saved." This makes some sense, at least to me, since the watch list is the main thing you are saving; the stock ticker symbols can be entered if the user wants, but are not required since they could be added later. As with the Rails error_messages_for() method, you can pass in additional options.

That's pretty much it, except for the code. So here is the code:

def error_messages_for_multiple_objects(object_names, options = {})
  options = options.symbolize_keys
  object_name_for_error = object_names[0]
  all_errors = ""
  all_errors_count = 0
  object_names.each do |object_name|
    object = instance_variable_get("@#{object_name}")
    if object && !object.errors.empty?
      object_errors = object.errors.full_messages.collect { |msg| content_tag("li", msg) }
      all_errors_count += object_errors.size
      all_errors << object_errors.join
    end
  end

  if all_errors_count > 0
    content_tag("div",
      content_tag(
        options[:header_tag] || "h2",
        "#{pluralize(all_errors_count, "error")} prohibited this" \
        " #{object_name_for_error.to_s.gsub("_", " ")} from being saved"
      ) +
      content_tag("p", "There were problems with the following fields:") +
      content_tag("ul", all_errors),
      "id" => options[:id] || "errorExplanation",
      "class" => options[:class] || "errorExplanation"
    )
  else
    ""
  end
end

The code works like this. First we extract the name of the "main" object as the first element of the object_names array. Next we loop through all the model objects, checking whether each has errors. If there are errors, we collect them as a string of li tags and append it to the all_errors string. Once we've checked all the objects, and if there were any errors, we wrap all_errors in a div containing a header with the main error message, a short explanatory paragraph, and an unordered list holding all_errors. If there were no errors at all we simply return an empty string. That's it. Now you can easily display a list of validation errors for multiple model objects, using code like this: error_messages_for_multiple_objects(["watch_list", "stock1", "stock2"]). Of course, if we have an unknown number of stocks, we could construct the array of object names first and then pass it into the method, which would be more flexible.

JavaOne Summary

Posted on May 22, 2006 by Scott Leberknight

I am sitting in Oakland Airport waiting for my plane back home, so I thought I'd write a quick summary of JavaOne, which I attended this past week. I went to the 2003 and 2004 editions of JavaOne, and after that had decided not to come back, as the quality of the technical content was very low, especially compared to boutique conferences like No Fluff Just Stuff where almost every speaker is a book author, a well-known industry expert, an author of some popular open-source framework, or simply an excellent speaker and expert in the topic. But one of my friends went in 2005 (he was lucky enough to get a free ticket) and said that Sun had done a lot to improve the technical content and speaker quality; mostly it seemed they started inviting No Fluff speakers and other industry experts. So this year, after looking at the session descriptions and seeing a bunch of No Fluff speakers plus people like Rod Johnson, Josh Bloch, and Neal Gafter, I decided to give it a try again. There's certainly no question JavaOne is always a good time with good parties, lots of food, lots of drinking - it really is a vacation!

Overall the conference was pretty good, and the speakers were in general very good. Then again, I did attend mostly sessions given by speakers I already knew were good and avoided the vendor-driven talks on EJB 3, JSF, and other such things that I never plan to actually use in a real project. I only attended the conference opening General Session on Tuesday morning, and found it to be quite anemic. Even the announcement that Sun planned to open source Java was subdued and not really much of an announcement - in fact most people sort of looked around and seemed to be asking "Did they just say they were going to open source Java?" since the way they announced it was so, well, lame. Much of the conference centered around the "Compatibility Matters" theme and the ubiquitous "Ease of Use" mantra, also known as "Ode to Visual Basic Programmers."

It continues to amaze me that the people doing the EJB 3 specification don't really seem to have learned much from Spring and Hibernate. Oh, they say they have a POJO-based programming model, inversion of control, dependency injection, and AOP, but in reality they are very limited and basic and don't approach the power that Spring provides, for example. There are annotations all over the place, which do remove a lot of the need for mounds of XML configuration code, but the "POJOs" are now littered with EJB imports, since you need to import the annotations you plan to use. So if you import the EJB annotations, are the POJOs really POJOs? Adding the @Stateless annotation to get yourself a stateless session bean still ties you to an EJB container. In Spring you are not tied to anything, and you generally don't have framework or vendor-specific imports in your classes, and most definitely not in your interfaces. And because the new JPA (Java Persistence API) does not include a "Criteria" API nor any notion of a second-level cache a la Hibernate, why in the world would I choose it over Hibernate 3?

I did attend some pretty interesting "non-standard" talks on things like JRuby, grid computing, TestNG, and Spring. I say "non-standard" because they are not the typical vendor-driven talks and are about topics that are not "standard" Java EE. It is refreshing that Sun has finally allowed these types of talks and has realized that not all Java developers are using the "standard" Java EE technologies as found in the specifications. Other interesting talks included one about new JVM features designed to permit dynamically typed languages like Ruby and Python to run on the JVM; and the new scripting language features built into Java SE 6. Initially I thought the addition of scripting support directly in the Java SE was a little silly, but after going to several talks I now at least see some potential benefits to it and think it might actually be a Good Thing - only time will tell I suppose.

Overall I think I got my money's worth, and that even included an appearance at the "After Dark Bash" from the Mythbusters who shot some t-shirts at extremely high velocities across the Moscone Center! Assuming Sun keeps bringing back the good speakers and keeps permitting the "non-standard" topics, I just might keep going back!