Validating Domain Objects in Hibernate Part 1: Introduction

Posted on September 10, 2007 by Scott Leberknight

This is the first in a series of short blogs showing how you can use Hibernate to validate your domain objects. Here I'll provide a brief introduction to validating your domain objects using Hibernate and why it matters. In future installments I'll show you how to enable the validations you apply to your domain objects, how to create your own custom validators, and provide some tips about integrating the Hibernate validator into your applications.

Every system needs data validation. In Java-based web applications using MVC frameworks like Struts, Spring MVC, or JSF, most developers simply use their framework's built-in validation scheme. Each of the aforementioned frameworks provides a way to validate form data, but this is usually restricted to the web tier. Struts provides ActionForms with a validate() method you implement, or you can use the Jakarta Commons Validator; JSF uses custom tags like <f:validateLength/> in the presentation tier; and Spring MVC has a Validator interface that can easily be plugged into Spring MVC's SimpleFormController. These solutions are either tied directly to the web tier or would require a bunch of infrastructure (or plumbing) code to make them work across all tiers seamlessly.

This is fine in many cases, but what happens when you expose functionality via a web service (I won't get into a REST vs WS-Death Star argument here, OK?) and you still need data validation? In fact, you usually need the exact same validation for the domain objects. Now what? Many developers simply end up copy/pasting validation code. Ugh. Others invent their own validation framework to keep things DRY but end up making things more complicated, plus they are writing infrastructure code instead of business logic. Then along came Ruby on Rails, which made it trivial to declare validation on domain objects and have that validation automatically apply no matter where the domain objects are used, i.e. in web controllers, web services, etc. This was probably most Java developers' first experience with applying validation directly to domain objects, and they were jealous (or at least I was).

Fortunately Hibernate provides the Hibernate Validator which allows you to annotate your domain objects with validation constraints. For example:

@Entity
public class User extends BaseEntity {
	
    // id, version, etc. are defined in BaseEntity
    private String userName;
    private String firstName;
    private String lastName;
    private String ssn;
	
    @Required
    @Email
    public String getUserName() { return userName; }
    public void setUserName(String userName) { this.userName = userName; }
	
    @Required
    @Length(min = 2, max = 50)
    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    @Required
    @Length(min = 2, max = 50)
    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }
	
    @Pattern(regex = "[0-9]{3}-[0-9]{2}-[0-9]{4}")
    public String getSsn() { return ssn; }
    public void setSsn(String ssn) { this.ssn = ssn; }
}

In the above code, we are mandating the following validation rules on all User objects:

  • User name is required and must be a well-formed email address.
  • First name is required and must be between 2 and 50 characters in length.
  • Last Name is required and must be between 2 and 50 characters in length.
  • Social Security Number is not required, but if present must match the format NNN-NN-NNNN.
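
The @Pattern rule above is just ordinary java.util.regex matching under the hood. In plain Java, the SSN constraint amounts to something like the following (an illustrative sketch, not Hibernate Validator's actual implementation; note that a null value passes, which is why the SSN is optional):

```java
import java.util.regex.Pattern;

public class SsnCheck {
    private static final Pattern SSN = Pattern.compile("[0-9]{3}-[0-9]{2}-[0-9]{4}");

    // Mirrors the @Pattern constraint: null is allowed (the property is optional),
    // but a non-null value must match NNN-NN-NNNN exactly
    static boolean isValidSsn(String ssn) {
        return ssn == null || SSN.matcher(ssn).matches();
    }
}
```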

With these annotations, we can now have any User object validated automatically, and the same validation rules will apply no matter where the User object is used, i.e. the validation is where it should be - in the problem domain - and not tied to a specific application tier. In the next article, I'll show how to actually enable the Hibernate Validator and how to validate objects.

Spring Tip: Don't Explicitly Call Setter Methods

Posted on July 24, 2007 by Scott Leberknight

Spring has many setter methods in its classes. For example, take a look at the Hibernate support class HibernateTemplate and the Spring MVC base controller BaseCommandController. This tip is very simple:

Setter methods in Spring are for bean configuration only.

For example, say you are implementing a DAO and have extended the Spring support class HibernateDaoSupport which provides a HibernateTemplate for use in data access methods. You might have a method to search for blog entries like this:

public List findBlogEntriesByTitle(String title) {
    return getHibernateTemplate().findByNamedParam(
        "from BlogEntry e where e.title like :theTitle", "theTitle", "%" + title + "%");
}

Now let's say you have a really popular blog and want to limit the number of search results to 100. You check the HibernateTemplate JavaDocs and find there is a setMaxResults() method that looks promising. So you change your method to this:

public List findBlogEntriesByTitle(String title) {
    getHibernateTemplate().setMaxResults(100);  // Don't do this!
    return getHibernateTemplate().findByNamedParam(
        "from BlogEntry e where e.title like :theTitle", "theTitle", "%" + title + "%");
}

The above innocent-looking code has an insidious side-effect, which occurs because HibernateTemplate, like many Spring classes, is intended to be used in a thread-safe manner. In the example, calling setMaxResults sets the maximum number of results for all threads flowing through the HibernateTemplate after this method has been called, instead of just for the current Hibernate Session. While setting the maximum query results may not be a big deal, what if instead you did this:

public void updateAccountProfile(AccountProfile profile) {
    HibernateTemplate ht = getHibernateTemplate();
    int originalFlushMode = ht.getFlushMode();
    ht.setFlushMode(HibernateTemplate.FLUSH_EAGER);  // Don't do this!
    ht.update(profile);
    ht.setFlushMode(originalFlushMode);
}

This is much worse as now the Hibernate flush mode is changed for every thread using the HibernateTemplate and causes non-deterministic flushing behavior. I have seen a project encounter this very bug and it was initially difficult to track down since most of the time everything worked just fine. As an aside, you should rarely if ever need to explicitly flush a Hibernate Session in application code.

In general, setter methods in Spring classes are intended for initial bean configuration, and should not be called at runtime as they are generally not thread-safe. So, if you want to call a setter method, your best bet is to look for an alternate, thread-safe solution. In the above example, the solution could be to use a HibernateCallback which passes you a Hibernate Session on which you can set the flush mode only for that particular Session. In any case, making sure your team understands this could save some headaches down the road, especially for developers who are new to Spring and who are used to using setter methods to change object properties at runtime, not just as an initial configuration mechanism.
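
A per-Session version of the flush example above might look roughly like this (a sketch against the Spring 2.x Hibernate 3 support classes; here an explicit flush() inside the callback stands in for the eager flush mode, affecting only this one Session):

```java
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.springframework.orm.hibernate3.HibernateCallback;
import org.springframework.orm.hibernate3.support.HibernateDaoSupport;

public class AccountProfileDao extends HibernateDaoSupport {

    public void updateAccountProfile(final AccountProfile profile) {
        getHibernateTemplate().execute(new HibernateCallback() {
            public Object doInHibernate(Session session) throws HibernateException {
                session.update(profile);
                session.flush();  // flush this Session only; the shared template is untouched
                return null;
            }
        });
    }
}
```

AccountProfile here is the domain class from the example above.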

Enforce Required Dependencies in Spring 2.0

Posted on July 10, 2007 by Scott Leberknight

Spring 2.0 added the @Required annotation that allows you to define which bean properties are required to be injected. In combination with the RequiredAnnotationBeanPostProcessor, Spring will blow up at application startup if any dependencies are not satisfied. In my (admittedly limited) testing of this feature, only one unsatisfied dependency is reported at a time - in other words Spring fails fast at the first missing dependency. In any case, the following is all you need to do to enable this feature.

First, annotate the setter methods for required properties.

// In some class that requires a UserDao to be injected.
@Required
public void setUserDao(UserDao userDao) {
    this.userDao = userDao;
}

Second, in one of your Spring application context files, add the following:

<bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor"/>

That's it. If you want, you can configure the RequiredAnnotationBeanPostProcessor to look for a different annotation type - for example, maybe you want your own @Injected annotation:

// In some class that requires a UserDao to be injected.
@Injected
public void setUserDao(UserDao userDao) {
    this.userDao = userDao;
}
<bean class="org.springframework.beans.factory.annotation.RequiredAnnotationBeanPostProcessor">
    <property name="requiredAnnotationType" value="com.acme.annotation.Injected"/>
</bean>

Overall this is a nice feature that might save you some time debugging a missing dependency, while at the same time making your code a little more explicit, since the annotated setter methods tell the reader something: that this is a "special" property which must be set before the class can successfully be used.

Do You Unit Test Getters and Setters?

Posted on May 31, 2007 by Scott Leberknight

Since Java has no native syntax for declaring properties, you have to write explicit getter and setter methods - or, hopefully, you never write them at all and have your IDE generate them. The question is whether you should write explicit tests for these methods. If you measure your unit test coverage, getters and setters are normally exercised in the course of all your other tests; for example, if you use an ORM tool like Hibernate, it will call the setters when populating objects and the getters when reading object state to persist changes. But most people don't assert every single property in the course of testing their data access tier, so much of the time the actual behavior is not actually being tested.

Since getters and setters are explicit code in your application, I think they should be tested. But there is no way I am going to write individual tests for every single getter and setter method. So the other day I wrote a test utility that uses reflection to verify that values passed to the setter methods are the values returned by the corresponding getters. Some people think this is cheating or just playing around with test coverage numbers. :-) Tools like Findbugs, as does Josh Bloch in Effective Java, consider returning a reference to a mutable object from a getter to be a code smell. While true to an extent, I want to know how many Java developers really create defensive copies in getter and/or setter methods. Not many, I bet. As a matter of practicality, it just doesn't matter most of the time. I've never once had a problem where a Date, which is a mutable object, returned by a getter was changed and caused weird issues.

Most Java developers know that objects returned by getters are meant for reading, and that if you want to change the value, you explicitly call the setter. There are exceptions of course: if a collection, map, or array is returned, sometimes you are expected (or required) to manipulate the returned mutable collection object directly. One of those cases is when using Hibernate, as I described in an earlier post. In any case, since it is possible for someone to manually mess up a getter or setter, they should be tested. In languages that provide explicit support for properties this whole discussion is moot, but this is Java, so let's get over it and just test. (As the mantra goes, "test everything that could possibly break.")

Ok, some code. The following is all we have to do to test basic getter/setter behavior of an object:

@Test
public void testProperties() {
    assertBasicGetterSetterBehavior(new MyBean());
}

This tests all the getter/setter pairs in the MyBean class. The assertBasicGetterSetterBehavior method is statically imported from a class we created named PropertyAsserter, so we can just call it. This is the simplest usage, in which we use the java.beans package classes like Introspector and PropertyDescriptor to find all the read/write properties and test their behavior, automatically creating argument objects for the setter methods.
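
To see what the Introspector actually hands back, here is a minimal, stdlib-only illustration (the MyBean here is a stand-in, not the class from the test above):

```java
import java.beans.Introspector;
import java.beans.PropertyDescriptor;
import java.util.ArrayList;
import java.util.List;

public class IntrospectorDemo {

    public static class MyBean {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    // Returns the names of the read/write properties - the same set
    // a utility like PropertyAsserter would iterate over
    static List<String> readWriteProperties(Class<?> type) throws Exception {
        List<String> names = new ArrayList<String>();
        for (PropertyDescriptor pd : Introspector.getBeanInfo(type).getPropertyDescriptors()) {
            if (pd.getReadMethod() != null && pd.getWriteMethod() != null) {
                names.add(pd.getName());
            }
        }
        return names;
    }
}
```

The read-only "class" property (from Object.getClass) drops out because it has no write method, so only "name" survives.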

The core method is this one:

/**
 * See {@link #assertBasicGetterSetterBehavior(Object,String)} method. Only difference is that here we accept an
 * explicit argument for the setter method.
 *
 * @param target   the object on which to invoke the getter and setter
 * @param property the property name, e.g. "firstName"
 * @param argument the property value, i.e. the value the setter will be invoked with
 */
public static void assertBasicGetterSetterBehavior(Object target, String property, Object argument) {
    try {
        PropertyDescriptor descriptor = new PropertyDescriptor(property, target.getClass());
        Object arg = argument;
        if (arg == null) {
            Class type = descriptor.getPropertyType();
            if (DEFAULT_TYPES.contains(type)) {
                arg = DEFAULT_ARGUMENTS.get(DEFAULT_TYPES.indexOf(type));
            }
            else {
                arg = ReflectionUtils.invokeDefaultConstructorEvenIfPrivate(type);
            }
        }

        Method writeMethod = descriptor.getWriteMethod();
        Method readMethod = descriptor.getReadMethod();

        writeMethod.invoke(target, arg);
        Object propertyValue = readMethod.invoke(target);
        assertSame(property + " getter/setter failed test", arg, propertyValue);
    }
    catch (IntrospectionException e) {
        String msg = "Error creating PropertyDescriptor for property [" + property +
                "]. Do you have a getter and a setter?";
        log.error(msg, e);
        fail(msg);
    }
    catch (IllegalAccessException e) {
        String msg = "Error accessing property. Are the getter and setter both accessible?";
        log.error(msg, e);
        fail(msg);
    }
    catch (InvocationTargetException e) {
        String msg = "Error invoking method on target";
        log.error(msg, e);
        fail(msg);
    }
}

This method accepts a target object, the name of the property to test, and optionally an argument. If the argument is one of a list of default types, things like a Set or a List, we'll just use a default object like an empty Set or List as the argument. Otherwise we'll use the default constructor, which means one must exist. Then we invoke the setter with the supplied argument, or the one we created, and assert that the object returned by the getter is the exact same object, i.e. using == for comparison. (Making this assertion is what makes tools like Findbugs unhappy, which is why I disable the rule that checks for this "problem.") As you can see, we have a nice little subversive method named invokeDefaultConstructorEvenIfPrivate in our ReflectionUtils class that allows you to call private constructors. In Java you need this kind of thing for unit testing so you can keep private things private, as opposed to elevating them to default access just for unit tests. But what about the one-liner in our test example above? The following method is the one we saw earlier:

/**
 * See {@link #assertBasicGetterSetterBehavior(Object,String)} method. Big difference here is that we try to
 * automatically introspect the target object, finding read/write properties, and automatically testing the getter
 * and setter. Note specifically that read-only properties are ignored, as there is no way for us to know how to set
 * the value (since there isn't a public setter).
 * <p/>
 * Note also that array properties are ignored.
 *
 * @param target the object on which to invoke the getter and setter
 */
public static void assertBasicGetterSetterBehavior(Object target) {
    try {
        BeanInfo beanInfo = Introspector.getBeanInfo(target.getClass());
        PropertyDescriptor[] descriptors = beanInfo.getPropertyDescriptors();
        for (PropertyDescriptor descriptor : descriptors) {
            if (descriptor.getWriteMethod() == null) {
                continue;
            }
            if (descriptor.getPropertyType().isArray()) {
                continue;
            }
            assertBasicGetterSetterBehavior(target, descriptor.getDisplayName());
        }
    }
    catch (IntrospectionException e) {
        fail("Failed while introspecting target " + target.getClass());
    }
}

This is pretty simple, albeit verbose. We use the nifty java.beans.Introspector class to find the "properties," get the property descriptors, and then call another overloaded assertBasicGetterSetterBehavior method, which calls our original method with a null argument, meaning an argument will be created automatically. Using this technique it is simple to automatically test generic getters and setters by adding one test method containing one line of code. We have a bunch of other overloaded methods in PropertyAsserter, so you could choose to test only some of your properties automatically, for example if the getter and/or setter have side effects like validation and you need to explicitly test those cases.
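
The two-argument overload referenced in the Javadoc above never appears in the post; presumably it just delegates with a null argument, something like this sketch:

```java
/**
 * See {@link #assertBasicGetterSetterBehavior(Object, String, Object)}. Passing a null
 * argument asks that method to construct a suitable argument automatically.
 */
public static void assertBasicGetterSetterBehavior(Object target, String property) {
    assertBasicGetterSetterBehavior(target, property, null);
}
```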

Dirt Simple Mock Objects In Groovy

Posted on May 30, 2007 by Scott Leberknight

I've just started on a new project and of course we want to ensure all, and I really mean all, our code is tested. Now, this being a Java project many people assume 80% is pretty good and that getting upwards of 90% coverage and beyond takes a superhuman effort. In the past I would have agreed. Now that I am learning Groovy, however, I no longer agree and now think it is not only possible to get above 90% coverage and close to 100%, but quite easy.

First some background on why normally 80% is acceptable. Well, it is simply that Java is not very flexible when it comes to mocking certain types of objects. For example, try mocking a standard JDK class that is marked as final and making it do your bidding. Not easy. It is easy to mock classes that implement an interface and even non-final classes using things like jMock and EasyMock.

Let's take an example. We have some reflective code that uses the java.lang.reflect.Field class' get() method. This method throws IllegalArgumentException and IllegalAccessException, the latter of which is a checked exception. Since there is nothing meaningful we can really do about an IllegalAccessException, we catch it and rethrow it as an IllegalStateException, which is a runtime (unchecked) exception. The problem is that covering the case when an IllegalAccessException is thrown is not something you can mock using a tool like EasyMock, even the cool EasyMock Class Extension, since the Field class is marked final.
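
The catch-and-rethrow just described looks something like this in plain Java (the class and method names here are illustrative):

```java
import java.lang.reflect.Field;

class Secret {
    private String token = "s3cr3t";
}

public class FieldReader {

    // Field.get throws the checked IllegalAccessException; since callers can do
    // nothing useful with it, we rethrow it as an unchecked IllegalStateException
    static Object valueOf(Field field, Object target) {
        try {
            return field.get(target);
        } catch (IllegalAccessException e) {
            throw new IllegalStateException("Cannot access " + field.getName(), e);
        }
    }
}
```

Reading Secret's private token field without first calling setAccessible(true) triggers exactly the IllegalAccessException path that is so hard to reach with a mocking tool.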

Last week I found something called jmockit which is very cool and essentially uses some Java 5 class instrumentation magic (you must specify a javaagent to the JVM when running your tests) to replace byte code at runtime. This allows you to literally mock anything at all, except for the minor issue that if you want to mock methods in a JDK class like Field all the test classes must be loaded by the boot class loader, which means you have to use the -Xbootclasspath startup option. This starts to really become intrusive, complicate your build process, and generally make things a lot more complex. Even needing to run the JVM with -javaagent:jmockit.jar is intrusive and somewhat complicated.

Enter Groovy. I am a total beginner at Groovy having only attended some sessions at various conferences like No Fluff Just Stuff and JavaOne, a little tinkering with the Groovy shell, and having purchased Groovy in Action. Within an hour or so of tinkering I was able to produce an IllegalAccessException from a mocked Field using Groovy. The following code is probably crap (again being a Groovy newbie) but is ridiculously simple and readable nonetheless:

import groovy.mock.interceptor.MockFor
import java.lang.reflect.Field

class SimpleUnitTest extends GroovyTestCase {

    void testMockingField() {
        def fieldMock = new MockFor(Field)
        def mb = new MyBean()
        fieldMock.demand.get(mb) { throw new IllegalAccessException("haha") }
        fieldMock.use {
            Field f = MyBean.class.getDeclaredField("name")
            shouldFail(IllegalAccessException) {
                f.get(mb)
            }
        }
    }

}

class MyBean {	
    def name = "hello, groovy"
}

That's it. It is ridiculous how easy Groovy makes mocking any object. The steps are basically:

  1. Create mock using MockFor
  2. Tell mock what to do using demand
  3. Execute code under test in a closure passed to use

Developers used to the power of dynamic languages like Ruby won't think much of this, of course, since creating mocks in those languages is even simpler, but this is something new for Java developers. Now there is really almost no reason why you can't test literally all of your Java code, and more importantly, do it very easily.

Using jps to Find JVM Process Information

Posted on April 25, 2007 by Scott Leberknight

Recently I wrote a little Rails app that allows people to remotely start/stop/restart CruiseControl and view the output logs on my development workstation, since people don't have access to my workstation directly and from time to time we've had CruiseControl hang for some reason or another. (The shared resource that was available for running CruiseControl is totally overloaded with processes and was taking an hour to run a complete build with tests and code analysis reports while it takes about nine minutes on my machine, so we've decided to keep it on my machine!) Anyway, to control the process I needed to first find the CruiseControl process.

The first iteration used the ntprocinfo tool combined with a simple grep to find all "java.exe" processes running on the machine. This is obviously crap, since it only works consistently and correctly if CruiseControl is the only Java process running. Then I remembered a tool Matt had shown me a while back that lists all the JVMs running on a machine: jps - the Java Virtual Machine Process Status Tool.

This tool was introduced in JDK 1.5/5.0 (or whatever it's called) and lists all the instrumented JVMs currently running on localhost or a remote machine. By default (no command line options) the output shows the local VM identifier (lvmid) and a short form of the class name or JAR file that the VM was started from on the local machine. The lvmid is normally the operating system's process identifier for the JVM. For example:

C:\dev> jps
624 startup.jar
3508 startup.jar
3348 Jps
2444 CruiseControlWithJetty

This is just what I want since now I can do a simple grep on this output and find which process is the CruiseControl process, and then I can use ntprocinfo to kill the process. Easy.

There are command line switches that provide more process detail such as the fully qualified class name (FQCN) or full path to the application JAR file, the arguments to the main method, and JVM arguments. For example, the "l" switch (a lowercase L) gives the FQCN or full path to the class or JAR:

C:\dev> jps -l
624 C:\dev\eclipse\startup.jar
3508 C:\dev\radrails\startup.jar
1948 sun.tools.jps.Jps
2444 CruiseControlWithJetty

Or, the "l" option combined with the "v" option lists FQCN/JAR file and the VM arguments:

C:\dev> jps -lv
624 C:\dev\eclipse\startup.jar -Xms256M -Xmx512M
3508 C:\dev\radrails\startup.jar
1316 sun.tools.jps.Jps -Dapplication.home=C:\dev\java\jdk1.5.0_10 -Xms8m
2444 CruiseControlWithJetty -Xms128m -Xmx256m -Djavax.management.builder.initial=mx4j.server.MX4JMBeanServerBuilder

You can also list the java processes on a remote host, though that machine must have a jstatd process running.

Last but not least, I saw the following warnings in the jps documentation.

  • "This utility is unsupported and may not be available in future versions of the JDK."
  • "You are advised not to write scripts to parse jps output since the format may change in future releases. If you choose to write scripts that parse jps output, expect to modify them for future releases of this tool."

Oh well, nothing is forever, right? By the time they get around to changing the output format or removing this very useful tool, we'll hopefully have obtained a decent shared build machine where CruiseControl doesn't hang and which everyone can access to fix any problems that arise. As to why CruiseControl hangs sometimes, I have no idea.

Rails Without a Database

Posted on April 07, 2007 by Scott Leberknight

Last week I needed to write a simple web application that would allow others on my team to control a process on my workstation when I wasn't around. Essentially I wanted to allow people to start, stop, restart, get process information about, and view the log of, a process running on my workstation. Since I wanted it to be simple, and because I didn't want it to take more than a few hours, I chose to write it in Rails. This turned out to be ridiculously simple, as I ended up with two controllers - an authentication controller and the process-controlling controller - along with a few views. But since all I was doing was calling system commands from the process-controlling controller, there is no database in this application. (The "authentication" was just a hardcoded user name and password that I shared with my team, so no database stuff there!)

So what happens when you have a Rails application with no database? Well, it works functionally, i.e. when you are hacking your controllers and such, but the tests all fail! This is because Rails assumes you always have a database. A quick Google search led to an excellent resource, Rails Recipes by the Pragmatic Programmers, for writing and testing Rails apps with no database. One of the recipes is, conveniently enough, Rails without a Database! Basically, it describes how you can modify the Rails test_helper.rb file to remove all the database-specific setup and teardown code, and then you can run your tests with no database to speak of. Cool.

Let's Play "Who Owns That Collection?" With Hibernate

Posted on March 28, 2007 by Scott Leberknight

If you have used Hibernate and mapped a one-to-many relationship you've probably come across the "delete orphan" feature. This feature offers cascade delete of objects orphaned by code like the following:

Preference pref = getSomePreference();
user.getPreferences().remove(pref);

In the above code, a specific Preference is removed from a User. With the delete orphan feature, and assuming there is an active transaction associated with the session, the preference that was removed from the user is automatically deleted from the database when the transaction commits. This feature is pretty handy, but it can be tricky if you try to write clever code in your setter methods, e.g. something like this:

// Do not do this!
public void setPreferences(Set newPreferences) {
    this.preferences = newPreferences == null ? new HashSet<Preference>() : newPreferences;
}

Code like the above results in a HibernateException with the following message if you pass null into setPreferences and try to save the user object:

A collection with cascade="all-delete-orphan" was no longer referenced by the owning entity instance

What is happening here is that Hibernate requires complete ownership of the preferences collection in the User object. If you simply set it to a new object as in the above code sample, Hibernate is unable to track changes to that collection and thus has no idea how to apply the cascading persistence to your objects! The same error will occur if you pass in a different collection, e.g.:

user.setPreferences(new HashSet<Preference>());

So the point is that Hibernate's delete orphan feature is a leaky abstraction: it bleeds into your domain model object. This is pretty much unavoidable, but developers need to be aware of it lest they run into the error mentioned above.

So how can you avoid this problem? The only sure way that I know of is to make the setter method private, since passing any new collection or null results in the "owning entity" error. This way only Hibernate will use the setter method to load up user objects (it invokes the method reflectively after setting it accessible via the Reflection API). Then you could add a method addPreference to your code which is the public API for adding preferences. Anyone could of course use reflection to do the same thing Hibernate is doing, but then all bets are off as they are subverting your public API. For example:

public void addPreference(Preference p) {
    getPreferences().add(p);
    p.setUser(this);
}

This has the nice side effect of establishing the bi-directional relationship between user and preference, assuming your model allows bi-directional navigation. You could also add a null check if you are paranoid. Removing a preference from a user is equally simple. You can write a helper method removePreference or you could call the getter and then call remove as shown here:

user.getPreferences().remove(aPref);
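
A removePreference helper, mirroring addPreference above, might look like this sketch, which also undoes the child's side of the bi-directional link:

```java
public void removePreference(Preference p) {
    getPreferences().remove(p);
    p.setUser(null);  // clear the child's reference back to this user
}
```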

Essentially, you can operate on the collection returned by getPreferences as a normal Java collection and Hibernate will do the right thing. Since you are operating on the collection maintained and observed by Hibernate, it is able to track your changes, whereas replacing the collection wholesale makes Hibernate really mad, since it believes it is the sole proprietor of the collection, not you! For example, if you want to remove all the user's preferences you could write the following:

user.getPreferences().clear();

Note that all the above discussion refers to one-to-many relationships that specify the delete orphan cascade type; usually you will specify both "all" and "delete-orphan." In cases where you are only using the "all" cascade option, the semantics are quite different. Assuming the normal case where the "many" side of the relationship owns the relationship (i.e. you used inverse="true" in your mapping file, or @OneToMany(mappedBy = "user") if using annotations), then you must explicitly delete the child objects, as Hibernate will only track that side of the relationship. For example, if cascade is "all" and you remove a preference from a user and then save the user, nothing happens! You would need to explicitly delete the preference object, as shown here:

// Assume only using cascade="all" and an inverse="true" mapping in User.hbm.xml
user.getPreferences().remove(aPref);  // does not cause database delete
session.delete(aPref);                // this causes the database deletion

One last thing to note in the above is that you must remove aPref from the user, or else Hibernate will throw an exception stating that the preference would be re-saved upon cascade! So if the User object is in the Session, remember you need to undo both sides of the relationship for things to work properly.

LoggerIsNotStaticFinal

Posted on March 27, 2007 by Scott Leberknight

For anyone who uses PMD, the title of this blog post is exactly what shows up in the PMD error list when a logger is not declared static and final. Specifically, the LoggerIsNotStaticFinal rule simply says that a log should be declared static and final. I also like to make sure loggers are private as well. For example:

// Jakarta Commons Logging
private static final Log log = LogFactory.getLog(MyClass.class);

The above code also shows another good practice, which is to pass the Class object to the getLog() method, instead of a string. Why the java.util.logging.Logger class doesn't even provide a method accepting a Class object is simply beyond me. Why did the people who developed the java.util.logging package base their API on Log4j yet omit some of the most useful parts of it? Oh well.
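
For comparison, with java.util.logging you have to build the logger name yourself, since Logger.getLogger accepts only strings:

```java
import java.util.logging.Logger;

public class MyClass {
    // java.util.logging has no getLogger(Class) overload, so pass the class name
    private static final Logger log = Logger.getLogger(MyClass.class.getName());

    static String loggerName() {
        return log.getName();
    }
}
```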

Now to the point: why is it good practice to declare loggers private, static, and final? A logger is an internal implementation detail, so it should be private. You only need one logger for all instances of a class, hence static. And a logger should not be able to be replaced, thus final. So if this is good, what's not so good (at least in my opinion)? Simple - any logger that is not private, static, and final, and which doesn't pass a Class object to getLog()! For example, consider this common bit of code, declared in some base class:

// Not so good logger declaration
protected final Log log = LogFactory.getLog(getClass());

Why is this bad? Well, it isn't static for one thing. For another, it uses getClass() to obtain the log. At first this seems efficient since now all subclasses automatically inherit a ready-made log of the correct runtime type. So what's the issue here? The biggest problem with loggers declared in this manner is that you now get all the logging from the superclass mixed in with the logging from the subclass, and it is impossible in the log output to discern which messages came from which class unless you look at the source. This is really annoying if the superclass has a lot of logging that you don't want to see, since you cannot filter it out.

Another problem is that your ability to set log levels differently goes away, for example if a subclass resides in a different package than the superclass. In that case, if you try to filter out logging from the superclass, you can't because the actual runtime class was used to obtain the logger.

Last, having a protected logger just seems to violate basic object-oriented principles. Why in the world should subclasses know about an internal implementation detail from a superclass that is a cross-cutting concern, no less? Anyway, though this is a silly little rant it really is annoying when you extend a superclass that declares a protected logger like this.

JSP 2.0 Expressions Should Escape XML By Default

Posted on July 20, 2006 by Scott Leberknight

Jeff has posted a nice blog about cross-site scripting (XSS) vulnerabilities in JSP 2.0 expressions. With JSP 2.0 you can use the following to emit the description of a "todo" item:

${todo.description}

That's pretty nice. What happens when someone has entered a description like this?

<script type="text/javascript">alert('F#$@ you!');</script>

Well, it executes the JavaScript and pops up a nice little message to you. Of course more malicious things could be injected there but you get the idea. JSTL's c:out tag, by default, escapes XML content so the following code will not execute the embedded JavaScript but will simply display it as part of the web page.

<c:out value="${todo.description}"/>

The nice thing here is that the default behavior of c:out is to escape XML content. If you need to override this and not escape XML content, you can simply write the following.

<c:out value="${todo.description}" escapeXml="false"/>

My question is this: Why in the world did the expert group on the JSP 2.0 JSR decide to make not escaping XML content the default for EL expressions, when they made the opposite decision for c:out? As Jeff alluded to in his post, it is too much of a hassle to try and determine where it is safe to use the JSP 2.0 expression syntax and where you need to ensure potential XML content is escaped. So the safest bet is to use c:out or the JSTL 1.1 function escapeXml, which looks like this.

${fn:escapeXml(todo.description)}

Given the choice between c:out and fn:escapeXml() I probably would prefer the latter as it seems a tad bit cleaner and more in the spirit of JSP 2.0 expressions. But I would prefer instead that the JSP expression language escaped XML content by default rather than have to choose which XML-escaping syntax to use.
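
In plain Java terms, the escaping that both c:out and fn:escapeXml perform is roughly the following (a sketch of the idea, not the actual JSTL implementation):

```java
public class XmlEscape {

    // Replace the five XML-significant characters with entities,
    // so injected markup renders as text instead of executing
    static String escapeXml(String s) {
        StringBuilder sb = new StringBuilder();
        for (char c : s.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&#034;"); break;
                case '\'': sb.append("&#039;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

With this kind of escaping, the malicious description above becomes inert text on the page.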