iPhone Bootcamp Day 5

Posted on December 05, 2008 by Scott Leberknight

Today is Day 5 — the final day — of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway. The last day of a Big Nerd Ranch bootcamp is a half-day: you have breakfast, have class until lunch, eat lunch, and then shuttle back to the airport. Unfortunately for me, my flight isn't until 7pm so I have a lot of time to write this entry!

See here for a list of each day's blog entries.

Preferences

First up on this last day of iPhone bootcamp is preferences, which allow an app to keep user settings in a centralized location. While you can store user settings yourself using a custom UI, the easiest way is to use the iPhone's built-in Settings app. The Settings app is the central location where apps can store user settings. It is also where all the other common iPhone settings are, which is convenient.

You work with the NSUserDefaults object to store app settings based on the app's bundle ID, for example com.acme.MyKillerApp. You'll also want to ensure you provide "factory default" settings using the registerDefaults method; these settings will be used if the user has not changed anything. NSUserDefaults is basically a dictionary of key-value pairs, and you obtain the standard user defaults using the standardUserDefaults method. From that point you can write user settings using methods like setObject:forKey, setInteger:forKey, and so on, and you can remove settings using removeObjectForKey. Similarly, you read settings using methods like objectForKey, integerForKey, and so forth.
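As a minimal sketch of what this looks like in code (the setting key names here are invented for illustration):

```objectivec
// Register "factory default" settings early, e.g. in applicationDidFinishLaunching:
NSDictionary *factoryDefaults = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:1], @"minimumNumber",
    [NSNumber numberWithInt:100], @"numberRange",
    nil];
[[NSUserDefaults standardUserDefaults] registerDefaults:factoryDefaults];

// Read a setting (falls back to the registered default if the user hasn't changed it)
NSInteger min = [[NSUserDefaults standardUserDefaults] integerForKey:@"minimumNumber"];

// Write a setting
[[NSUserDefaults standardUserDefaults] setInteger:42 forKey:@"minimumNumber"];
```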

As mentioned earlier, the Settings app gives users one uniform place to set preferences for all apps, so users know where to look. In addition, managing the settings separately from your application saves screen real estate, which is important since the screen isn't all that big. You define the settings for your application using a Settings bundle, which is a standard Property List. You use special property keys, such as PSToggleSwitchSpecifier, to define the settings for your application. The Settings app uses the Settings bundle and property list to create the settings UI for your app. For example, if you define settings using a PSToggleSwitchSpecifier, a PSTextFieldSpecifier, and a PSMultiValueSpecifier, the user will see a toggle switch, a text field, and a multi-value selection list when she goes to edit your app's settings.
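For instance, a single toggle-switch entry in the Settings bundle's property list might look roughly like this (the Title and Key values are invented; the Type and structure follow the specifier scheme described above):

```xml
<dict>
    <key>Type</key>
    <string>PSToggleSwitchSpecifier</string>
    <key>Title</key>
    <string>Play Sounds</string>
    <key>Key</key>
    <string>playSounds</string>
    <key>DefaultValue</key>
    <true/>
</dict>
```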

For the lab exercise, we modified our Random Number app (from Day 4 during the Web Services section) to have user settings for how the random numbers are generated — for example minimum number and the range of numbers. It is quite easy to add user settings to your app on the iPhone using the Settings app and bundle.

Instruments

Joe briefly talked about using the Instruments app to do powerful debugging, for example to detect memory leaks in your app. He then demonstrated using the Leaks tool within Instruments to track a memory leak — which he had previously introduced intentionally — in the Tile Game app. Leaks was able to pinpoint the exact line of code where a leaked object (i.e. it was not released) was allocated. This is pretty cool.

Joe also showed what leaks look like when they come from C code versus Objective-C code. C-code allocation leaks (i.e. memory allocated using malloc) show up in Instruments as just 'General Block', and from what I gather the exact cause is probably not much fun to track down. Leaks in Objective-C code are easier, and as I just mentioned, Leaks can show you the exact line of code where a leaked object was allocated.

Networking

By this time, it is getting closer to lunch, and the end of the class. Sad, I know. Anyway, probably one of the hardest and broadest topics was saved for last, given that you could spend an entire course on nothing but network programming. On the iPhone a key concept when doing network programming is "reachability." Reachability does not mean that a network resource is definitely available; rather it means the resource is "not impossible" to reach. Another important thing on the iPhone is figuring out what type of connection you have, for example EDGE or 3G or Wi-Fi. You might choose to do different things based on the type of connection.

With the SystemConfiguration framework you can determine if the outside world is reachable, again with the caveat that "reachable" means "not impossible to reach." When coding you work with "reachability references" and "reachability flags" which describe the type of connection, for example reachable, connection required, is wide-area network, and so on.
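A rough sketch of checking reachability with those references and flags (the host name is made up for illustration):

```objectivec
#import <SystemConfiguration/SystemConfiguration.h>

// Create a reachability reference for a host and ask for its flags.
SCNetworkReachabilityRef reach =
    SCNetworkReachabilityCreateWithName(NULL, "www.example.com");
SCNetworkReachabilityFlags flags;
if (SCNetworkReachabilityGetFlags(reach, &flags)) {
    BOOL reachable = (flags & kSCNetworkReachabilityFlagsReachable) != 0;
    BOOL needsConnection = (flags & kSCNetworkReachabilityFlagsConnectionRequired) != 0;
    BOOL onWWAN = (flags & kSCNetworkReachabilityFlagsIsWWAN) != 0;  // EDGE/3G rather than Wi-Fi
    // "not impossible to reach" roughly means reachable && !needsConnection
}
CFRelease(reach);
```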

We then learned a bit about Bonjour, which is basically a zero-configuration network service discovery mechanism that works on all platforms, has implementations in many languages, and is easy to use. For example, Bonjour makes it ridiculously easy to find printers on a network, as opposed to the way you have to do it in other operating systems. You can use Bonjour on the iPhone to discover and resolve Bonjour services. You use NSNetServices and NSNetServiceBrowser to accomplish this, and you specify a delegate to receive notifications when services appear and disappear.
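A sketch of browsing for Bonjour services from inside some controller object (the service type shown is just an example; the browser's delegate method fires as services are discovered):

```objectivec
// Start browsing for services of a given type on the local network.
NSNetServiceBrowser *browser = [[NSNetServiceBrowser alloc] init];
[browser setDelegate:self];
[browser searchForServicesOfType:@"_http._tcp." inDomain:@"local."];

// Delegate callback when a service appears:
- (void)netServiceBrowser:(NSNetServiceBrowser *)browser
           didFindService:(NSNetService *)service
               moreComing:(BOOL)moreComing
{
    [service setDelegate:self];
    [service resolveWithTimeout:10.0];  // resolve to obtain the host and port
}
```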

In addition to Bonjour, you can also send and receive data over the network as you would expect. To send and receive data you can either use the C-based Core Foundation socket interface or the stream-based CFStream interface. You register read and write callbacks with the main iPhone run loop, which calls your functions when network events occur.

Network programming is really hard, since there are so many things that can go wrong and you need to deal with many if not all of them. Joe recommended that before jumping right into hardcore networking, developers should become familiar with the concepts and recommended the Beej's guide to network programming as a good primer. Given that I've really only done simple socket programming in higher-level APIs like Java's, and have not really done much of it, I'll need to check that out. And so this concludes this episode of iPhone bootcamp blog, since it is now time for lunch and, sadly, the course is over!

Random Thoughts

This week has been a really intensive introduction to iPhone programming. We've gone from a very simple app on Day 1 and covered all kinds of things like form-based UIs and text manipulation; Core Location; Core Graphics; view transitions and Core Animation; using the iPhone camera and accelerometer; retrieving web-based data; manipulating Address Book data; setting user preferences; and finally networking and debugging using Instruments. We've basically covered an entire book's worth of content and written about 10 iPhone applications ranging from simple to not-so-simple. If you are interested in programming the iPhone, I definitely recommend checking out the iPhone bootcamp at Big Nerd Ranch. Because the OpenGL stuff we did on the iPhone was so cool, I'm thinking my next course to take might be the OpenGL bootcamp but that probably won't happen until next year sometime! With my new knowledge I plan to go home and try my hand at creating a snowflake or snowstorm application that my daughters can play with. If I actually get it working maybe I'll write something about it.

iPhone Bootcamp Day 4

Posted on December 04, 2008 by Scott Leberknight

Today is Day 4 of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway. Unfortunately that means we are closer to the end than to the beginning.

See here for a list of each day's blog entries.

View Transitions

Since we had not completely finished up the Core Graphics lab from Day 3, we spent the first hour completing the lab exercise, which was to build a "Tile Game" application where you take an image, split it into tiles, and the user can move the tiles around and try to arrange them in the correct way to produce the full image. Actually you don't really split the image; you use a grid of custom objects — which are a subclass of UIControl — and translate the portion of the image being split to the location in the grid where the tile is located. When a user moves the tiles around, each tile still displays its original portion of the image. So, you are using the same image but only displaying a portion of the image in each tile.

After completing the lab, Joe went over view transitions. View transitions allow you to easily add common animations (like the peel and flip animations you see on many iPhone apps) to your applications to create nice visual effects and a good user experience. For example, in our Tile Game app, when you move a tile it abruptly jumps from the old location to the new location with no transition of any kind. Also, when you touch the "info" button to select a new tiled-image there is no animation. It would be better to make the tiles slide from square to square, and to flip the tile over to the image selection screen. View transitions let you do this pretty easily. (Joe mentioned that later we'll be covering Core Animation which is more powerful and lets you perform all kinds of advanced animations.)

So, to reiterate, view transitions are meant for simple transitions like flipping, peeling, and so on. There are several "inherently animatable" properties of a UIView: the view frame, the view bounds, the center of the view, the transform (orientation) of the view, and the alpha (opacity) of the view. For example, to create a pulsing effect you could perform two alpha animations one after the other: the first one would transition the view from completely opaque to completely transparent, and the second animation would transition back from completely transparent to completely opaque. For the View Transition lab we used a "flip" animation to flip between views.

You use class methods of UIView to begin animations, set up view transitions, and commit the transitions. You use beginAnimations to begin animations. Then you define the animations using methods like setAnimationDuration and setAnimationTransition, which set the length of time over which the animation occurs and the type of animation such as peel or flip, respectively. Then you perform actions like add and remove subviews or perform transitions on any of the "inherently animatable" properties in an "animation block." To start the animations on screen you call commitAnimations after the animation block. This seems similar to how transactions work in a relational database, in that you begin a transaction, define what you want done, and then when satisfied you commit those "changes." Joe mentioned that Core Graphics essentially uses a hidden or offscreen view to store the actions and only shows the animations and actions when the animations are committed.
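The begin/define/commit sequence can be sketched like this (containerView, oldView, and newView are assumed names for views the controller already has):

```objectivec
// Flip from one subview to another inside an animation "block."
[UIView beginAnimations:@"flip" context:NULL];
[UIView setAnimationDuration:0.75];
[UIView setAnimationTransition:UIViewAnimationTransitionFlipFromLeft
                       forView:containerView cache:YES];
[oldView removeFromSuperview];
[containerView addSubview:newView];
[UIView commitAnimations];  // nothing animates until this "commit"
```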

Core Animation

Next up: Core Animation. While view transitions are for simple animations, with Core Animation you can animate almost anything you want. The basic idea here is that when using Core Animation, the animation takes place on a CALayer rather than the UIView. There is one CALayer per UIView, and the CALayer is a cached copy of the content in the UIView. As soon as an animation begins, the UIView and CALayer are interchanged, and all drawing operations during the animation take place on the CALayer; in other words when an animation begins the CALayer becomes visible and handles drawing operations. After the animation ends the UIView is made visible again and the CALayer is no longer visible, at which point the UIView resumes responsibility for drawing.

You create animations using subclasses of CAAnimation and adding them to the CALayer. CABasicAnimation is the most basic form of animation: you can animate a specific property, known as a key path, to perform a linear transition from an original value to a final value. For example, using a key path of "opacity" you can transition an object from opaque to translucent, or vice versa.
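For example, a one-second fade-out using the "opacity" key path might look like this (someView is an assumed name for an existing view):

```objectivec
#import <QuartzCore/QuartzCore.h>

// Animate the layer's opacity linearly from opaque to transparent.
CABasicAnimation *fade = [CABasicAnimation animationWithKeyPath:@"opacity"];
fade.fromValue = [NSNumber numberWithFloat:1.0];
fade.toValue = [NSNumber numberWithFloat:0.0];
fade.duration = 1.0;
[someView.layer addAnimation:fade forKey:@"fade"];
```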

You can also combine multiple animations using a CAAnimationGroup, which is itself a subclass of CAAnimation (hey, that's the Composite design pattern for anyone who cares anymore). You can use CAKeyframeAnimation to perform nonlinear animations, in which you have one or more "waypoints" during the animation. In other words, with CAKeyframeAnimation you define transitions for a specific attribute — such as opacity, position, and size — that have different values at the various waypoints. For example, in the Core Animation lab exercise we defined a rotation animation to "jiggle" an element in the Tile Game we created earlier to indicate that a tile cannot be moved. The "jiggle" animation uses the CAKeyframeAnimation to rotate a tile left, then right, then left, then right, and finally back to its original position to give the effect of it jiggling back and forth several times.
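A sketch of such a "jiggle" using CAKeyframeAnimation, where the waypoint values rotate the layer back and forth (tileView and the specific angles are assumptions for illustration):

```objectivec
// Rotate left, right, left, and back to the original position.
CAKeyframeAnimation *jiggle =
    [CAKeyframeAnimation animationWithKeyPath:@"transform.rotation"];
jiggle.values = [NSArray arrayWithObjects:
    [NSNumber numberWithFloat:0.0],
    [NSNumber numberWithFloat:-0.1],  // radians
    [NSNumber numberWithFloat:0.1],
    [NSNumber numberWithFloat:-0.1],
    [NSNumber numberWithFloat:0.0],
    nil];
jiggle.duration = 0.4;
[tileView.layer addAnimation:jiggle forKey:@"jiggle"];
```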

To finish up Core Animation, Joe covered how to use Media Timing Functions to create various effects during animations. For example, we used the kCAMediaTimingFunctionEaseIn timing function while sliding tiles in the Tile Game. Ease-in causes the tile animation to start slowly and speed up as it approaches the final location. Finally, I should mention that, as with most iPhone APIs, you can set up a delegate to respond when an animation ends using animationDidStop. For example, when one animation stops you could start another one and chain several animations together.

Camera

After a cheeseburger and fries lunch, we learned how to use the iPhone's camera. The bad news about the camera is that you can't do much with it: you can take pictures (obviously) and you can choose photos from the photo library. The good news is that using the camera in code is really simple. You use a UIImagePickerController, which is a subclass of UINavigationController, and present it modally from the top view controller using the presentModalViewController:animated method. Since it is modal, it takes over the application until the user cancels or finishes the operation. Once a user takes a picture or selects one from the photo library, UIImagePickerController returns the resulting image to its delegate.

There are two delegate methods you can implement. One is called when the user finishes picking an image and the other is called if the user cancelled the operation. The delegate method called when a user selects a picture returns the chosen image and some "editingInfo" which allows you to move and scale the image among other things. The lab exercise for the camera involved modifying the Tile Game to allow the user to take a photo which then becomes the game's tiled-image.
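Putting the pieces together, a sketch of presenting the camera and receiving the chosen image (written as if inside some view controller):

```objectivec
// Present the camera modally.
UIImagePickerController *picker = [[UIImagePickerController alloc] init];
picker.sourceType = UIImagePickerControllerSourceTypeCamera;
picker.delegate = self;
[self presentModalViewController:picker animated:YES];

// Delegate method called when the user takes or picks a photo:
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingImage:(UIImage *)image
                  editingInfo:(NSDictionary *)editingInfo
{
    // ...use the image, e.g. as the Tile Game's tiled-image...
    [self dismissModalViewControllerAnimated:YES];
    [picker release];
}
```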

Accelerometer

The accelerometer on the iPhone is cool. It measures the G-forces on the iPhone. You use a simple, delegate-based API to handle accelerations. While the API itself is pretty simple, the math involved and thinking about the orientation of the iPhone and figuring out how to get the information you want is the hard part. But you'd kind of expect that transforming acceleration information in 3D space into information your application can use isn't exactly the easiest thing. (Or, maybe it's just that I've been doing web applications too long and don't know how to do math anymore.)

There is one shared instance of the accelerometer, the UIAccelerometer object. As you might have guessed, you use a delegate to respond to acceleration events, specifically the accelerometer:didAccelerate method which provides you the UIAccelerometer and a UIAcceleration object. You need to specify an update interval for the accelerometer, which determines how often the accelerometer:didAccelerate delegate method gets called.
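The setup and callback can be sketched like this (again written as if inside a controller that acts as the delegate; the update interval is an arbitrary example):

```objectivec
// Configure the shared accelerometer.
UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
accelerometer.updateInterval = 1.0 / 30.0;  // 30 updates per second
accelerometer.delegate = self;

// Delegate callback with the measured G-forces along each axis:
- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration
{
    NSLog(@"x=%f y=%f z=%f", acceleration.x, acceleration.y, acceleration.z);
}
```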

The UIAcceleration object contains the measured G-force along the x, y, and z axes and a time stamp when the measurements were taken. One thing Joe mentioned is that you should not assume a base orientation of the iPhone whenever you receive acceleration events. For example, when your application starts you have no idea what the iPhone's orientation is. To do something like determine if the iPhone is being shaken along the z-axis (which is perpendicular to the screen) you can take an average of the acceleration values over a sample period and look at the standard deviation; if the value exceeds some threshold your application can decide the iPhone is being shaken and you can respond accordingly. For the lab exercise, we modified the Tile Game to use the accelerometer to slide the tiles around as the user tilts the iPhone and to randomize the tiles if the user shakes the iPhone. Pretty cool stuff!

Web Services

Well, I guess the fun had to end at some point. That point was when we covered Web Services, mainly because it reminded me that, no, I'm not going to be programming the iPhone next week for the project I'm on, and instead I'm going to be doing "enterprisey" and "business" stuff. Oh well, if we must, then we must.

Fortunately, Joe is defining web services as "XML over HTTP" and not as WS-Death-*, though of course if you really want to reduce the fun-factor go ahead Darth. To retrieve web resources you can use NSURL to create a URL, NSMutableURLRequest to create a new request, and finally NSURLConnection to make the connection, send the request, and get the response. You could also use NSURLConnection with a delegate to do asynchronous requests, which might be better to prevent your application's UI from locking up until the request completes or times out.
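A sketch of the NSURL / NSMutableURLRequest / NSURLConnection sequence, using the simple synchronous form (the URL is made up; as noted above, a real app would likely prefer the asynchronous delegate-based API so the UI doesn't lock up):

```objectivec
NSURL *url = [NSURL URLWithString:@"http://www.example.com/data.xml"];
NSMutableURLRequest *request = [NSMutableURLRequest requestWithURL:url];
NSURLResponse *response = nil;
NSError *error = nil;

// Blocks until the response arrives or the request fails.
NSData *data = [NSURLConnection sendSynchronousRequest:request
                                     returningResponse:&response
                                                 error:&error];
if (data == nil) {
    NSLog(@"Request failed: %@", error);
}
```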

If you have to deal with an XML response, you can use NSXMLParser, which is an event-based (SAX) parser. (By default there is no tree-based (DOM) parser on the iPhone, but apparently you can use libxml to parse XML documents and get back a doubly-linked list which you can use to traverse the nodes of the document.) You give NSXMLParser a delegate which receives callbacks during the parse, for example parserDidStartDocument and parser:didStartElement:namespaceURI:qualifiedName:attributes. Then it's time to kick it old school, handling the SAX events as they come in, maintaining the stack of elements you've seen, and writing logic to respond to each type of element you care about. For our Web Services lab we wrote a simple application that connected to random.org, which generates random numbers based on atmospheric noise, and made a request for 10 random numbers, received the response as XHTML, extracted the numbers, and finally displayed them to the user.
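A sketch of kicking off the parse and handling one of the element callbacks (data is assumed to be an NSData holding the response; the element name checked is just an example):

```objectivec
// Start the event-based parse with ourselves as the delegate.
NSXMLParser *parser = [[NSXMLParser alloc] initWithData:data];
[parser setDelegate:self];
[parser parse];

// Respond to SAX-style events as they arrive:
- (void)parser:(NSXMLParser *)parser
didStartElement:(NSString *)elementName
   namespaceURI:(NSString *)namespaceURI
  qualifiedName:(NSString *)qName
     attributes:(NSDictionary *)attributeDict
{
    if ([elementName isEqualToString:@"pre"]) {
        // push onto your element stack, start collecting character data, etc.
    }
}
```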

Address Book

Last up for today was using the iPhone Address Book. There are two ways to use the address book. The first way leverages the standard iPhone Address Book UI. The second uses a lower-level C API in the AddressBook framework.

When you use the Address Book UI, you use the standard iPhone address book interface in your application. It allows the user to select a person or individual items like an email address or phone number. You must have a UIViewController and define a delegate conforming to the ABPeoplePickerNavigationControllerDelegate protocol. You then present a modal view controller, passing it an ABPeoplePickerNavigationController which then takes over and displays the standard iPhone address book application. You receive callbacks via delegate methods such as peoplePickerNavigationController:shouldContinueAfterSelectingPerson to determine what action(s) to take once a person has been chosen. You, as the caller, are responsible for removing the people picker by calling the dismissModalViewControllerAnimated method. We created a simple app that uses the Address Book UI during the first part of the lab exercise.

Next we covered the AddressBook framework, which is bare-bones, pedal-to-the-metal C code: a low-level C API. However, you get "toll-free bridging" of certain objects, meaning you can treat them as Objective-C objects; for example, certain objects can be cast directly to an Objective-C NSString. Another thing to remember is that with the AddressBook framework there is no autorelease pool and you must call CFRelease on objects returned by functions that create or copy values. Why would you ever want to use the AddressBook framework over the Address Book UI? Mainly because it provides more functionality and allows you to directly access, and potentially manipulate, iPhone address book data via the API. For the second part of the Address Book lab, we used the AddressBook API to retrieve specific information, such as the mobile phone number, on selected contacts.
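A sketch of the C API, toll-free bridging, and the manual CFRelease calls all in one place:

```objectivec
#import <AddressBook/AddressBook.h>

// Log the first name of every person in the address book.
ABAddressBookRef addressBook = ABAddressBookCreate();
CFArrayRef people = ABAddressBookCopyArrayOfAllPeople(addressBook);
for (CFIndex i = 0; i < CFArrayGetCount(people); i++) {
    ABRecordRef person = CFArrayGetValueAtIndex(people, i);
    // Toll-free bridging: the copied CFStringRef can be cast to NSString.
    NSString *firstName =
        (NSString *)ABRecordCopyValue(person, kABPersonFirstNameProperty);
    NSLog(@"First name: %@", firstName);
    if (firstName) [firstName release];  // we own it (Copy rule), so release it
}
CFRelease(people);
CFRelease(addressBook);
```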

Random Thoughts

Using View Transitions and Core Animation can really spice up your apps and create a really cool user experience. (Of course you could probably also create really bad UIs as well if overused.) The camera is pretty cool, but the most fun today was using the accelerometer to respond to shaking and moving the iPhone. Web services. (Ok, that's enough on the enterprise front.) Last, using the various address book frameworks can certainly be useful in some types of applications where you need to select contacts.

Something in one of the labs we did today was pretty handy, namely that messages to nil objects are ignored in Objective-C. There are a lot of times this feature would be hugely useful, though I can also see how it might cause debugging certain problems to be really difficult! There's even a design pattern named the Null Object Pattern since in languages like Java you get nasty things like null-pointer exceptions if you try to invoke methods on a null object.
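A tiny illustration of the nil-messaging behavior:

```objectivec
NSString *name = nil;

// In Java this would throw a NullPointerException; in Objective-C the
// messages are simply ignored, returning 0 or nil as appropriate.
NSUInteger length = [name length];         // 0, no exception
NSString *upper = [name uppercaseString];  // nil, no exception
```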

Man, we really covered a lot today, and now I am sad that tomorrow is the last day of iPhone bootcamp. I should become a professional Big Nerd Ranch attendee; I think that would be a great job!

iPhone Bootcamp Day 3

Posted on December 03, 2008 by Scott Leberknight

Today is Day 3 of the iPhone bootcamp at Big Nerd Ranch, taught by Joe Conway.

See here for a list of each day's blog entries.

Media

Today we started off learning how to play audio and video files by creating a simple application that allows you to play a system sound, an audio file, and a movie. If all you want to do is play .caf, .wav, or .aiff audio files that are less than 30 seconds in length, you're in luck, because you can simply use AudioServicesCreateSystemSoundID to register a sound with the system and then use AudioServicesPlaySystemSound to play the sound. On the other hand, if you want to play almost any type of audio file, you can use AVAudioPlayer, which really isn't all that much more complicated. You create an AVAudioPlayer and then implement AVAudioPlayerDelegate methods like audioPlayerDidFinishPlaying to respond to audio player events. You simply call play and stop to control playback and you can use isPlaying to check playback status. Recording audio is apparently more difficult, and we didn't really cover it in lecture or lab, though the exercise book has a whole appendix devoted to creating your own voice recorder application.
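The system sound case really is just two calls (the sound file name here is made up; it would be a short .caf in the app bundle):

```objectivec
#import <AudioToolbox/AudioToolbox.h>

// Register a short (< 30 second) sound with the system, then play it.
NSURL *soundURL = [NSURL fileURLWithPath:
    [[NSBundle mainBundle] pathForResource:@"ding" ofType:@"caf"]];
SystemSoundID soundID;
AudioServicesCreateSystemSoundID((CFURLRef)soundURL, &soundID);
AudioServicesPlaySystemSound(soundID);
```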

For movie playback you can use MPMoviePlayerController. It is also pretty easy to use. But, one caveat is that it completely takes over your iPhone application when you call play, and the user has no control until the movie ends or until the user exits your application.

Low Memory Warnings

We next took a (very) short detour talking about low memory warnings. When your application is taking too much memory, the iPhone sends your application the applicationDidReceiveMemoryWarning message. In that method you are supposed to release as much memory as you possibly can. However, according to Joe, the iPhone does not really provide much information about how much memory you need to release, or how much memory will cause a low memory warning in the first place! Joe says to just release as much memory as you possibly can immediately or else the iPhone can simply terminate your application. All your UIViewControllers are sent didReceiveMemoryWarning. The default implementation checks if you have implemented loadView, and if so releases its view instance. The next time the view needs to be shown on screen, it gets reconstructed. One last thing is that the iPhone simulator allows you to simulate a low memory warning via the "Simulate Memory Warning" option on the Hardware menu.

OpenGL

OpenGL ES is the implementation of OpenGL on the iPhone. Basically it is a low-level C API that allows you to draw 2D and 3D graphics on the iPhone in various colors and textures. Triangles, lines, and points comprise the basic geometrical shapes you can use to compose graphics. When coding OpenGL ES you basically need to define all the vertices in two- or three-dimensional space, then define the color of each vertex, and then, via the EAGLContext, render the graphic. The color of a vertex is defined in RGBA8 format, which allows you to specify red, green, and blue color channels and an alpha transparency channel, using 8 bits per channel. The EAGLContext is the bridge between Cocoa Touch and OpenGL ES, and is what allows you to use OpenGL on the iPhone.

When coding OpenGL ES, you use various buffers. The frame buffer describes one frame of drawing. The render buffer contains the pixels created by drawing commands sent to the frame buffer. When you draw to the screen, you really draw to a CAEAGLLayer object, and you use a timer to request drawing updates; e.g. scheduling a timer to update the drawing 60 times per second yields a 60-frames-per-second rendering. Another important thing is that you must call setCurrentContext on the EAGLContext before performing any drawing commands. Last, according to Joe, OpenGL is not at all forgiving if you screw something up, for example if you have an empty buffer or mismatched vertex data in the buffer. When that occurs, your application simply crashes.

The lab exercise for OpenGL was to create a "Fireworks" application that randomly generates "fireworks" using OpenGL and that simulates those fireworks exploding and then burning out. Pretty cool stuff, but man is it a lot of code just to create relatively simple things, because you must fully define all the geometry and colors and then use OpenGL functions to enable various drawing states and draw the geometry. You also of course need to implement logic to change the drawing over time, for example once a firework explodes you need to define the logic to animate the particles using lots of math and even some good old physics equations to compute position, velocity, and acceleration. I bet if they taught physics by having students implement particle engines on the iPhone more people would be into science!

Textures

After various types of pizza, including a really good barbeque pizza, for lunch, we learned about using textures in OpenGL ES. You use per-pixel coloring to spread an image file across geometric primitives (triangles, points, and lines). Textures add depth to a scene, can be used to create shadow effects, and are also how you draw text using OpenGL. We extended our Fireworks application by adding texture to each exploding particle. Essentially you pin an image file to your geometry. Since adding textures is still OpenGL coding, it is low level and requires a fair amount of code to define the mapping coordinates for the texture in order to pin it to the scene geometry. But the end result is pretty cool!

Multi-touch Events

Before getting into touch events, we took an afternoon hike. It was really nice and I wished we had done the zip lines at Banning Mills today, since it is supposed to rain tomorrow. When we got back from the hike, we plowed into how to handle touch events on the iPhone. First up, Joe told us all about UIResponder, which is the base class for UIView, UIViewController, UIWindow, and UIApplication. Because of this inheritance relationship, subclasses automatically gain the ability to handle the different touch phases defined by UIResponder: touchesBegan, touchesMoved, and touchesEnded. These phases allow you to handle all kinds of touch events, including up to five simultaneous touches.

The UITouch object is what you work with when handling touches. By default, multi-touch is not enabled, so you have to enable multi-touch either in code using setMultipleTouchEnabled:YES or by setting the property in Interface Builder. Each of the touch callback methods, for example touchesBegan, sends you the touches as a set of UITouch objects and a UIEvent object. You can use the event object to determine the number of touches and respond appropriately. In the lab exercise we extended the Fireworks application to respond to single touches, touch-and-drag, and multiple touches. When handling multiple touches you need to use math and geometry to figure out things like how far apart the touches are; in the Fireworks application we made it so the farther apart the touches, the faster we disperse the firework particles.

Core Graphics

Last for today was Core Graphics, which allows drawing in Cocoa Touch. Core Graphics is much simpler to use than OpenGL but is not as powerful. It is not designed to draw super fast animations and games like OpenGL is, and according to Joe should mostly be used for UIs where you need to do drawing.

You use the drawRect method to define your drawing commands. You draw to a CGContext graphics context. The main iPhone run loop, not you, is responsible for creating the graphics context and calling drawRect; you simply need to define what to draw, not when to do it. The basic process you follow to draw is as follows. First, get a reference to a CGContext graphics context. Second, create a CGMutablePathRef path object and then draw points, lines, curves, and shapes to the path. Next, you set the color of the graphics context, and finally you stroke or fill the path object.
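The four-step process above can be sketched in a drawRect implementation like this (the coordinates and color are arbitrary):

```objectivec
- (void)drawRect:(CGRect)rect
{
    // 1. Get a reference to the graphics context.
    CGContextRef context = UIGraphicsGetCurrentContext();

    // 2. Create a path and draw into it.
    CGMutablePathRef path = CGPathCreateMutable();
    CGPathMoveToPoint(path, NULL, 10.0, 10.0);
    CGPathAddLineToPoint(path, NULL, 100.0, 100.0);
    CGContextAddPath(context, path);

    // 3. Set the color on the context; 4. stroke the path.
    CGContextSetRGBStrokeColor(context, 1.0, 0.0, 0.0, 1.0);  // opaque red
    CGContextStrokePath(context);

    CGPathRelease(path);
}
```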

During drawing you can save and restore the state of a CGContext. For example, you might perform some drawing commands, then save the context state, change a few drawing attributes and perform several more drawing operations, restore the context state, and continue drawing using the original state. According to Joe, you should not save/restore state a lot or it can really slow down your application.

Core Graphics uses the "Painter's Method" when drawing objects on screen, meaning objects are drawn from back to front. Objects drawn later are drawn on top of objects that were drawn earlier, effectively replacing the existing pixels. One other thing to mention is that with Core Graphics you can apply 2D transforms to a CGContext to perform rotation, translation, scaling, and skewing of the shapes on screen.

Random Thoughts

My brain is starting to hurt after three days of hardcore learning and coding. Several of us started watching the Bourne Ultimatum after dinner on the nice big flat-screen TV in the Banning Mills lodge to relax a bit.

OpenGL, while powerful, is not what I'd call the most fun API I've ever worked with, and it would probably take a while to learn the ins and outs and become an expert in it. Adding textures using OpenGL can really spice up your application and make it look better. Our Fireworks application went from exploding popcorn before adding texture to looking like real, exploding firework particles after the texture was added to the particles.

Being able to handle multi-touch events is just plain cool.

iPhone Bootcamp Day 2

Posted on December 02, 2008 by Scott Leberknight

Today is Day 2 of the iPhone bootcamp at Big Nerd Ranch.

See here for a list of each day's blog entries.

Localization

After a nice french toast breakfast — which my dumbass CEO Chris couldn't eat because he is allergic to eggs and complained a lot and made them give him a separate breakfast — we headed down to the classroom and started off learning about localizing iPhone apps. (UPDATE: The "dumbass CEO" comment was made totally as a joke, since he was sitting right next to me in class as I wrote it. Plus, I've known him for over 12 years so it's all good. So lest you think I don't like him or something, it's just a joke and is not serious!)

As with Cocoa, you basically have string tables, localized resources (e.g. images), and a separate XIB file for each different locale you are supporting. (Interface Builder stores the UI layout in an XML format in a XIB file, e.g. MainWindow.xib.) This means that, unlike Java Swing for example, you literally define a separate UI for each locale.

We localized our DistanceTracker application that we built on day one for English and German locales. To start you use the genstrings command line utility to generate a localizable string resource and then in Xcode make it localizable; this creates separate string tables for each language which a translator can then edit and do the actual translation. You also need to make the XIB files localized and then redo the UI layout for each locale. Sometimes this might not be too bad, but if the locale uses, for example, a right-to-left language then you'd need to reverse the position of all the UI controls. While having to create essentially a separate UI for each locale seems a bit onerous, it makes a certain amount of sense in that for certain locales the UI might be laid out completely differently, e.g. think about the right-to-left language example.
Finally, you use NSLocalizedString in code, which takes a string key and a comment intended for the translator; the comment is put into the string tables for each locale. At runtime, the value corresponding to the specified key is looked up based on the user's locale and displayed. If a value isn't found, the key is displayed as-is, which can be useful if all locales happen to use the same string for a given key.
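The lookup-with-fallback behavior is easy to picture with a small sketch. This is a hedged plain-Java illustration of the mechanism only (not Apple's API); the table contents and key names are made up:

```java
import java.util.HashMap;
import java.util.Map;

// Per-locale string tables keyed by a lookup key, falling back to the key
// itself when a locale has no translation -- mirroring the NSLocalizedString
// behavior described above. Tables and keys here are invented examples.
class StringTable {
    private static final Map<String, Map<String, String>> TABLES = new HashMap<>();

    static {
        Map<String, String> en = new HashMap<>();
        en.put("distance.title", "Distance Tracker");
        Map<String, String> de = new HashMap<>();
        de.put("distance.title", "Entfernungsmesser");
        TABLES.put("en", en);
        TABLES.put("de", de);
    }

    // Look up a key in the table for the given locale; if the locale or the
    // key is missing, return the key as-is.
    static String localizedString(String locale, String key) {
        Map<String, String> table = TABLES.getOrDefault(locale, Map.of());
        return table.getOrDefault(key, key);
    }
}
```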

View Controllers

After localization we tackled view controllers. A view controller controls a single view, and view controllers are used with UITabBarController and UINavigationController. We created a "navigation-based application," which sets up an application containing a UINavigationController. The difference between UITabBarController and UINavigationController is that the tab bar controller stores its views in a list and allows the user to toggle back and forth between views. For example, the "Phone" application on the iPhone has a tab bar at the bottom (which is where it is displayed in apps) containing Favorites, Recents, Contacts, Keypad, and Voicemail tab bar items. Tab bar items are always visible no matter what view is currently visible, and as you touch the tab bar items, the view switches. So, a UITabBarController is a controller for switching between views in a "horizontal" manner somewhat similar to the cover flow view in Finder.

On the other hand, you use UINavigationController for stack-based "vertical" navigation through a series of views. For example, the "Contacts" iPhone application lists your contacts; when you touch a contact, a detail view is pushed onto the UINavigationController stack and becomes the visible and "top" view. You can of course create applications that combine these two types of view controllers.

In class we created a "To Do List" application having a tab bar with "To Do List" and "Date & Time" tabs. If you touch "To Do List" you are taken to a list of To Do items. Touching one of the To Do items pushes the To Do detail view onto the stack, which allows you to edit the item. Touching "Date & Time" on the tab bar displays the current date and time in a new view.
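The stack discipline behind UINavigationController can be sketched in a few lines. This is a hedged plain-Java illustration of the push/pop behavior, not actual UIKit code; all the names are mine:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A navigation "stack" of view controller names: pushing makes a view
// controller the visible top view, popping returns to the previous one,
// and the root is never popped.
class NavigationStack {
    private final Deque<String> stack = new ArrayDeque<>();

    NavigationStack(String rootViewController) {
        stack.push(rootViewController); // the root stays at the bottom
    }

    void push(String viewController) {
        stack.push(viewController);
    }

    // Pop the top view controller (but never the root) and return the
    // newly visible one.
    String pop() {
        if (stack.size() > 1) {
            stack.pop();
        }
        return top();
    }

    String top() {
        return stack.peek();
    }
}
```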

It took me a while to get my head wrapped around combining the different types of view controllers in the same application and how to connect everything together. Since I am used to web app development, this style of development requires a different way of thinking. I actually like it better since it has a better separation of concerns and is more true to the MVC pattern than web applications are, but I'm sure I'll have to try to build more apps using the various view controllers to get more comfortable.

Table Views

We had a good lunch and after that headed back down to cover table views. Table views are views that contain cells. Each cell contains some data that is loaded from some data source. For example, the "Contacts" application uses a UITableView to display all your contacts. The "To Do List" application we created earlier also uses a UITableView to list the To Do items. Basically, UITableView presents data in a list style.

Table views must have some data source, such as an array, a SQLite database, or a web service. You are responsible for implementing the data source methods: when the table view asks for the contents of a specific cell, you offer up a cell containing the data appropriate for that row index. Since the iPhone has limited real estate, UITableView re-uses cells that have been moved off screen, for example if they are scrolled out of view. This way, rather than creating new cells every single time UITableView asks for a cell, you instead ask if there are any cells that can be re-used. If so, you populate the cell with new data and return it.
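The dequeue-or-create pattern is worth sketching. The following is a hedged plain-Java illustration of the idea, not the real UITableView API; the class and method names are invented:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Cells scrolled off screen go into a reuse queue; when a new row appears
// we reuse one if available instead of allocating a fresh cell, and simply
// repopulate it with that row's data.
class CellReusePool {
    static class Cell {
        String text;
    }

    private final Deque<Cell> reuseQueue = new ArrayDeque<>();
    int cellsCreated = 0; // counts actual allocations, for illustration

    // Ask for a reusable cell first; only allocate when none is available.
    Cell dequeueOrCreateCell(String rowData) {
        Cell cell = reuseQueue.poll();
        if (cell == null) {
            cell = new Cell();
            cellsCreated++;
        }
        cell.text = rowData; // repopulate with the new row's data
        return cell;
    }

    // Called when a cell scrolls off screen.
    void recycle(Cell cell) {
        reuseQueue.push(cell);
    }
}
```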

UITableView also provides some basic functionality, like the ability to drag cells, delete them, etc. You only need to implement the logic needed when events, like move or delete, occur. Another thing you typically do with table views is respond when a user touches a cell. For example, in the "To Do List" application a new "detail" view is displayed when you touch a cell in the table view. This is probably the most common usage; table views display aggregate or summary data, and you can implement "drill down" logic when a user touches a cell. For example, when a user touches a To Do item cell, a new view controller is pushed onto the stack of a UINavigationController showing the "detail" view and allowing you to edit the To Do item. Another cool thing you can do with table views is to subclass UITableView in order to lay out subviews in each cell. In the "To Do List" app, we added subviews to each cell to show the To Do item title, part of the longer item description in smaller font, and an image to the left of the title if the item was designated as a "permanent" item. The "Photo Albums" iPhone application also uses subviews; it shows an image representing the album and the album title in each cell.

Saving and Loading Data Using SQLite

By this time, it was already late afternoon, and we hiked out to the old paper mill and back. It started getting colder on the way back as the sun started to set. When we got back, we learned all about saving and loading data using SQLite.

First, we learned about the "Application Sandbox," which can only be read and written by your application, for security reasons. There are several locations where each iPhone application can store data. Alongside your application's bundle (e.g. <AppName.app>), the sandbox contains Documents, Library/Preferences, Library/Caches, and tmp folders. Documents contains persistent data, such as a SQLite database, that gets backed up when you sync your iPhone. Library/Preferences contains application preference data that also gets backed up when you sync. Library/Caches, which Joe and Brian (the other instructor who is helping Joe this week) just found out about before our hike, is new in version 2.2 of the iPhone software, and stores data cached between launches of your application. The tmp directory is, as you'd expect, used for temporary files. You as the developer are responsible for cleaning up your mess in the tmp folder, however!

After discussing the sandbox restrictions, we learned how to locate the Documents directory using the NSSearchPathForDirectoriesInDomains function. Finally, we learned how to add persistence to iPhone applications using SQLite, which I continually misspell with two "L"s even though there is really only one! SQLite supports basic SQL commands like select, insert, update, and delete. The reason you need to know how to find the Documents directory is that you'll need to copy a default SQLite database you ship with your application into Documents. Basically, you supply an empty database with your application's Resources — this is read-only to your app. In order to actually write new data, the database must reside in a location (such as Documents) where you have write access. So, the first thing to do is copy the default database from Resources to the Documents directory, where you can then read and write, and where the database will automatically be backed up when the user syncs her iPhone.
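That copy-on-first-launch step can be sketched in a few lines. Here's a hedged plain-Java illustration of the logic, not iPhone SDK code; the file names and helper methods are mine (the unchecked helpers just keep checked IOExceptions out of the way):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// If no writable database exists in "Documents" yet, copy the read-only
// default one shipped with the app; otherwise leave the user's data alone.
class DefaultDatabaseInstaller {

    // Returns true if the default database was copied, false if a writable
    // copy already existed.
    static boolean installIfMissing(Path bundledDefault, Path writableCopy) {
        try {
            if (Files.exists(writableCopy)) {
                return false; // the user already has data; don't clobber it
            }
            Files.copy(bundledDefault, writableCopy);
            return true;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Small unchecked wrappers so callers don't juggle checked IOExceptions.
    static Path tempDir() {
        try {
            return Files.createTempDirectory("sandbox");
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static void write(Path file, String contents) {
        try {
            Files.write(file, contents.getBytes());
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static String read(Path file) {
        try {
            return new String(Files.readAllBytes(file));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```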

We then added SQLite persistence to the "To Do List" application. While I think SQLite is cool in theory, actually using it to query, insert, update, and delete data is painful, as you have to handle all the details of connecting, writing and issuing basic CRUD queries, stepping through the results, closing the statements, cleaning up resources, etc. It feels like writing raw JDBC code in Java but possibly worse, if that's possible. Someone told me tonight there is supposedly some object-relational mapping library which makes working with SQLite more palatable, though I don't remember what it was called or if it is even an object-relational mapper in the same sense as say, Hibernate. Regardless, I persisted (ha, ha) and got my "To Do List" application persisting data via SQLite.

WebKit

Joe apparently gave a short lecture on using WebKit in iPhone applications at this point. Unfortunately I decided to go for a run and missed the lecture. In any case, WebKit is the open source rendering engine used in Safari (both desktop and mobile versions) to display web content. You use UIWebView in your application and get all kinds of functionality out-of-the-box for hardly any work at all. UIWebView takes care of all the rendering stuff including HTML, CSS, and JavaScript. You can also hook into events, such as when the web view started and finished loading, via the UIWebViewDelegate protocol. In our lab exercises after dinner, we implemented a simple web browser using UIWebView and used an activity progress indicator to indicate when a page is loading.

In addition to loading HTML content in UIWebView, you can also load things like images and audio content. It is ridiculously easy to add a UIWebView to your application in order to display web content.

Random Thoughts

Another long day has come and gone. It is amazing how much energy everyone has to basically learn all day and keep going well into the night hours.

Today we learned about localization on the iPhone. The most important thing is that you need to create a separate UI for each locale you are supporting. This means you should not work on localizing until right before you are ready to ship; otherwise you'll spend all your time continually tweaking all the localized UI after every little change you make.

Tab and navigation view controllers are powerful ways to implement application navigation using a tab paradigm or a stack-based, guided navigation scheme, respectively. Combined with table views, you can accomplish a lot with just these three things.

While I think having a relational database available for persistence in your iPhone apps is nice, I really do not want to write the low-level code required to interact with SQLite; once you get used to using an ORM tool like Hibernate or ActiveRecord you really don't want to go back to hand-writing basic CRUD statements, marshaling result sets into objects and vice-versa, and managing database resource manually. Guess I'll need to check into that SQLite library someone mentioned.

It is surprisingly easy to integrate web content directly into an iPhone application using UIWebView!

Tomorrow looks to be really cool, covering things like media and OpenGL. Until then, ciao!

iPhone Bootcamp Day 1

Posted on December 01, 2008 by Scott Leberknight

Today is the first day of the iPhone bootcamp at Big Nerd Ranch at Historic Banning Mills B&B in Whitesburg, GA. It is being taught by Joe Conway. My goal is to write a blog entry for each day of the class, so we'll see how that goes.

See here for a list of each day's blog entries.

Simple iPhone Application

We started off the course by creating a simple iPhone application using a bunch of UI controls that come out of the box. We dragged and dropped controls onto an iPhone Window application, hooked up some events in Interface Builder, and wrote a bit of event handling code. We ran the application on the simulator (i.e. not on our phones) and played around a bit with it. Cool to get something up and running in the first hour of a five day course!

App Icon and Default Image

After getting the initial iPhone app up and running we added an icon for the application, which is what displays on the iPhone "desktop," and added a default image, the purpose of which is to "fool" the user into thinking the application launched immediately without delay. In other words, when an iPhone app launches, the first thing that happens is the Default.png image is displayed. Joe mentioned some people use this for so-called "splash" screens, perhaps to display a company logo or some advertising. While this is nice in theory, Joe mentioned this is about the worst thing you can do - when Apple makes the iPhone faster over time, instead of a two or three second splash screen, you might see something flicker into and out of existence in a split second and users won't know what's going on.

In any case, the best thing to do is take a screenshot of your application. By the time the user gets around to actually touching something, the Default.png image will have been replaced by the actual application, and your app has the appearance of an immediate startup, which users always like.

Objective-C

Then it was on to a short introduction to the Objective-C language. Having attended the Cocoa bootcamp last April and read through Programming in Objective-C, I was already comfortable with Objective-C. You basically learn that Objective-C is a dynamically-typed language built on top of C, adding objects and messaging, i.e. you send messages to objects. We blew through the basics of creating objects, initializing them, and creating accessors. Thankfully Objective-C 2.0 added properties, which get rid of the getter/setter method tedium. We briefly covered some of the basic classes like NSString, NSArray, and NSMutableArray.

After all that, it was on to memory management. Although Apple introduced a garbage collector in Mac OS X Leopard, it is not available to iPhone applications, so you must manage retain counts of objects manually using retain, release, and the autorelease pool. This is tedious at best, but Joe provided several concrete and relatively simple rules to follow when writing iPhone apps, which I'm not really going to delve into right now. Suffice it to say that you have to pay a lot more attention to memory issues writing iPhone apps than, say, writing web apps in a garbage-collected language like Ruby, Java, or C#. We wrote several sample applications that demonstrated Objective-C basics, counting object references, and using the autorelease pool.
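The retain-count rules can be illustrated with a tiny sketch. This is plain Java standing in for Objective-C, purely to show the counting behavior; the class is invented and dealloc is simulated with a flag:

```java
// Manual reference counting as described above: every object starts with a
// retain count of one, retain increments it, release decrements it, and
// when the count hits zero the object is "deallocated."
class RefCounted {
    private int retainCount = 1; // alloc/init leaves the count at 1
    private boolean deallocated = false;

    RefCounted retain() {
        retainCount++;
        return this;
    }

    void release() {
        retainCount--;
        if (retainCount == 0) {
            deallocated = true; // stands in for dealloc being called
        }
    }

    int retainCount() {
        return retainCount;
    }

    boolean isDeallocated() {
        return deallocated;
    }
}
```

The basic discipline Joe's rules boil down to is balancing: every retain (or alloc) you perform must eventually be matched by exactly one release.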

Using Text Controls

Next up was the chapter on "text" on the iPhone, which focused on using the UITextField and UITextView controls. For this section we created an application that allowed you to search a large block of text in a UITextView for specific text typed into a text field. During this section we learned how to deal with the (virtual) keyboard and how a text UI control automatically becomes the "first responder" when a user touches it. Joe showed how the Responder Chain passes events along until some object handles them, or, if no object is interested, the event simply and silently drops off into nothingness. (For Gang of Four aficionados, this would be the Chain of Responsibility pattern. Does anyone actually care anymore?) This allows you to delegate events up a chain of objects. When a text-capable object becomes the first responder, the virtual keyboard automatically appears for the user to type something; to remove it, you can write code to resign the first responder status. Last in the text section was using notifications and the notification center to observe when the keyboard is about to show (be displayed on screen) and write a log message.
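The responder chain idea can be sketched as a classic Chain of Responsibility. This is a hedged plain-Java illustration, not the real UIKit responder machinery; the event names and classes are made up:

```java
// Each responder either handles an event or passes it to the next responder
// up the chain; if nobody is interested, the event silently drops off the
// end of the chain.
class Responder {
    private final String handlesEvent; // event name this responder cares about
    private final Responder next;      // next responder up the chain, or null

    Responder(String handlesEvent, Responder next) {
        this.handlesEvent = handlesEvent;
        this.next = next;
    }

    // Returns the responder that handled the event, or null if the event
    // fell off the end of the chain unhandled.
    Responder dispatch(String event) {
        if (event.equals(handlesEvent)) {
            return this;
        }
        return next == null ? null : next.dispatch(event);
    }
}
```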

Delegates

Even though it was cold outside (about 40 degrees Fahrenheit) we took a 30 minute hike around 3 o'clock and got refreshed. Then we continued to learn about using delegates and protocols. Basically a delegate handles certain functionality passed off to it by another object. The delegate essentially extends the functionality of an object without needing to resort to all kinds of subclassing everywhere; in other words it is a way to perform callbacks and extend functionality. For example, an XML parser knows how to parse the XML and it might send messages to a delegate object. The delegate object in this case knows what to do when it sees specific elements or attributes, while the parser remains completely generic. This enables re-use of the parser without it needing to know anything about how your application actually responds to various elements and attributes. We extended our text search application using delegates to perform the actual searching logic as well as resigning and becoming the first responder.
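The parser example can be made concrete with a sketch. Below is a hedged plain-Java illustration of the delegate idea; the interface, its single callback, and the toy "parser" are all invented (a real XML parser delegate would have many more callbacks):

```java
import java.util.ArrayList;
import java.util.List;

// The "parser" stays completely generic: it just walks elements and calls
// back into whatever delegate it was given. All application-specific logic
// lives in the delegate, so the parser is reusable as-is.
interface ParserDelegate {
    void didStartElement(String name);
}

class TinyParser {
    private final ParserDelegate delegate;

    TinyParser(ParserDelegate delegate) {
        this.delegate = delegate;
    }

    // Fake "parse": treat each whitespace-separated token as an element
    // name and notify the delegate of each one.
    void parse(String document) {
        for (String element : document.split("\\s+")) {
            delegate.didStartElement(element);
        }
    }
}
```

Swapping in a different delegate changes what the application does with the elements without touching the parser at all, which is the extension-without-subclassing point made above.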

Core Location

The last, and coolest, topic today was Core Location, which is the framework that allows your iPhone to figure out where you are in the world. We wrote an application that uses your current location and tracks the distance you've traveled after you enable tracking. Since I have not actually enabled my iPhone device yet, I had to run this only on the simulator, which was not quite as cool since it only reports one location (which IIRC was the lat/long of Apple's headquarters, but I might have caught that wrong). Basically you use a CLLocationManager, which sends location updates to a delegate (good thing I paid attention to the section on delegates earlier); the delegate does pretty much whatever it wants to. Again, the delegate implements application-specific logic and the CLLocationManager just sends you the updated location information, resulting in a clean separation of concerns.

You can configure the manager for the level of accuracy you'd like, for example ten meters, a hundred meters, or "best" possible accuracy. The higher the accuracy, the faster your battery will drain, so setting this to best accuracy and leaving it on continuously might not be the best thing to do. Joe also mentioned that if you turn off the iPhone using the top button, the active application can still be doing things, so you want to make sure you check for the "application will passivate" event and stop updating the location to prevent excessive battery drainage! (Maybe that's what happened to me a few weeks ago when my full battery completely drained overnight.) You can also configure a distance filter if, for example, you only want to receive updates after the phone has moved a certain distance. Cool stuff!
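The distance-filter behavior is simple to sketch. This is a hedged plain-Java illustration of the concept only, not the Core Location API; the names are mine and coordinates are treated as flat x/y meters rather than latitude/longitude to keep the math trivial:

```java
// A "location manager" that only notifies its delegate when the phone has
// moved at least filterMeters from the last reported location. The first
// fix is always reported.
class LocationManager {
    interface Delegate {
        void didUpdateLocation(double x, double y);
    }

    private final Delegate delegate;
    private final double filterMeters;
    private double lastX, lastY;
    private boolean hasFix = false;

    LocationManager(Delegate delegate, double filterMeters) {
        this.delegate = delegate;
        this.filterMeters = filterMeters;
    }

    // Feed in a raw reading; only significant moves reach the delegate.
    void readingArrived(double x, double y) {
        if (hasFix && Math.hypot(x - lastX, y - lastY) < filterMeters) {
            return; // moved less than the filter distance; stay quiet
        }
        lastX = x;
        lastY = y;
        hasFix = true;
        delegate.didUpdateLocation(x, y);
    }
}
```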

Provisioning Profiles

After dinner we returned to the classroom to set up our provisioning profiles, which is going to allow those of us who have not yet registered with Apple to actually run apps on our iPhones (known as "devices"). I am planning to buy my developer certificate but for now Big Nerd Ranch has provisioning profiles for students since apparently it is now taking a day or two to get your developer certs and other required stuff.

Random Thoughts

After a long day, I have a few random thoughts in no particular order. Interface Builder is really, really nice for building UIs. Of course I already knew this having taken the Cocoa bootcamp last April. Regardless, building UIs using a sophisticated and refined tool like Interface Builder is sooooooo much nicer than the current kludge of web technologies that splatter HTML, CSS, and JavaScript all over the place and various data exchange formats like XML, JSON, or whatever combined with eight different JavaScript frameworks and a potentially very different server-side programming model and finally trying to jam it all together. Can you tell I've been doing web development for a while?

Delegates and Core Location were probably the coolest things from today, and it is really nice that after only one day I can build iPhone apps, even if they are pretty simple. It is really cool how easy it is to integrate location into an iPhone application. Objective-C as a language is actually not all that bad, and is really easy to read since the methods (at least the ones from the Apple SDK) tend to be very well-named. Of course I like the fact that Objective-C is dynamically typed and I don't have to be told by the compiler what I can and cannot do at every step of the way, e.g. I can send any message to any object and so long as it responds, no problem. Of course the code does still have to compile in the Xcode IDE so it isn't a total dynamic-language free-for-all.

The thing I don't like is having to manually manage memory. After doing malloc and free in C on a VAX for the first several years out of college, I was quite happy to not have had to do that for over 10 years. Oh well, I suppose the autorelease pool and simply sending retain and release messages is better than malloc and free.

Big Nerd Ranch is all about coding. While there is lecture, you code hands-on most of the time, and this is why you learn so much. The hikes really relieve the tendency to want to go to sleep after lunch! It's 10:25 and I've finally gotten my iPhone provisioned and the apps we developed today all working directly on the phone. I also finally got the SDK updated to the latest version as well as updated my phone's software to version 2.2. All in all a great first day!

iPhone Bootcamp Blogs

Posted on November 29, 2008 by Scott Leberknight

Check out my blog entries this week while I'm attending the iPhone bootcamp at Big Nerd Ranch.

Get Comfortable Being Uncomfortable

Posted on November 25, 2008 by Scott Leberknight

Renae Bair's post on The Ranting Rubyists hits a lot of nails on the head. I will freely admit to being a developer who is interested in continually learning new technologies - perhaps even at the expense of the ones I currently develop in - and I try to contribute a little back by blogging and speaking at conferences like No Fluff Just Stuff on a semi-frequent basis. But Renae's point is that many people in the development world seem to be all about the New, New Thing and ready to dismiss the old things without a second thought. My feeling is that the old things don't go away, often we just end up piling more things on top. (It's new technologies all the way down.) Sometimes there certainly is wholesale replacement, but from what I've experienced usually you just mix in the new things and things become that much more heterogeneous.

I think it's fine to continually push "forward" to newer and better technologies that help you do the same thing in half the time, or in half the code, or allow things to execute on twice the processors, or scale twice as much. But at the same time it is simply not cool or very intelligent to dismiss the very tools that get you paid and perhaps got you where you are today. Sometimes the intent is just that; to dismiss the old in favor of the new for the purpose of making money. Sometimes the intent is merely the intellectual curiosity the best developers usually possess, and in fact the best people in any field possess. A few years ago I told a friend "Get Comfortable Being Uncomfortable." What I meant was to learn new things and push yourself to think about doing things better and more efficiently than you currently are doing them. Sometimes this means switching or advocating a new tool; sometimes it means using your existing tools more effectively. And always it means you can't rest on your laurels and you are always challenging the status quo. Many people don't like this. Well, too bad, because reality is that things change and Resistance is futile.

My day job is still mainly Java and web applications, though I also have managed to squeeze Ruby, Groovy, and Python in there (and of course realized the power of JavaScript) over time. I speak on mostly Java-related stuff like Hibernate and Spring and Groovy a bit. And currently I'm learning about new things (to me anyway) like functional languages such as Lisp and Clojure and Scala. Not because I think I'm going to rewrite the application I'm currently working on in a different language and/or framework, but because over time I feel learning new and different things makes me a better developer, architect, designer, etc. I know that the Java code I write today, while still crap, is way better than the crap I wrote several years ago, and has been influenced by learning Python and Ruby and Groovy and others. While it is still Java, I don't try to write overly generic, overly engineered things like I used to (well, perhaps not as much as I used to anyway). I just try to get the tasks I need to get done, done. If I need to make something more generic later, I can do it. But in addition to the power of just learning new things, I think the more well-rounded you are the better off you are and the better equipped you are to solve new problems. And maybe you'll find a much better way to solve them because you have a more diverse knowledge "portfolio" at your disposal.

So, getting back to Renae's post, I think it's a great idea to continue learning new things and pushing better ways of doing things, if for no other reason than to ensure your own relevance and marketability as a developer but hopefully because you enjoy it! But while it's OK to voice your opinion and seek new and better things, don't just rip to shreds the things that got you to where you are. In the past I've made comments to people like "Java sucks" and "I'd rather be doing Blub programming" and I've tried to curb that and realize that things change, we know more today than yesterday, and to just "Get Comfortable Being Uncomfortable." You might not always get to program in Blub but that shouldn't stop you from expanding what you know, and by the way the sphere of your knowledge should include more than just technical knowledge and probably should include things like economics, finance, culture, art, literature, sports, etc. Whatever. Just make yourself more well-rounded and you'll be better for it, in all aspects of life.

Polyglot Persistence

Posted on October 15, 2008 by Scott Leberknight

In late 2006 Neal Ford wrote about Polyglot Programming and predicted the wave of language choice we are now seeing in the industry to use the right language for the specific job at hand. Instead of assuming a "default" language like Java or C# and then warring over the many different available frameworks, polyglot programming is all about using the right language for the job rather than just the right framework(s). For a while now I've thought about the fact that, paralleling Neal's description of polyglot programming, a relational database seems to be the accepted and default choice for persistence. Sometimes this is due to the fact that organizations have standardized on RDBMS systems and there isn't even any other choice. Other times it is simply what we're used to doing, and possibly we don't even consider alternatives. But now, with things like Amazon SimpleDB, Google Bigtable, Microsoft SQL Server Data Services (SSDS), CouchDB, and lots more, it seems like we're now seeing the beginning of Polyglot Persistence in addition to polyglot programming.

Polyglot Persistence, like polyglot programming, is all about choosing the right persistence option for the task at hand. For example, some co-workers of mine on one project are effectively using Lucene as their primary datastore, since the application they've built is mainly to do complex full-text searches very fast against huge datasets. Most people probably don't think of Lucene as a data store and just consider it as their full-text search engine. But for this particular application, which aggregates multiple disparate datasets, glues them together, and performs full-text search against the consolidated view of the data, it makes a good deal of sense. It also helped that in a bake-off against a very popular traditional RDBMS system's full-text add-on product, the Lucene search solution blew the doors off the traditional RDBMS in terms of performance, and that was even after a team of consultants from the vendor came in and tried to optimize the search performance. So, in this case a non-relational data store made more sense in terms of the problem context, which was data aggregation and fast full-text search.

Within the past few years we've started to see and hear about how companies like Amazon and Google are using non-traditional data stores such as SimpleDB and Bigtable for their own applications. Google App Engine in fact provides access to Bigtable, described as a "sparse, distributed multi-dimensional sorted map," as the sole persistent store for Google App Engine applications. Other organizations like the Apache Software Foundation have gotten into the non-relational data store market as well with things like CouchDB which is described as "a distributed, fault-tolerant and schema-free document-oriented database accessible via a RESTful HTTP/JSON API." One of the common threads among all these non-relational stores is that they are distributed, designed for fault tolerance, embrace asynchronicity, and are based on BASE (Basically Available, Soft State, Eventually Consistent) and CAP (Consistency, Availability, Partition Tolerance) principles as opposed to traditional ACID (Atomicity, Consistency, Isolation, Durability) properties found in traditional RDBMS systems. In addition, they are almost all either "schemaless" or provide a flexible architecture that promotes ease of schema changes over time, again as opposed to the rigid and inflexible schemas of traditional relational databases.

I don't think it's a coincidence that the companies creating and now offering these alternative data stores - free, commercial, or hybrid models like Google App Engine which is free up to a certain point - are all giants in distributed computing and deal with data on a massive scale. My guess is that perhaps they initially deployed some things on traditional RDBMS systems and outgrew them, or maybe they simply thought they could do it better for their own specific problems. But as a result, I think over time that organizations are going to start thinking more and more about the type of persistence they need for different problems, and that ultimately the RDBMS will be but one of the available persistence choices.

Apache Commons Collections For Dealing With Collections In Java

Posted on October 03, 2008 by Scott Leberknight

If you are (stuck) in Javaland, which for my main project I currently am, and you'd like a little of the closure-like goodness you get from, well, lots of other languages like Ruby, Groovy, C#, Scala, etc., then you can get a tad bit closer by using the Apache Commons Collections library. Ok, scratch that. You aren't going to get much closer, but at least for some problems the extensive set of utilities available can make your life a little easier when dealing with collections, in that you don't need to code the same stuff over and over again, or create your own library of collection-related utilities for many common tasks. Note also I am not intending to start any kind of religious war here about Java vs. Java.next, which is how Stu aptly refers to languages like Groovy, JRuby, Scala, and Clojure.

As a really quick and simple example, say you have a collection of Foo objects and that you need to extract the value of the bar property of every one of those objects, and you want all the values in a new collection that you can use for whatever you need to. In that case you can use the collect method of the CollectionUtils class to do this pretty easily.

List<Foo> foos = getListOfFoosSomehow();
Collection<String> bars = CollectionUtils.collect(foos, TransformerUtils.invokerTransformer("getBar"));

This simple code is equivalent to the following:

List<Foo> foos = getListOfFoosSomehow();
Collection<String> bars = new ArrayList<String>();
for (Foo foo : foos) {
    bars.add(foo.getBar());
}

Depending on your viewpoint and how willing you are to ignore the ugliness of passing a method name into a method as in the first example, you can write less code for common scenarios such as this using the Commons Collections utilities. If Java gets method handles in Java 7, the first example could possibly be more elegantly rewritten like this:

List<Foo> foos = getListOfFoosSomehow();
// Making a HUGE assumption here about how method handles could possibly work...
Collection<String> bars = CollectionUtils.collect(foos, TransformerUtils.invokerTransformer(Foo.getBar));

Of course, if Java 7 also gets closures then everything I just wrote is moot and irrelevant (which it might be anyway even as I write this). Regardless, with the current state of Java (no closures and no method handles) the Commons Collections library just might have some things to make your life a bit easier when dealing with collections using good old pure Java code.

The "N matchers expected, M recorded" Problem in EasyMock

Posted on September 30, 2008 by Scott Leberknight

EasyMock is a Java dynamic mocking framework that allows you to record expected behavior of mock objects, play them back, and finally verify the results. As an example, say you have an interface FooService with a method List<Foo> findFoos(FooSearchCriteria criteria, Integer maxResults, String[] sortBy) and that you have a FooSearcher class which uses a FooService to perform the actual searching. With EasyMock you could test that the FooSearcher uses the FooService as it should without needing to also test the actual FooService implementation. It is important in unit tests to isolate dependent collaborators so they can be tested independently.

One thing I pretty much always forget when using EasyMock is that if you use any IArgumentMatchers in your expectations, then all the arguments must use an IArgumentMatcher. Going back to the FooSearcher example, you might start out with the following test (written in Groovy for convenience):

void testSearch() {
  def service = createMock(FooService)
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: 10)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  expect(service.findFoos(criteria, 10, sortCriteria)).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

The above test fails with the following error message:

java.lang.AssertionError: 
  Unexpected method call findFoos(com.acme.FooSearchCriteria@ea443f, 10, [Ljava.lang.String;@e41d4a):
    findFoos(com.acme.FooSearchCriteria@ea443f, 10, [Ljava.lang.String;@268cc6): expected: 1, actual: 0
	at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:29)
	at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:45)
	at $Proxy0.findFoos(Unknown Source)
	at com.acme.FooSearcher.search(FooSearcher.java:19)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    ...

We expected a call to findFoos which takes a FooSearchCriteria, an integer, and a string array describing the sort conditions. But from the error message, EasyMock told us that the expected method was not called and so verification of the mock behavior failed. What happened? Well, basically the string array that was expected was not the string array actually passed as the argument. Look back at the error message and specifically the array arguments: the actual argument was [Ljava.lang.String;@e41d4a while the expected argument was [Ljava.lang.String;@268cc6. The FooSearcher.search method's signature is List<Foo> search(FooSearchCriteria criteria, String... sortBy). The varargs passed to FooSearcher.search are packed into a new array on each call, and that new array is subsequently passed to the FooService, which is what causes the difference between the expected and actual array arguments!
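You can see this array behavior in a few lines of plain Java; the pack method here is a hypothetical stand-in for the varargs call:

```java
import java.util.Arrays;

// Each varargs call packs its arguments into a brand-new array, so two
// calls with identical elements produce distinct array objects. An array's
// equals() is inherited from Object, i.e. identity, which is why the
// mock's default equality check on the arrays fails.
class VarargsIdentity {
    static String[] pack(String... values) {
        return values;
    }

    public static void main(String[] args) {
        String[] first = pack("bar", "baz");
        String[] second = pack("bar", "baz");
        System.out.println(first.equals(second));         // false: identity comparison
        System.out.println(Arrays.equals(first, second)); // true: element-wise comparison
    }
}
```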

To make sure that arguments such as arrays and other complex objects are matched properly by the mock object, EasyMock provides IArgumentMatcher to compare the expected and actual arguments to method calls. Essentially, it is like performing a logical "assertEquals" on the arguments. One of the matchers EasyMock provides is obtained via the static aryEq method in the EasyMock class. So for example if you had a method that took a single array argument, you could make an expectation of mock behavior like this:

def myArray = ["foo", "bar", "baz"] as String[]
expect(someObject.someMethod(EasyMock.aryEq(myArray))).andReturn(anotherObject)

Here you tell EasyMock to expect a call to someMethod on someObject with myArray as the sole argument, and to return anotherObject. Cool, so let's try to fix the failing test above using EasyMock.aryEq (which was imported statically using import static):

void testSearch() {
  def service = createMock(FooService)
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: 10)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  // Try to use EasyMock's aryEq() to ensure the expected array argument equals the actual argument...
  expect(service.findFoos(criteria, 10, aryEq(sortCriteria))).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

This test also fails with the following error message:

java.lang.IllegalStateException: 3 matchers expected, 1 recorded.
	at org.easymock.internal.ExpectedInvocation.createMissingMatchers(ExpectedInvocation.java:41)
	at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:33)
	at org.easymock.internal.ExpectedInvocation.<init>(ExpectedInvocation.java:26)
	at org.easymock.internal.RecordState.invoke(RecordState.java:64)
	at org.easymock.internal.MockInvocationHandler.invoke(MockInvocationHandler.java:24)
	at org.easymock.internal.ObjectMethodsFilter.invoke(ObjectMethodsFilter.java:45)
	at $Proxy0.findFoos(Unknown Source)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    ...

Wait, shouldn't that have made EasyMock ensure that the supplied argument was verified using an IArgumentMatcher, specifically an ArrayEquals matcher? Well, sort of. And this is where I always forget what the "N matchers expected, M recorded" error message means and fumble around for a few minutes while I remember. In short, the rule is this:

If you use an argument matcher for one argument, you must use an argument matcher for all the arguments.
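Why the all-or-nothing rule? Roughly speaking, matcher methods like aryEq register the real matcher in a static list as a side effect and return a dummy value, and when the expectation call is recorded, EasyMock checks that the list is either empty (meaning: use equals() for every argument) or holds one matcher per argument. Here is a toy sketch of that bookkeeping with hypothetical names; EasyMock's actual implementation differs in the details:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of EasyMock's matcher bookkeeping: matcher methods register
// themselves in a static list, and the recorded call checks the count.
class MatcherBookkeeping {
    private static final List<String> reportedMatchers = new ArrayList<String>();

    // Stand-in for a matcher method such as aryEq(): registers the matcher
    // and returns a dummy value so the expectation expression still compiles.
    static String[] aryEq(String[] expected) {
        reportedMatchers.add("aryEq");
        return expected;
    }

    // Stand-in for the mock recording an expected call with the given
    // number of arguments: all or none of them must have matchers.
    static void recordCall(int argumentCount) {
        int recorded = reportedMatchers.size();
        reportedMatchers.clear();
        if (recorded != 0 && recorded != argumentCount) {
            throw new IllegalStateException(
                argumentCount + " matchers expected, " + recorded + " recorded.");
        }
    }

    public static void main(String[] args) {
        aryEq(new String[] {"bar", "baz"}); // one matcher registered...
        try {
            recordCall(3); // ...but the recorded call has three arguments
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // 3 matchers expected, 1 recorded.
        }
    }
}
```

Since the matcher calls are made inside the argument list, EasyMock has no way to know which argument positions they belonged to, only how many there were, hence the count check.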

So in the above example, we recorded one matcher via the call to aryEq, but the expectation has three arguments, so EasyMock now expects an argument matcher for each of them. This makes sense of the error message: we need to add argument matchers for the other arguments as well. So let's fix the test:

void testSearch() {
  def service = createMock(FooService)
  def maxResults = 10
  def searcher = new FooSearcher(fooService: service, maxAllowedResults: maxResults)
  def criteria = new FooSearchCriteria()
  def sortCriteria = ["bar", "baz"] as String[]
  def expectedResult = [new Foo(), new Foo()]
  // If you define one matcher for an expected argument, you need to define them for all the arguments!
  expect(service.findFoos(isA(FooSearchCriteria), eq(maxResults), aryEq(sortCriteria))).andReturn(expectedResult)
  replay service
  def result = searcher.search(criteria, "bar", "baz")
  assertSame expectedResult, result
  verify service
}

Now the test passes as we expect it to. We used several other common types of argument matchers here via the static isA and eq argument matchers. The isA matcher ensures the argument is an instance of the specified class, while the eq matcher checks that the actual argument equals the expected argument via the normal Java equality check, i.e. expected.equals(actual). So in summary, if you ever receive the dreaded "N matchers expected, M recorded" error message from EasyMock, you know you need to ensure that all arguments to an expectation use a matcher. And, if you got this far and were dying to mention that if you're using Groovy to test Java code there are easier ways in Groovy to test than using a framework like EasyMock, you're right for the most part. There are still some things you cannot do when testing Java code using Groovy. I plan to go into that more in a future blog post.