
Posts by Adam Milligan

Why not to use ARC

If you have done any development for iOS in the past few years, you have at least some familiarity with ARC. The overall response to ARC since Apple released it with iOS 5 has been little short of orgasmic. You can't swing a dead internet cat without hitting a blog post from someone explaining how ARC saved his/her dying grandmother, and how if you're not using it on every project you touch then you're helping the Commies win.

I've seen some projects do perfectly well with ARC, but at the same time I feel it brings its own set of challenges that we shouldn't overlook. Here are some reasons why you might want to consider not using ARC on your next Objective-C project.

Cedar Expectations

As I wrote here, we've used OCHamcrest matchers for writing expectations in Cedar for some time, but we've found them unsatisfying. We wanted convenient matchers for Objective-C, like the ones Jasmine provides for JavaScript. To that end, we added Cedar-specific expectation functions and matchers that specifically solve the problems we had with OCHamcrest.

To use Cedar's expectations you need to make a couple small changes to your spec files:

1) Cedar matchers use C++ templates. Tell the compiler to expect some C++ code by changing the file extension on your spec files from .m to .mm.

2) Cedar matchers live in a C++ namespace. At the top of your spec file, after the includes, add this line:

using namespace Cedar::Matchers;
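
With those changes in place, expectations read much like their Jasmine counterparts. A quick sketch of the style (the composer object here is hypothetical; see the Cedar README for the full matcher list):

expect(composer.name).to(equal(@"Ludwig"));
expect([composer.symphonies count]).to(equal(9));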

The Trouble With Expectations in Objective-C

At Pivotal we write a lot of tests, or specs if you prefer. We TDD nearly everything we write, in every language we write in and on every platform we write for, so we actively work to improve every aspect of our testing tools. Personally, as I've written tests in Objective-C I've found that the syntax of expectations leaves much to be desired.

Like many people who write Objective-C, I've spent a fair bit of time with languages like Ruby and JavaScript. When I write specs I often yearn for the simplicity of the expectation syntax in those languages. For example, here are some simple expectations in JavaScript, using Jasmine:
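
expect(composer.name).toEqual("Ludwig");
expect(composer.symphonies.length).toEqual(9);
expect(composer.symphonies).toContain("Eroica");
expect(composer.symphonies).not.toContain("Appassionata");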


In comparison, the same expectations in Objective-C, using OCHamcrest:

assertThat(composer.name, equalTo(@"Ludwig"));
assertThatUnsignedInt([composer.symphonies count], equalToUnsignedInt(9u));
assertThat(composer.symphonies, hasItem(@"Eroica"));
assertThat(composer.symphonies, isNot(hasItem(@"Appassionata")));

Cedar vs. Xcode 4 (round one: the command line)

I've finally found a bit of time to update Cedar to work with Xcode 4, and I hope to have it working smoothly some time in the next few days. However, I've already come across my first significant issue with the Xcode 4 changes: the location of build products.

Not unexpectedly, the problem has to do with command-line builds using xcodebuild. By default, Xcode 4 now puts build products into a project-specific directory in the "Derived Data" folder; this looks something like /Users/pivotal/Library/Developer/Xcode/DerivedData/Cedar-somegianthashstring/Build/Products/Debug-iphonesimulator/libCedar-StaticLib.a. This generally isn't a problem, because the BUILD_DIR build setting contains the build directory, should you need to find this location during the build process.

Sadly, when you build from the command line using the xcodebuild command, the build products still go into the old Xcode 3 build location, but the BUILD_DIR build setting contains the new Xcode 4 build directory. This means any script that looks for the build results in the directory specified by BUILD_DIR won't find anything.

The build target for Cedar's static framework is simply a script that uses xcodebuild to build the static library for both the simulator and the device, and then uses lipo to make a fat binary from the results. Because it can't find the build results at the location specified by BUILD_DIR, it now fails messily.

The easiest workaround I've found is to change where build products go using the Locations setting in the Xcode 4 preferences (details below). Unfortunately, this isn't a project-specific setting, so you'll have to change your preferences similarly to make it work. I haven't found any problems with changing the location of the build products, but this does mean the Cedar static framework (as well as the related static frameworks for OCHamcrest and OCMock) won't build with the default settings. Unsatisfying.

The longer term solution is for Apple to act on the bug I filed. We'll see how that goes.

UPDATE: Thanks to Christian Niles for pointing out the SYMROOT environment variable in a pull request. Setting this for command-line builds forces Xcode to use the specified location for all build products, and updates the BUILD_DIR build setting to match.
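
For example, a command-line build that pins the build location might look something like this (the target and configuration names here are illustrative):

xcodebuild -target Cedar-StaticLib -configuration Debug -sdk iphonesimulator SYMROOT=build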

Steps for changing the build location in Xcode 4:

  • Open Xcode preferences (Command-,)
  • Select the "Locations" tab
  • Change the "Build Location" drop down from "Place build products in derived data location" to "Place build products in locations specified by targets."

Colorized output for Cedar

Thanks to Sean Moon and Sam Coward, Cedar now has colorized output on the command line:

[Image: Colorized Cedar report]

If you'd like to display colorized output like this you can specify the appropriate custom reporter class using the CEDAR_REPORTER_CLASS environment variable. We do this in our Rakefiles, like so:

task :specs => :build_specs do
  ENV["CEDAR_REPORTER_CLASS"] = "CDRColorizedReporter"
  system_or_exit(File.join(BUILD_DIR, SPECS_TARGET_NAME))
end

You can set the environment variable in whatever way works for you. You can also set it to any reporter class you choose, so customize away.
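
For instance, to run a built spec binary by hand with the colorized reporter (the path to the binary here is hypothetical):

CEDAR_REPORTER_CLASS=CDRColorizedReporter build/Debug-iphonesimulator/Specs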

iPhone on blocks: UITextFields

If you've ever used a UITextField in an iPhone project (or, I suppose, an NSTextField in a Cocoa project) you know that you pass it a delegate object in order to respond to events. Handling the "Return" key press from the on-screen keyboard may look something like this (probably implemented in your view controller):

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    if (0 == [textField.text length]) {
        return NO;
    }
    [self doSomethingWithText:textField.text];
    [textField resignFirstResponder];
    return YES;
}

The delegate pattern is de rigueur for Cocoa classes, so you've likely never given this much special thought. Unless, that is, you decided at some point to have two text fields on screen at once. With two text fields you need to handle two sets of callbacks. You have a couple options for how to do this:

  1. Use the same delegate to handle both sets of callbacks, and use conditionals or switch statements to differentiate between the text fields.
  2. Create UITextField subclasses for each text field, each of which knows how to handle its own events. Each subclass will need a reference to the view controller, and you'll need to expose methods in the view controller's public interface for the subclasses to call, in order to effect some change in the system.
  3. Create a separate delegate class for each text field. As in the previous option, each delegate class will need a reference to the view controller and a way to send it messages to effect changes in the system.

None of these options feels particularly satisfactory: the second overuses inheritance, which the delegate pattern exists largely to avoid; both the second and third can result in class explosion; and the first feels so... procedural. Isn't Object Oriented Programming supposed to save us from problems like this?
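
Blocks suggest a fourth option: a single reusable delegate class that holds a block for each callback it cares about. Here's a minimal sketch of the idea (the class, type, and property names are my own, not from any library):

#import <UIKit/UIKit.h>

typedef BOOL (^TextFieldReturnBlock)(UITextField *);

// A delegate whose only job is to hold a block for each callback it handles.
@interface BlockTextFieldDelegate : NSObject <UITextFieldDelegate>
@property (nonatomic, copy) TextFieldReturnBlock onReturn;
@end

@implementation BlockTextFieldDelegate
@synthesize onReturn = onReturn_;

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    // Fall back to the default behavior when no block is provided.
    return self.onReturn ? self.onReturn(textField) : YES;
}

- (void)dealloc {
    [onReturn_ release];
    [super dealloc];
}
@end

Each text field then gets its own delegate instance with its own block; the block captures whatever state it needs from the enclosing scope, so there are no subclasses, no conditionals, and no extra public methods on the view controller.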

iPhone UI Automation tests: a decent start

Apple's inclusion of the UI Automation component in Instruments with iOS 4 is a definite step in the right direction. It's the first reasonable way to write tests that externally exercise your actual app, rather than weirdly injecting test code into it. It's also the only way to programmatically test lifecycle issues, such as how your app behaves when put in the background, when rotated, when the device locks, etc. Good stuff. Unfortunately, the current implementation of UI Automation also has some significant problems:

  1. There's no way to run tests from the command line. The subtitle of the WWDC talk for UI Automation was "find bugs while you sleep"; unfortunately, you can't find bugs while you sleep if you have to wake up to click the "Run" button.
  2. There's no way to set up or reset state. The lack of fixtures that set up a known state at the beginning of iPhone tests has been a problem for unit testing (with OCUnit, Cedar, or what have you), particularly for apps that use CoreData. Now it's worse than ever, because UI Automation manipulates the actual state of the app on the device, much like Selenium does in a browser. Sadly, UI Automation provides no method for resetting the device's state, making it nigh impossible to prevent tests from affecting one another.
  3. Part of the previous problem is that UI Automation has no concept of discrete tests; it provides no form of organization for your test scripts. No test methods, no setup or teardown methods, just one big stream-of-consciousness line of execution. Obviously you can break this up into functions as you see fit, but why reinvent the wheel? Since the test script is JavaScript, I like the idea of using Jasmine for this.
  4. There's no way to programmatically retrieve the results of the test run. You could debate the value of solving this issue at the moment, considering there's no way to programmatically start the tests either. However, even if you were to write some clever AppleScript to kick off the tests automatically the only indication of the pass/fail status is in the Instruments UI, so you still have to wake up to check the results. I searched around a bit for information on deconstructing the protocol that UI Automation uses to talk to the device, but I came up empty.

I'll definitely use UI Automation, particularly for app lifecycle testing. But not being able to add those tests to a CI build definitely stings. I very much hope Apple maintains this momentum on automated testing and makes it more developer-friendly.

iPhone UI Automation tests with Jasmine

Since the language of the new iPhone UI Automation component is JavaScript I figured the easiest way to organize tests is to use a JavaScript testing framework, such as Jasmine. So, I created jasmine-iphone, which is little more than a few simple scripts to make UI Automation and Jasmine play nice.

Once you clone jasmine-iphone from GitHub (it includes Jasmine as a submodule, so be sure to run git submodule init && git submodule update), you can copy the example-suite.js file, import your spec files, point Instruments at your suite.js, and go.

As an example, I set up a trivial suite in the Cedar project. The directory structure looks like this:

Project Directory
- Spec
    - UIAutomation
        - jasmine-iphone     <--- submodule
            - jasmine        <--- nested submodule
        - suite.js
        - thing-spec.js
        - other-thing-spec.js

The suite.js file is relatively simple (note that I moved it up one directory from where the example-suite.js file is, so the #import statements are slightly different):

#import "jasmine-iphone/jasmine-uiautomation.js"
#import "jasmine-iphone/jasmine-uiautomation-reporter.js"

// Import your JS spec files here.
#import "thing-spec.js"
#import "other-thing-spec.js"

jasmine.getEnv().addReporter(new jasmine.UIAutomation.Reporter());

You can write the specs themselves the same way you'd write Jasmine specs for anything else. The UIAutomation subclass of the Jasmine Reporter takes care of marking the start of each spec, as well as reporting if it passes or fails. You'll need to use the UIAutomation classes and methods for driving your application's state, of course.
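
For example, a spec for a hypothetical screen might look something like this (the element names and expected values are made up; the UIA accessors are the standard UI Automation ones):

describe("the greeting screen", function() {
  it("shows a greeting when the button is tapped", function() {
    var window = UIATarget.localTarget().frontMostApp().mainWindow();
    window.buttons()["Greet"].tap();
    expect(window.staticTexts()[0].name()).toEqual("Hello");
  });
});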

That's it. Try it out and see what you think.

Cedar device specs and CI

One of the most common complaints I've read about OCUnit, the unit testing framework built into Xcode, is that the tests you write with it won't run on the device. In addition, I personally have found the process of setting up a target for tests that depend on UIKit confusing and onerous. So, one of our goals for Cedar was to make testing UI elements easier, by making it simple to run specs either in the simulator or on the device.

Probably the second most common complaint I've read about OCUnit is that the tests run as part of the build. This makes the test output difficult to separate from the build output, and makes it impossible to use the debugger when running tests. So, in addition to making it easy to run specs on the device, we wanted to be able to run them as a separate, debuggable executable.

Finally, we consider it important that our specs run in our CI system. That means we wanted to be able to run Cedar specs from the command line, and get an exit code signifying success or failure. At the same time, some of us appreciate the value of the green/red feedback for specs passing and failing, so sometimes we like a nice UI. As of today, Cedar will accommodate all of these various requirements.

Objective-C exceptions thrown inside methods invoked via NSInvocation are uncatchable

Whether you're using Cedar or not, if you've upgraded to the iOS 4.0 SDK you may have run into some odd behavior with exception handling blocks not catching exceptions. Strangely, the problem isn't due to the exceptions themselves (at least not in any obvious way), but to how you invoke the methods that raise them. An exception thrown from within a method you invoke directly behaves as expected. However, if you invoke that same method indirectly using NSInvocation, any exception it throws becomes uncatchable, crashing the current process regardless of any exception handling code.

This happens only when running against the currently available iOS 4.0 SDK. Exception handling for both direct and indirect invocations performs as expected when using the OS X 10.6 SDK and previous versions of the iPhone SDK.
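
Here's a minimal sketch of the two cases, assuming a stand-in class whose method raises:

@interface Thrower : NSObject
- (void)throwSomething;
@end

@implementation Thrower
- (void)throwSomething {
    [NSException raise:@"Boom" format:@"raised inside the method"];
}
@end

Thrower *thrower = [[[Thrower alloc] init] autorelease];

// Direct invocation: the @catch block runs, as expected.
@try {
    [thrower throwSomething];
} @catch (NSException *e) {
    NSLog(@"caught: %@", e);
}

// Indirect invocation via NSInvocation: on the iOS 4.0 SDK the exception
// escapes this @catch block entirely and crashes the process.
NSInvocation *invocation = [NSInvocation invocationWithMethodSignature:
    [thrower methodSignatureForSelector:@selector(throwSomething)]];
[invocation setTarget:thrower];
[invocation setSelector:@selector(throwSomething)];
@try {
    [invocation invoke];
} @catch (NSException *e) {
    NSLog(@"caught: %@", e); // never reached on the iOS 4.0 SDK
}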