After some time to digest the news about Macs with Intel Inside, I’ve observed something that makes me worry. Not worry about the transition, but about some developers’ views on software testing. First, from Will Shipley, the long-time NeXT developer at Omni Group and now cofounder of Delicious Monster. Speaking of his work at Omni when NeXT moved from the Motorola 68K CPU to the Intel processor, he says:
My then-company ported all of our apps to run on Nextstep for Intel Processors in a couple days. I think OmniWeb took the longest; if I recall the others pretty much recompiled and ran.
Then he talks about his experience working on the Mac Intel prototypes at WWDC as follows:
It turns out all Apple has ready now are prototype machines, which are in plentiful supply here at WWDC. So, we went into their lab, opened up our source code for Delicious Library, clicked on the “compile for 10.4 only” option, and compiled our program for Intel processors.
And it ran. It looked great to us.
So, to sum up: no source code changes. We clicked, it compiled, it ran. The program we built will now run on both PowerPC and Intel machines.
And, when you see an app recompile on the new machine and run just fine—or even when you see one run and see issues, but know that it’s just a matter of cleaning up some bits and bobs—well, it’s not a scary proposition anymore. What’s left to do is straightforward coding. There’s nothing that requires massive, or even minor, changes to the design of an application. It’s just making sure that the code doesn’t make any assumptions it shouldn’t.
For a more concrete example, Apple has produced documentation for developers who use PowerPlant. PowerPlant uses various resources to describe the UI objects that are created–windows, menus, dialog boxes, etc. Reading these resources correctly in the context of a universal binary will be crucial. They’ve included some sample code to flip bytes in PowerPlant resources. It’s more than 780 lines of code–780 lines of tedious, byte-flipping code.
No. The PowerPlant resource example isn’t extreme. It’s just a common, specific example for which Apple has been kind enough to provide a solution. Anyone shipping serious applications who has not already had to solve this problem for cross-platform scenarios has quite a bit of tedious work to do. While I’ve welcomed this change for my own circumstances, I don’t envy the position in which a number of my compatriots down in San Jose find themselves.
Maybe it’s my perspective as a full-time software tester (more about that in another post) and I might be missing the light here, but Universal Binaries just doubled the amount of testing I need to do to verify, as Will put it, that the “code doesn’t make any assumptions it shouldn’t.” This is simply a non-trivial task. Sure, I’ve been around a while and maybe I’m jaded, but when someone says to me, “Hey, I clicked the check box, it compiled and booted, looks good!” I get all sorts of bad feelings about what hideous bugs are lurking inside.
Software testing is perhaps one of the most difficult disciplines in the software industry, and good testing costs. It costs time and money. Thanks to a new CPU, a new compiler for Intel, and a new runtime translator for PowerPC applications on an Intel box, Apple has single-handedly at least doubled, if not tripled, the cost of testing a Mac software product. That, my friends, is not a good thing. Let me hasten to add that, elegant romantics aside, I think this is a good move, but for a software developer and especially for a software tester, it’s going to cost.
I should mention that I think all of the developers named here do test, and work very hard to produce high-quality software. But high-quality software is SO much more than an application that compiles and launches, and I think that aspect is being missed in the discussion.