Perfect Forwarding

Software dev, management, and gearhead ego stroking

Debugging Xcode app terminations from NSFileHandle

I ran into a situation while initializing my app where cataloging a large set of files (more than 1400) caused the app to quit immediately, with no formal crash log or exception triggered in the debugger.

"Terminated due to unknown Memory condition"

read the dialog box, and a Google search didn’t turn up much.  Looking at the incomplete crash log in Xcode, the inciting process was listed as “unknown”, but several of the active processes had “[vnode-limit]” written in the “reason” column.  I also noticed that the “fds” column read an even 1600.

In my async startup code, which read files out of the Documents folder and added entries to an SQLite database recording their metadata, I had a loop that went something like this:


for (NSString* filePath in catalogPaths)
{
    // databaseContainsFile: stands in for my actual "already in the db?" check
    if (![self databaseContainsFile:filePath])
    {
        NSFileHandle* file = [NSFileHandle fileHandleForReadingAtPath:filePath];
        // … do stuff with file, add to db
        // (note: no explicit close of the handle here)
    }
}

I thought that leaving the scope of the if block would have the file handles released and closed, but this doesn’t seem to be the case.  Adding a [file closeFile] to the end of the if block remedied the random crashes.
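For reference, with the fix in place the body of the if block looks something like this (fileHandleForReadingAtPath: standing in for however you actually open the handle):

NSFileHandle* file = [NSFileHandle fileHandleForReadingAtPath:filePath];
// … do stuff with file, add to db
[file closeFile];   // explicitly give the file descriptor back before the next iteration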

Unknowingly, I suppose, I hit a limit on concurrently open files, though the debugger didn’t make this very clear.  If your app runs into a similar abrupt termination, check how many file handles you have open.

GaMBL Alpha Released

I’m currently working on an open-source game music player called GaMBL (Game Music Box for Lion), which has reached an alpha state, and I’m looking for testers and feedback.

Supported systems include 8- and 16-bit Nintendo and Sega consoles (NSF/GYM/SPC/etc.).

The OS X game console sound emulator I’ve been developing has reached an alpha version.  It supports the Famicom/NES family, Game Boy, Mega Drive, and more.  I’m currently recruiting volunteer testers, so please get in touch if you’re interested.

You can grab the binary and documentation from the GitHub project page; please open issues there for any problems you run into.

https://github.com/dgventura/gambl/wiki

Perforce client path error resolution

Last week, while running my Jenkins master/slave locally, I ran into some Perforce issues that were breaking my builds.  While trying to sync I got an error message along the lines of:

error: File
/Users/david/.jenkins/jobs/test/workspace/Projects/blah/lib/libFlurryAnalytics.a 
is not inside permitted filesystem path 
/Users/david/Perforce/perforce_1666/david_personal

The key phrase is “is not inside permitted filesystem path”, and I couldn’t find this string easily in a number of Google searches, so I asked Perforce support for help.

The issue at hand is that if you have P4CLIENTPATH defined in the environment of your Perforce process, any command that puts files on disk (such as sync) outside of that path will produce the error above.  This happened for me because I added some new workspaces for different Jenkins jobs, and they lay outside of my original Perforce home folder.

This environment variable can be set through your shell or via a P4CONFIG file.  In my case it was defined in my .tcshrc, which ran every time I opened a terminal session.

You can remove the definition of the variable, or undefine it before you run Jenkins to resolve the issue.
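For example, in tcsh (where mine was defined), something like this in the environment that launches Jenkins does the trick; the second, commented-out form is an untested alternative if you would rather widen the permitted paths than drop the restriction:

# drop the restriction entirely before starting Jenkins
unsetenv P4CLIENTPATH

# ...or widen it to cover the Jenkins workspaces too; the variable accepts a
# semicolon-separated list of permitted directories
#setenv P4CLIENTPATH "/Users/david/Perforce/perforce_1666;/Users/david/.jenkins"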

More info about the variable can be found on the Perforce website, though the particular error string it can trigger is not documented there (hence this blog entry, to hopefully aid someone else with a similar problem).

http://www.perforce.com/perforce/doc.current/manuals/cmdref/env.P4CLIENTPATH.html

Automating VPN for Jenkins SCM

Making Jenkins builds effortless is quite easy with the build steps and depth of plugins available.  I’ve been working the nomad lifestyle for my employer the last several months, so a VPN connection before source control management is a given.  For the continuous integration builds kicked off while I’m coding this is already taken care of, since I’m connected at the time; my nightly builds, however, would usually fail because the machine wasn’t on the company’s network.  Perforce fails to connect and the build tanks.

Fortunately, using a Jenkins plugin and a little scripting this is easily remedied.

Steps:

  1. Get the pre-scm-buildstep plugin and add it to Jenkins.
  2. Write a script/batch file of your choice to connect to the VPN.
  3. Add a pre-build step to run your script per the plugin documentation.
  4. Enable the Jenkins job SCM retry count if your script spawns without waiting for a return value.
  5. (Optionally) Add a post-build step to disconnect from the VPN.

In my case I’m running Jenkins on my MacBook, so I use an AppleScript to connect to the VPN and run it with osascript.  Here’s an example of such a script.
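Something along these lines works for a VPN service configured in the Network pane of System Preferences (“Work VPN” is a placeholder for whatever your service is named):

-- connect_vpn.scpt: bring up the named VPN service if it isn't already connected
tell application "System Events"
    tell current location of network preferences
        set vpnService to service "Work VPN" -- replace with your VPN service name
        if not (connected of current configuration of vpnService) then
            connect vpnService
        end if
    end tell
end tell

The pre-build step then just invokes osascript with the path to the saved script, and the optional post-build disconnect is the same structure with disconnect in place of connect.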

There’s probably a more sophisticated way to handle VPN connection failures and report them, but this does the trick right now.

Professionalism in code maintenance

Several weeks ago, a fellow I went to school with was decrying bad spaghetti code, sending out a call to trash the entire project and start over.  I reminded him of the age-old aphorism that the majority of software cost is spent on maintenance, which, after not finding in me a shoulder to cry on, he irritably disregarded.

As engineers we dream of building something fantastic: a tower, a tribute to the heavens that will be elegant in form and function, something our peers can marvel at.  But as powerful as our favorite language’s paradigm is, it’s nothing but vain ego-stroking unless it gets the job done.  I’m not devaluing clean, fresh code.  On the contrary, every time we add another inspector to a class, for the sake of maintainability the ramifications need to be thought through.  But the truth is most every kind of module you need already exists, and much of it is in a licensing model compatible with what you’re working on.  The best code you can write is the code that you don’t.

The fact is that, money-wise, you’re going to spend most of your resources maintaining something written long before, so the least you can do is muster a positive attitude and see it as a chance to bring more value with less effort.  Reading, understanding, and utilizing other developers’ source (which, by the way, is pretty much the same as anything you’ve written and moved on from more than six months ago): these are some of the most important skills you can refine as a software engineer.  Knowing what the code was originally intended for, and understanding the thought behind its creation, will pay off many times over.  Fail to heed this and along the way you’ll find yourself rewriting a lot of modifications.  The other side of the coin is that unless you’re working on a throwaway prototype for a personal project, it pays to spend some time documenting your code and making it easier for other developers to reuse down the line.

Recently I started work porting a Carbon-based frontend of the popular game console emulation library GME to my native 64-bit OS X Lion.  In about eighty hours of work I went from a zip file of code that didn’t compile to an app rewritten in Cocoa and Core Audio that replicated the core functionality.  I did this with less than 500 lines of code, switching out only the GUI, application timing, and audio interface while maintaining the essential application flow.  This was possible because the source was clear, consistent, and the key points were well documented.  I didn’t agree with a lot of the mechanisms and models used, but I understood the motivation behind their selection and respected the unity of the entire package.

The key challenge for my project moving forward will not be restoring the stubbed-out functionality or modernizing the UI; it will be preserving the code that works intact, and clearly demarcating my modifications and extensions.  Because in another ten years, when I’ve long since stopped working on the project and some other developer wants to cut their teeth on programming for the latest platform’s APIs, I owe them the same professionalism the original developer showed me.

Responsible coding is accepting the realities of software maintenance and doing your part for the next generation.  The more attractive we can make the implementations that already exist, the less chance someone will waste time reinventing the wheel.

Premature optimization >= common sense

There is a famous saying credited to Donald Knuth that many programmers take at face value and use as an excuse to ignore the serious topic of software performance.  Performance considerations should not adversely affect software design, and unrolling every loop you write is not the way to go, but that’s not what I’m talking about.  There is a simple set of common-sense practices that every engineer is responsible for knowing and following, regardless of his position in the workplace.  Programming is a science and the computer is its medium, as much as hydrogen is to the chemist.  Here are a number of performance ground rules to help prevent the software equivalent of smoking near a gas valve.

1. Understand and respect the basic principles of the hardware you’re writing for

Software engineers need a thorough knowledge of and reverence for their medium, just as a carpenter does wood and a blacksmith metal.  In general, for consumer devices, processing throughput will hit a wall before you run out of memory.  Computers are good at doing the same thing in large batches with minimal changes to state (rendering polygons, multiplying numbers, testing planar intersections, etc.).  Integer and floating-point calculations are both possible, but mixing them willy-nilly is almost always a bad idea.  Data with good locality is fast (just like the materials on your desk), whereas frequently going out to far-flung parts of memory (getting up from your seat every time you need to check a book on the shelf) really slows things down.

Some good habits to get into:

  • Standardize data types across subsystems by default (floats, ints, quaternions, Unicode strings), but make exceptions for exceptional reasons
  • When memory is abundant, cache intermediate and frequently referenced state
  • Lay out and group data together in memory when the size and access frequency are going to be significant
  • Do loop processing in an order that keeps the same pages of memory in the cache (swapping the X and Y order for iterating over a 2D region represented by a 1D array can make a big difference; see the sketch after this list)
  • Use the constructs built into the language and libraries you have, don’t ignore them (C++ has tons of these, and they get better with every revision)
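As a quick sketch of the loop-ordering point above, assuming a 2D region stored row-major in a 1D array (the usual C/C++ layout):

#include <vector>

// Sum every cell of a 2D region stored row-major in a 1D array.
// With y outermost and x innermost, memory is walked sequentially and each
// cache line is fully used; swapping the loops strides by width elements
// per step and thrashes the cache.
float SumRegion(const std::vector<float>& grid, int width, int height)
{
    float total = 0.0f;
    for (int y = 0; y < height; ++y)        // rows are contiguous...
    {
        for (int x = 0; x < width; ++x)     // ...so x should vary fastest
            total += grid[y * width + x];
    }
    return total;
}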

2. Processing data is the same as any other labor, don’t foster bureaucracy out of laziness

A function that returns a current position is guaranteed to be valid if internally it computes the derived result every time, but if that value can only change once per update of the main loop, why calculate it every time it’s referenced?  Building-block functions that return a group of objects based on a key are useful, but if they involve iterating over some large set in the process, they’re not the kind of code that should be the backbone for processing one of many instances.  This sounds obvious, but it’s not uncommon to find a junior programmer searching through a huge list inside of a doubly-nested loop, producing the same result tens of thousands of times.  This goes for functions of all sizes, and the more commonly used, the more important.  Constructors are functions too!  Throwing around complex types passed by value in C++ can incur hundreds of unseen copy operations, occurring with parameters as well as return values.  Be aware of what each statement is doing under the hood; this is the difference between a professional and a hack.
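As a minimal sketch of that last point (the Particle type here is made up, but the pattern shows up everywhere):

#include <string>
#include <vector>

struct Particle
{
    std::string        name;
    std::vector<float> samples;   // potentially thousands of entries
};

// Pass-by-value: every call copies the string and the entire vector.
float AverageByValue(Particle p)
{
    float sum = 0.0f;
    for (size_t i = 0; i < p.samples.size(); ++i)
        sum += p.samples[i];
    return p.samples.empty() ? 0.0f : sum / static_cast<float>(p.samples.size());
}

// Pass-by-const-reference: identical for the caller, no hidden copies.
float Average(const Particle& p)
{
    float sum = 0.0f;
    for (size_t i = 0; i < p.samples.size(); ++i)
        sum += p.samples[i];
    return p.samples.empty() ? 0.0f : sum / static_cast<float>(p.samples.size());
}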

3. Use the right tools for the job

Would you use a wrench to drive in a nail just because you’d been working with it all week and the hammer was upstairs?  If you have several languages at your disposal, choose the right one for the task at hand.  Scripting languages are great for experimentation and prototyping because of their iteration speed, but don’t write entire UI frameworks in them that perform thousands of calculations every cycle.  And don’t even think of writing another container class or max/min function when there are numerous industry-proven implementations available.

4. Take responsibility for your actions

Most importantly, be aware of the performance issues at hand, and have the professionalism to be honest about the ramifications of your code.  If you need to use inefficient means to implement some critical feature for the time being, put in a comment that you’re aware of it and it needs to be revisited soon; better yet, enter a quick item in your task management system and set a due date.  Finally, when it is your code that is found to be inducing a three-second blackout between menus, humbly do the repair and acknowledge that you will endeavor to avoid a repeat blunder.

Again, all this should be second nature to any journeyman developer, but unfortunately weak leadership can lead to a lot of bad habits and puzzled stares when these sorts of topics come up about four days before a milestone (which sadly they often do).  If you’ve been hired as a software engineer, whether you find it sexy or not, it’s your obligation to maintain awareness and diligence with your craft.

Writing performant software doesn’t have to be a separate task.  When using best practices becomes the instinctual way you write, it takes no longer than sloppy code and saves you the effort of coming back to it later.

The effective society starts with you

Drucker’s The Effective Executive was one of my favorite books that I got into last year.  I liked it because it makes sense to me and is filled with actionable items that I can try out.  Whether my implementation of those practices will actually lead to their intended purpose remains to be seen long-term, but I’m a sucker for that kind of rhetoric.  

I want to believe that effectiveness can be bred by example.  This is the kind of dogma that the podcast Manager Tools preaches: taking responsibility for not only your actions but your team’s; being frank about your perspective and the results of your actions; a straightforward toolkit for cutting out the subtext and making policy crystal clear.  But the validity of that mindset is another post.

The point I want to get at is that if we accept that the behaviors of the leader propagate to his team, then the way we can build an effective company, and through it an effective society, is by example.  Time management, evaluating on results, and focusing on strengths have to be done first by the hand and followed with the mouth.  Unfortunately there is a mountain of doctrine and brainwashing we have to fight against to make this second nature.

The education system I was raised in was centered around repetition and rewarding effort, not results.  This works for building confidence in youth, but without the right mix of teaching effectiveness it’s nothing more than intellectual thumb-sucking.  After years of knowing full well that my time needs to be applied to priorities and concrete goals, I still catch myself smiling happily every time I squeeze in another five minutes of reading some book or finishing a tutorial while waiting for a train to arrive.

The message of The Effective Executive is the same as that of the lean movement: eliminate waste and direct resources in a manner proven to lead to the satisfaction of our goals.  And this is something we need to ingrain in every aspect of our professional lives, because until it becomes second nature to verify that what we’re doing now is directly contributing to the greater purpose, it’s absurd to expect that our directs are going to lead the charge (and if that becomes the case, then a change in roles is probably in order).

Do not waste my time

Recently I finished reading The Lean Startup after a colleague of mine recommended it to me.  The core message of lean practices is something that I’ve felt for a while but had not been able to articulate effectively.  At the macro level as well as the micro, effort directed to work that does not add value for the customer is wasted.  The turning point came when I realized that it doesn’t matter how right I may be in policy or practice; if no one buys into the facts, they might as well be fallacy.
 
Currently, in addition to my own development projects, I am working as an optimization engineer on the side.  This is the quintessential example of how easy it is to be misled by vanity metrics.  For every millisecond of processing time I can shave, if the gains are lost to some other glaring deficiency of the product, I might as well not bother.  In these sorts of cases I begin to think that optimization of systems has more value as a set of examples for educating staff towards better practices, in turn hopefully obviating the need for optimization in the future.  However, this is an entirely different goal and again needs the buy-in of what is almost always a beleaguered and already defensive team thrashed about by never-ending crunch.
 
So perhaps the greatest optimization, then, is as Ries says: the banishment of building anything that does not contribute directly to the grand goal…receiving remuneration for value given to the customer.  Then the deck of priorities reshuffles, placing product design and vision at the extreme top.  Without strong executives ensuring this through validated learning, waste propagates proportionally to the project size, no matter how adroit the craftsmen.
 
The question I have not resolved is: who is responsible for this?  Does it lie with the leadership, or should it be canon to the role of every member in the product chain?  If even the greenest graduate programmer could look at his efforts and astutely ask, “Is what I’m doing directly contributing to the ultimate goal?”, would waste be that much less likely to creep into the system?  Or does this sort of second-guessing undermine the inherent efficiency of his craft and expend energy better spent unquestioningly building what the Product Manager deems worthy?  From an agile standpoint it would seem to be the latter:

the grand bargain of agile development: engineers agree to adapt the product to the business’s constantly changing requirements but are not responsible for the quality of those business decisions.

With seasoned and effective Product Managers, maybe these sorts of checks and balances aren’t necessary, but the lean manufacturing principle of andon requires that workers maintain the vigilance to stop production when a problem with quality or process is discovered.  How far should andon be taken in software development?

These are the questions I have been asking myself of late, and though I don’t have any definitive answers, I look forward to exploring them with Tayloresque rigor and analysis in the workplace.

Patterns and predispositions

I have been working with Pd as the audio engine for my Android sequencer app.  Learning the language, I find my path to understanding it is a blend of hands-on experimentation and theory.  This blended approach is the result of my passive-aggressive tendencies, which produce a turbulent mix of theory and practical application.  In my youth I was of the “learn the theory, then start creating” school of thought, but the gradual slide into over-saturation of information has at times made me impatient and aggressive, seeking a quick payoff for a nominal investment of time.

How this relates to Pd is that instinctively I see it as an array of functionality more than a language, and not all of the constructs that exist in the visual programming environment are at the forefront of my decisions.  A prime example: I am at the point where I have an idea of the sort of features I require to accomplish the tasks in my next sprint, but to sketch this out I find myself wanting to go to a chart or mindmap, something at a macro level to define the flow and subsystem requirements.  Pd, though not an object-oriented programming language, has ways to serve this need.  I can create boxes and comments around elements, and I can define hierarchical patches simply by creating an object box that starts with pd and giving the sub-patch a name.

For now I think that I still see Pd as a toolbox, not a machine shop or a draft board.  To understand how to best interpret the language, I think it’s just going to take more time working with matters at hand, and reading documentation as I go along.

Git, Android, and Eclipse: build path issues resolved?

I recently started adding source revision control to my home project, and chose Git.  I created a repository in my Eclipse project folder, at first only adding files under the src tree.

However, after committing a change, the next time I went to build I ran into a couple of issues.

MyApp/gen already exists but is not a source folder. Convert to a source folder or rename it.

and, after clearing that,

Errors running builder ‘Android Resource Manager’

Checking the Java build path, I noticed that the default entry added by the Android project wizard is simply MyApp and everything below it (assets, src, gen, etc.).  My hunch was that the .git folder was getting pulled into this and the toolchain couldn’t handle it, but adding exclusions didn’t do the trick.  As mentioned in other places (though not directly related to Git), if you modify the build path to include just the src and gen folders specifically, you can clear the two problems above by deleting the gen folder and doing a clean build.


The upshot is that some files introduced by Git appear to break the standard resource builder, but fortunately there is a simple workaround.


Edit:

There is an issue with tumblr where I can’t answer a question on my own blog, so I’m just going to modify the original post.

As torus points out, it may be better to add the gen folder to the .gitignore list.  This is probably a cleaner solution, as autogenerated code is one of several types of files we would never want to track in the repository (along with local user preference files, intermediate build files, etc.).
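For anyone setting that up, a minimal .gitignore along those lines might look like the following (the entries beyond gen/ are my own suggestions for an Eclipse/ADT project, adjust to taste):

# autogenerated code from the Android resource compiler
gen/
# compiled output
bin/
# machine-specific SDK location written by the ADT plugin
local.properties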