Can you afford in-house custom software development?

I’ve spent most of the past 10 years as a CIO, CTO, or technical consultant in the Small to Medium Business (SMB) market. This experience has been eye-opening and has challenged my long-held biases on issues such as in-house software development versus commercial off-the-shelf (COTS) solutions. Note that I include well-supported open source products in the latter category.

My bias was that in-house software development was perfectly appropriate (and perhaps even an advantage) for small to medium businesses. After all, the SMB market usually didn’t suffer from the big bureaucratic SDLC that seemed to turn any development project into a three-year initiative, outdated before it ever reached production. These organizations could leverage the latest tools, frameworks, and platforms to build whiz-bang applications that would be light years ahead of larger competitors.

While to a certain extent this remains true, my personal experience has led me to a different conclusion. You see, in-house custom applications are almost always built to a set of specifications that match the requirements of the business at a point in time. While these applications may be state of the art in terms of user interface, programming models, and so on, they are rarely designed with the same level of abstraction that is typical of a COTS solution. Why? Economics. There is obvious pressure to include as many features as possible during the development process. Rarely is time allocated to build in the configuration capabilities needed to evolve the application seamlessly as the business changes dramatically.

This kind of capability takes significant investment, of a magnitude the SMB can rarely afford or justify. Consider the effort required to build a data-backed web form. Even with nifty component-based frameworks like JSP, Spring+JSTL, or Flex, developers are typically binding hardcoded object properties or fields to hand-designed UI layouts. Now consider how much more effort is required to build a layer of abstraction that allows forms to be generated or updated based upon metadata changes. Don’t forget the validation and form processing logic!
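
To make the gap concrete, here is a minimal sketch of what metadata-driven form generation entails. This is my own illustration in plain Java (in the spirit of the server-side stacks mentioned above), with hypothetical names and a deliberately simplified HTML rendering, not any particular framework’s API:

import java.util.List;

public class MetaForm {

    // One row of form metadata, e.g. loaded from a configuration table,
    // so the form can change without touching compiled code.
    public static class FieldDef {
        final String name;      // bound property name
        final String label;     // display label
        final boolean required; // drives generated validation

        FieldDef(String name, String label, boolean required) {
            this.name = name;
            this.label = label;
            this.required = required;
        }
    }

    // Render an HTML fragment from the metadata instead of a hand-built layout.
    // A real implementation would also dispatch on a field type (select lists,
    // dates, etc.) and emit the matching validation and processing logic.
    public static String render(List<FieldDef> fields) {
        StringBuilder html = new StringBuilder("<form method='post'>\n");
        for (FieldDef f : fields) {
            html.append("  <label>").append(f.label).append("</label>\n")
                .append("  <input type='text' name='").append(f.name)
                .append(f.required ? "' required='required'/>\n" : "'/>\n");
        }
        return html.append("</form>").toString();
    }
}

Multiply this by validation rules, persistence, and layout concerns, and the size of the investment becomes clear.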

The justification for a development effort has to consider both revenue and cost savings. Unless the SMB is willing to sell the product to potential competitors, it is difficult to rationalize the investment required to build this level of abstraction throughout the application. A good COTS product must have it, as it is the ONLY way the vendor can sell the software to more than one customer.

Since these features are rarely incorporated, the SMB commonly finds itself in a position of weakness after a few curves in the business plan. Incremental “quick fixes” become the norm until one day the application starts fighting against the business model. Changes start to feel “awkward.” Data elements appear in contexts that don’t seem to make sense. Code complexity spirals until everyone acknowledges that the application is crap and a new one has to be built. Of course, this usually occurs well before the payback is achieved.

This leads to an even greater risk to the SMB: the loss of a key employee. Because the SMB development staff is small or even outsourced, the ability to make changes reliably comes to rest in the hands of one or two surviving developers. Not only does this hamstring effectiveness, it also exposes the SMB to a great deal of business risk in the event of a departure, retirement, etc.

When does in-house development make sense for an SMB? Obviously, if the business model is so specific that no COTS product exists in the marketplace, the SMB has no choice but to turn to in-house custom development. Another reason is when a co-founder or key stakeholder is the primary architect and technical resource in the company. In this case, there MAY be a certain degree of confidence that continuity will exist at least through a liquidity event. If the key stakeholder is a passionate technician and keeps the software model in lock step with the inevitable changes in the business model, this strategy may end up becoming a real advantage.

It is always difficult to look at software license costs or maintenance fees and not be tempted to “build it yourself.” However, be careful to consider the risks this path will surface over the long run.

Application Layering to Improve Business Flexibility

Technical architects have long understood the benefits of architectural layering within an application: reduced coupling, increased modularity and maintainability, and so on. Typically, that layering stacks a presentation layer on top of business logic, which in turn sits on top of data access and integration layers.

However, as IT strategists we must also recognize the benefits of layering across applications. After all, most organizations (including Storeroom Solutions) rely upon a large number of applications to perform their responsibilities. Most of these applications were assembled over the years based upon the requirements and business strategy of the time. Rarely do enterprises have the opportunity to build everything at once (oh my, how I miss the dot-com days, when it seemed everything was green field). The same way layering within an application improves modularity and maintainability, layering of applications can increase business flexibility without spiraling increases in complexity and cost.

Say, for example, a service provider offers an application that it has developed over the years to perform procurement and inventory management functions for clients. It has proven to be a real competitive advantage, especially for those seeking a turn-key solution. The application has also proven to be relatively low maintenance and trouble free. Inventory management and procurement are bundled together in a single, tightly integrated application.

Alas, times change and the number of new clients seeking a turn-key solution starts to dwindle. Fortunately, the company has developed some real expertise in its procurement capabilities that is attractive to an entirely new segment of the market. This segment already owns an inventory management application but wishes to tap into the advanced procurement features offered in this application. The company is faced with a challenge. Because application layering wasn’t considered in the original design, separating the functionality is a problem. To overcome this, the company must aggressively refactor its procurement/purchasing functions to allow inputs from a source other than its embedded inventory application. In fact, it may be easier to rewrite the procurement application, leveraging the existing business rules. In the new topology, procurement stands alone behind a well-defined interface, serving both the embedded inventory application and a client’s own system.

What is the advantage of this approach? In this case, application layering allows for new business opportunities that would otherwise have been difficult to serve. Furthermore, by employing SOA concepts and maintaining a disciplined API in its procurement application, the company can keep maintenance manageable. After all, its internal inventory application can use the same API as the external inventory application to communicate!
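
Concretely, the disciplined API might be nothing more than a small service contract that both inventory applications program against. A hypothetical sketch (the names and operations are purely illustrative, not from any real product):

// Both the internal inventory application and an external client's system
// call this same contract, whether it is exposed as a web service or a
// library boundary.
public interface ProcurementService {

    enum OrderStatus { PENDING_APPROVAL, ORDERED, RECEIVED, CANCELLED }

    /** Submit a purchase requisition and return an order identifier. */
    String submitRequisition(String sku, int quantity, String requestedBy);

    /** Query the status of a previously submitted order. */
    OrderStatus getOrderStatus(String orderId);
}

As long as the contract holds, procurement no longer cares which inventory application is calling it.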

Downsides? It is easy to gloss over them, but they are rarely greater than the benefits of moving to this layered application approach. One practical downside is the complexity the new architecture can introduce for some users. For example, users accustomed to a single application with an identical look and feel may be exposed to a little more variation under the modular approach. However, even this can be managed through Single Sign-On technology and common libraries and standards shared between the teams responsible for maintaining their respective applications.

While this approach may seem to be a “no-brainer,” I continue to encounter architectures where little, if any, emphasis is placed on application layering. Hopefully, more architects and CIO/CTOs will start to place greater value on the benefits of this approach.

Contact Synchronization Solution?

So I stumbled onto Soocial while doing a search related to synchronizing contacts across multiple applications. Soocial appears to be a simple alternative to something like Plaxo, which is trying to expand into the whole social networking craze. I don’t need or want another social networking application. I use LinkedIn for my professional networking and will most likely add a Facebook account sometime this year for more social networking (is there a difference anymore?). I just want to synchronize my contacts and calendars across multiple applications/services as simply and unobtrusively as possible.

Here is what I am trying to do; I am sure I am not the only one out there trying to do the same thing. In our organization we use Exchange. This is the keeper of my office calendar, which syncs perfectly with my iPhone via ActiveSync. My wife and I also use our My Dog Boris Google Apps account for our family calendaring and personal email. She also uses Outlook/Exchange at her office. We both use the Google Calendar Sync plug-in for Outlook to synchronize our office calendars with dedicated calendars set up in Google Apps. On Google Apps, we can view each other’s office calendar (read only) as well as make entries on a third calendar that we’ve set up for family events. All of that works with few glitches. Google Calendar Sync runs in the background and rarely causes any problems (thanks, Google).

Of course, most of you know that I am a Mac guy and I do as much as I can (including all my development work) on the MacBook that usually sits next to my office PC. When Snow Leopard came out last year, Exchange support (via ActiveSync) was included in the operating system, allowing me to synchronize my mail, contacts, and calendars with Exchange. Mac OS X has also had support for syncing contacts and calendars with Google for some time. I thought my prayers had been answered. Well, hold your horses. You see, while there is support for multiple calendars in iCal and multiple accounts in Address Book, they remain completely isolated from each other. You cannot sync your Exchange contacts with the “On My Mac” Address Book account that syncs with Gmail. Hmmm.

The reason this isn’t an issue for calendaring is the Google Calendar Sync plug-in for Outlook. When it runs in the background in Windows, it is actually mapping from one domain (Exchange) to another (Gmail). The Google Sync tool doesn’t (yet) support contact syncing. There are other plug-ins available that support syncing contacts between Outlook and Gmail, but I have found them to be more trouble than they are worth. I tried both OggSync and gSyncit but ended up scrapping them, as they just seemed buggy to me. Plus, they weren’t free (not a good combination, IMHO).

Enter Soocial. While it is a little more intrusive than I would like (a third party having access to my contacts, plus another point of failure), it seemed really simple and wasn’t trying to get me to join another social networking cult. After setting up the free account (is there any other type on Soocial?), I added connectors to my Gmail account and downloaded the plug-in for Outlook. Note there are a bunch of warnings about this being in beta, but I went for it anyway. I will confirm that there are some issues that I hope will be worked out (read on). I didn’t need to add a connector for my MacBook Address Book, since I was already successfully syncing with Gmail as described above; Gmail would act as that conduit for me. After a little bit of trouble with the Outlook plug-in, I was able to get everything working… well, almost.

After an initial sync with Outlook, I was getting duplicates in my Outlook contacts and on my iPhone. However, my contacts on the Soocial website appeared to be very clean (no dupes). So I made a backup .vcf file with the tools available on the Soocial site (just to be safe). Then I deleted all my Outlook contacts (thereby removing them from Exchange and my iPhone). I also deleted all my Gmail contacts (after making another backup from the Gmail site). Next, I initiated a manual sync with Gmail, and after that appeared to successfully re-populate my Google contacts from my Soocial “master file,” I initiated a sync from the Soocial plug-in inside Outlook. After a lengthy process (mine took almost 30 minutes), my contacts re-appeared in Outlook with no duplicates! Now when I make a change to my contacts in (1) the Soocial website, (2) Gmail, (3) Outlook, (4) my iPhone, or (5) the Address Book on my Mac, THEY WILL ALL BE SYNCHRONIZED. Of course, there is some latency depending on the client you use to make the update. Exchange only gets notification of contact updates from Gmail or Soocial when Outlook is running and the plug-in syncs every so often, and vice versa. I think I can live with that.

All is good now in contact synchronization land (at least until Soocial crashes or runs out of money). Now if only I can get visibility into my family and wife’s calendars (through Google Apps) in Outlook and on my iPhone. I recall having that visibility when I was using gSyncit (it supports syncing multiple calendars between Google and Outlook). However, I just couldn’t live with the hassle and bugs in the application. Plus, I like the comfort of knowing that Google wrote the application that is trying to sync my calendars in Google.

Drools JBoss Rules 5.0 Developer’s Guide – Book Review

Drools has come a long way over the past five years. Mark Proctor and his team at JBoss have done an outstanding job bringing Drools up to enterprise grade and making this technology more accessible to the mainstream development community. When I was doing hard-core rules development projects using Blaze Advisor (now part of Fair Isaac) and ILOG JRules (now part of IBM), it always bothered me that this technology was way out of reach for most companies due to cost. It seemed ILOG and Blaze/Fair Isaac had settled into an oligopoly in which significant price competition was unofficially avoided. The market needed a high-quality, low-cost competitor to shake things up.

Unfortunately, the open source rules engine space was still very immature. Jess (which was technically not “open source”) was somewhat academic, with more of a command-line orientation and little, if any, tooling. Drools was promising, but it clearly lacked the features that would make it a realistic competitor to commercial solutions. That all started to change in 2005, when Drools was federated into JBoss.

Right around the time JBoss acquired the development roadmap, version 2.0 of Drools was released. This release dramatically improved performance, as it was based upon the popular Rete pattern-matching algorithm. Unfortunately, rules were still authored by hand in XML, which was a real deficiency compared to commercial offerings. I found this out first hand when I was doing a small consulting project for Jackson National Life Insurance and conducting impromptu developer training sessions.

Drools is now on version 5, and it has really come a long way. Drools has a solid underlying architecture, a clean API, an authoring platform, Eclipse-based tooling, rule flow support, decision tables, CEP (Complex Event Processing), and rule repository management. If you so desire, you can buy enterprise support from JBoss. I can honestly say that Drools is ready to compete with the big boys.

The presence of this book from Packt Publishing is a sign that Drools is gaining market share and awareness. The book is well timed and current (it is based upon Drools 5.0). There is another book on Drools from the same publisher, JBoss Drools Business Rules, that appears to be geared toward business analysts and/or rule authors. I have not read that book, but this one is clearly targeted at developers.

Author Michal Bali does an excellent job on this book, in my opinion. He covers most relevant topics in detail, with the exception of Drools Guvnor (a BRMS that is touched upon briefly in Chapter 10, Testing). In the early chapters, Michal covers some of the basics, including “why business rules” and the essentials of setup and creating your first rule and execution. My only complaint here is that not enough consideration is given to the key concepts behind rules engines: pattern matching, the agenda, refraction, and so on. While most of these topics are covered in spots throughout the remainder of the book (including Chapter 12, Performance), I think it would have been a good idea to lay out the concepts here along with their Drools API equivalents; perhaps a simple explanation of what is actually happening in Drools Expert when you “fire all rules.”
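
For context, here is my own minimal sketch (not from the book) of that cycle against the Drools 5.0 API; the rules.drl resource and the Account fact class are hypothetical stand-ins:

import org.drools.KnowledgeBase;
import org.drools.KnowledgeBaseFactory;
import org.drools.builder.KnowledgeBuilder;
import org.drools.builder.KnowledgeBuilderFactory;
import org.drools.builder.ResourceType;
import org.drools.io.ResourceFactory;
import org.drools.runtime.StatefulKnowledgeSession;

public class FireAllRulesExample {

    // Hypothetical fact class the rules would match against.
    public static class Account {
        private final String id;
        private final long balance;
        public Account(String id, long balance) { this.id = id; this.balance = balance; }
        public String getId() { return id; }
        public long getBalance() { return balance; }
    }

    public static void main(String[] args) {
        // Compile the rule source (a hypothetical rules.drl on the classpath).
        KnowledgeBuilder kbuilder = KnowledgeBuilderFactory.newKnowledgeBuilder();
        kbuilder.add(ResourceFactory.newClassPathResource("rules.drl"), ResourceType.DRL);
        if (kbuilder.hasErrors()) {
            throw new IllegalStateException(kbuilder.getErrors().toString());
        }

        // Assemble a knowledge base and open a stateful working-memory session.
        KnowledgeBase kbase = KnowledgeBaseFactory.newKnowledgeBase();
        kbase.addKnowledgePackages(kbuilder.getKnowledgePackages());
        StatefulKnowledgeSession ksession = kbase.newStatefulKnowledgeSession();
        try {
            // Inserting a fact pattern-matches it against every rule's conditions;
            // each complete match places an activation on the agenda.
            ksession.insert(new Account("A-100", 5000));
            // fireAllRules() pops activations off the agenda (honoring salience and
            // the conflict-resolution strategy) and executes their consequences.
            ksession.fireAllRules();
        } finally {
            ksession.dispose();
        }
    }
}

Rough as it is, walking through insert, match, agenda, and fire once would have given newcomers a mental model to carry through the rest of the book.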

Over the next couple of chapters, Michal goes into further detail by developing use cases for a banking domain and for data transformation. The banking domain setup is also extended later in the book to introduce more complicated concepts. I have to admit that I breezed over the data transformation chapter, as I am used to other, simpler approaches to data transformation, usually leveraging purpose-built tools. The banking domain model example was pretty good, although I don’t know that I would have modeled the problem in quite the same way. The author’s approach drove me to think too much about the objects involved in the solution rather than focusing on the point of the book: the interaction with Drools. This was a minor issue, and ultimately Drools was introduced adequately.

Next, Michal covers the Domain Specific Language capabilities and Decision Tables, and introduces Rule Flow. I found it interesting that Michal chose to introduce Decision Tables and Rule Flow in the same chapter as DSLs; I thought DSLs could have been covered in their own chapter, and the same could be said for Decision Tables. No big deal; everything was explained clearly. In particular, I thought the Decision Table coverage was solid. It cleared up some confusion I had in a couple of areas.

As we get into the middle chapters, things become more advanced. Logical assertions and stateful rule sessions are covered, along with serialization techniques. Many readers will not have to deal with difficult serialization issues until scalability becomes a concern, so this could have been deferred to a later chapter on performance tuning and advanced topics. Chapter 7 covers Drools Fusion, the temporal and event-based (CEP) capabilities of Drools. Let me warn you: at this point the content starts to get a little “heady,” and many readers will get lost about now. I wouldn’t let it bog you down, though. Rules engines can be very valuable at isolating rules without ever getting into CEP (for most businesses).

After more detailed coverage of Ruleflow in Chapter 8, a sample application is presented in Chapter 9. Kudos for choosing common platforms (Spring) and persistence APIs (JPA); frankly, this is what most contemporary Java enterprise application developers will be dealing with anyway. I also applaud Michal’s reuse of previous examples to keep from introducing yet another domain problem. In this way, the chapter can focus on incorporating the Drools technologies into a conventional web application. I didn’t attempt to run all the code, but it did look thoroughly explained.

Chapters 10, 11, and 12 go into testing, further integration with Spring and build tools like Ant, and a deeper dive into the rule engine (for performance tuning and the like). In the last chapter, there is a much deeper explanation of how rules engines work. As I said earlier, a simplified introduction could have been incorporated early in the book for context; that would have left Chapter 12 free to focus strictly on performance tuning and higher-end concepts. Nonetheless, Michal clearly knows what he is talking about in these chapters and provides a solid reference for issues that may come up in production deployments.

Overall, author Michal Bali is to be commended for producing a much-needed, quality book on Drools. I believe it will help contribute to greater acceptance of Drools and rules engines in general. But be warned: rules engines are not mainstream for a reason. The concepts can be difficult to absorb, as most people (including developers) don’t “think like a rules engine.” For some, this creates a blockage that will never be overcome, and the “black box” nature of rule execution is too much for some organizations to handle. However, there is no doubt that certain problems can only be reasonably solved and managed with a rules engine. For those challenged with these problems, this book will help unlock the secrets of using Drools. RECOMMENDED.

Using the jQuery ‘data’ feature to detect form changes

You may be familiar with using JavaScript events such as ‘blur’ or ‘keyup’ to fire handler methods or even Ajax calls to update UI elements. One of the challenges I faced recently was detecting changes in a web form so that I could prevent the user from navigating away from the page without first checking whether they want to save their changes. This is no big deal in a desktop app but a bit more challenging in a browser. Thankfully, jQuery comes to the rescue again, this time with the help of the ‘data’ feature. Here is how it works.

Let’s say I have a form with an id of ‘my_form’ and a number of input fields, including select lists, text inputs, and textarea tags. By simply assigning a class of ‘editable’ (you can use whatever name you want), I can detect when the form has changed and enable the save (submit) button accordingly. The magic happens in the standard jQuery document-ready function. Here is the entire JavaScript listing:

var formChanged = false;

$(document).ready(function() {
     // Remember each editable field's starting value using jQuery's 'data' API.
     var editableFields = $('#my_form input[type=text].editable, #my_form textarea.editable');
     editableFields.each(function () {
          $(this).data('initial_value', $(this).val());
     });

     // On every keystroke, compare the current value against the stored one.
     editableFields.keyup(function() {
          if ($(this).val() !== $(this).data('initial_value')) {
               handleFormChanged();
          }
     });

     // Catch select/checkbox changes and pasted text as well.
     $('#my_form .editable').bind('change paste', function() {
          handleFormChanged();
     });

     // Ask for confirmation before following any marked navigation link.
     $('.navigation_link').bind('click', function () {
          return confirmNavigation();
     });
});

function handleFormChanged() {
     // Enable the submit button and remember that the form is dirty.
     $('#save_or_update').attr('disabled', false);
     formChanged = true;
}

function confirmNavigation() {
     if (formChanged) {
          return confirm('Are you sure? Your changes will be lost!');
     } else {
          return true;
     }
}

What is going on here? First, a global variable, ‘formChanged’, is set to false when the page is loaded. In the jQuery document-ready function, each text and textarea form element with a class of ‘editable’ is collected, and the jQuery ‘data’ property (keyed with ‘initial_value’) is set to the value that was populated upon form load (likely by your favorite server-side framework). This is a HUGE feature that jQuery added; it allows a ton of flexibility, as we will see.

Next, I bind the keyup event to examine the contents of the text or textarea input and compare it against its respective ‘initial_value’. If it is different, I call a simple ‘handleFormChanged()’ function to enable the submit button (id = ‘save_or_update’). Binding the keyup event in this way means I don’t have to wait for the blur event to check for changes (blur is what triggers the normal ‘change’ event in jQuery). However, I also bind the ‘change’ and ‘paste’ events to any editable elements, as the user may use cut and paste to change the text or textarea values (here I simply assume a paste always changes the value and don’t feel a need to check). The change event also picks up any change to a select list or a checkbox. Of course, you should confirm that all your UI widgets are adequately covered by the various event handlers in jQuery.

The next jQuery bind is to any clickable component classed with ‘navigation_link’ (again, my arbitrary terminology). I added this class to the submit button as well as all the links in the navigation. You could simply use an ‘a’ selector to catch everything, but I wanted precise control over which links triggered the action. The ‘confirmNavigation()’ function simply examines the value of the formChanged variable to decide whether to confirm that the user wishes to navigate away from the form page.

Users really love the way this helps prevent losing changes by accident. What is even nicer is that this code could live in a JavaScript include and provide the functionality throughout your application. Perhaps you could simply assign a class attribute such as ‘checked_form’ to the form and adjust your jQuery selectors accordingly, which would even keep you from having to assign an ‘editable’ class to each of the input elements. Then just make sure you mark your navigation links and/or buttons with a class attribute such as ‘navigation_link’, or if you want everything checked, adjust your selectors to pick up every anchor tag and every button.

jQuery is really a huge time saver.

Capistrano Bliss

I’ve recently had the opportunity to build an application for a startup using Rails. While this certainly wasn’t my first experience with Rails, it would be my first exposure to administering a Rails application in a soon-to-be production environment. Since I am inherently lazy (like most techies), I wanted to make sure I had an automated process for pushing out changes and re-deploying the application. After all, startups are notorious for rapid-fire modifications based upon early feedback loops. This seemed like a great opportunity to try out Capistrano.

Capistrano is a deployment automation tool, similar in spirit to Ant. However, while Ant doesn’t make many assumptions about what (or how) you’re building and deploying, Capistrano makes a lot of assumptions (sound familiar?). As a result, Capistrano can quickly become the centerpiece of your deployment process. Capistrano leverages SSH to execute remote shell commands on one or more of your deployment servers: checking out code from your SCM (usually SVN or git), managing a directory structure of releases, and creating symbolic links to your current release. It will even help you remotely start and stop your app and erect a maintenance page! All of this is accomplished via Capistrano tasks and recipes that are scoped by a namespace to avoid conflicts. You can easily create your own recipes, but I didn’t have the need in my particular situation. The out-of-the-box deploy script generated by Capistrano was more than adequate for me, although I did make some tweaks (see below) because I was using Passenger (i.e. mod_rails) in the Apache configuration on the deployment server. Here is what I had to do to get my new application up and running with Capistrano:

Installation

Installation was a snap.  Capistrano is ONLY installed on your development machine.  It is a simple gem install:


$ sudo gem install capistrano
...(output omitted)...
Successfully installed capistrano-2.5.5

Setting up my project

Next, I navigated to my Rails project root and executed the following command:

$ capify .
[add] writing './Capfile'
[add] writing './config/deploy.rb'
[done] capified!

I’ve been capified! Well that wasn’t too painful.

Tweaking for environment

After capification (?), I updated the ./config/deploy.rb file generated by Capistrano. Mine looked similar to the following (some values changed to protect the innocent). Note that I also added some comment dividers to help organize the information; they are not required:

#############################################################
#	Application
#############################################################

set :application, "myrailsapp"
set :deploy_to, "/var/www/#{application}"
set :rails_env, "production"

#############################################################
#	Settings
#############################################################

default_run_options[:pty] = true
set :use_sudo, true

#############################################################
#	Servers
#############################################################

set :user, "some_user_name"
set :domain, "yourdomain.com"
server domain, :app, :web
role :db, domain, :primary => true

#############################################################
#	Subversion
#############################################################

set :repository,  "svn+ssh://www.yourdomain.com/repos/myrailsapp/trunk"
set :svn_username, "svn_user_name"
set :svn_password, "svn_password"
set :checkout, "export"

#############################################################
# Passenger
#############################################################

namespace :deploy do
  task :start, :roles => :app do
    run "touch #{current_release}/tmp/restart.txt"
  end

  task :stop, :roles => :app do
    # Do nothing.
  end

  desc "Restart Application"
  task :restart, :roles => :app do
    run "touch #{current_release}/tmp/restart.txt"
  end
end

Everything is pretty self-explanatory. However, at the risk of pointing out the obvious: in the application-related properties, set the name of the Rails app. This name can be anything, but it is used in the path to the deployment location on the next line, so you may want to put some thought into it. I set the Rails environment to ‘production’. I didn’t mess with the settings-related properties, as I knew I would need to execute commands on the remote server via sudo. The server-related properties include the user name that will attempt the SSH session; domain can be something that resolves via DNS or an IP address, of course. I also needed to update the SVN properties with the path to my repository and my username/password.

For the most part, everything up to this point was out of the box; all I needed to do was adjust some user names, passwords, and server paths. The last section (recipe?) I added to accommodate the fact that I was using Passenger (i.e. mod_rails) on my Apache-based server as the glue between Apache and Ruby/Rails. This is really simple. The namespace used by Capistrano is ‘deploy’, and all I am doing is overriding the default implementation of the :start, :stop, and :restart tasks. In the case of Passenger, all that is necessary is to “touch” a special file called restart.txt in the {RAILS_ROOT}/tmp directory. Passenger monitors this file and will reload the application when it sees that the file has been modified.

That’s just about it. Well, almost. You’ll first want to perform the task ‘cap deploy:setup’ from your RAILS_ROOT directory (again, on your development machine). This sets up a small directory structure under your target deployment location (adding the ‘releases’ and ‘shared’ directories). I had a small problem with permissions on the server, so I had to SSH in manually and update the file permissions on my target directory (i.e. ‘/var/www/myrailsapp’) to allow writing. That was the only glitch for me, and it only had to be done once.

Deployment

Next, I executed the following command:

$ cap deploy:cold
...(output omitted) [be prepared to respond to the SSH password and sudo password prompts]...

This launched a remote SSH session, checked out the latest version from SVN, built a new timestamped directory under the ‘releases’ subdirectory, and created a symbolic link called ‘current’ pointing to that release. This way, the DocumentRoot setting in my Apache configuration could simply refer to the ‘/var/www/myrailsapp/current/public’ directory (don’t forget that you need to include the public directory in the Apache config). Future releases could be done with:

$ cap deploy

…from the RAILS_ROOT directory of the development machine, and voila: automation bliss. There are loads of out-of-the-box tasks (execute ‘cap --tasks’ to see the full list) to help you in the deployment and management process. A couple that I have been using a lot are ‘cap deploy:web:disable’ and ‘cap deploy:web:enable’. These have Capistrano put up a maintenance page when you ‘disable’ your website for maintenance and remove it when the site is back online and ‘enabled’. However, be sure to update your Apache mod_rewrite rules to support this. Here is what I added to my Apache virtual host configuration:

# While the maintenance page exists, serve it for every request
# except stylesheets, images, and the maintenance page itself.
RewriteCond %{REQUEST_URI} !\.(css|jpg|png)$
RewriteCond %{DOCUMENT_ROOT}/system/maintenance.html -f
RewriteCond %{SCRIPT_FILENAME} !maintenance.html
RewriteRule ^.*$ /system/maintenance.html [L]

My experience with Capistrano was very positive, thanks to the power of opinionated software. Your mileage may vary.

Focus and Discipline == Better Chance of Success

Throughout my career I have had the opportunity to work with a number of companies and clients. Early on, I earned my stripes in larger organizations, primarily as a profit center manager in the insurance industry. This provided a perspective on the challenges faced by big companies trying to shave a few points off their expense ratios or steal a fraction of a percent of market share from a competitor. Over time, especially as I began to do more consulting work, I experienced life in smaller companies, including numerous startups. This is an entirely different world, where survival is often the primary short-term objective.

Regardless of the size of the organization, I have observed two qualities that significantly increase the likelihood of success: focus and discipline. Others may say “operational excellence” or “people” are the biggest factors in driving success; however, my experience suggests these attributes are by-products of focus and discipline or, at the least, difficult to achieve when focus is lacking.

Some of the basics I learned at Columbia Business School related to the fundamental discipline of market strategy, often hammered home via the writings of Harvard’s Michael Porter. It goes something like this:

  1. Identify (and validate) a market need for a product or service
  2. Assemble a set of products and services that specifically address these needs
  3. Attack the market with unwavering discipline and focus
  4. After you’ve had a chance to assess the prospects for long term success, re-evaluate and repeat

Unfortunately, most companies skip step 3 or, at the least, compress the cycle into an [almost] daily activity. Usually, the temptation of a prospect with deep pockets or some other distraction diverts the focus of the organization while it branches out into a self-rationalized engagement. Meanwhile, resources are re-directed away from the original focus, watering down the organization’s ability to deliver best-in-class products and services to its segment. Saying “no” takes discipline, and far too many CEOs lack this quality in today’s short-term world.

Of course, at some point you have to execute step 4 above and ask if you are on the right path. However, this should be a rigorous process, executed with the understanding that a change in course requires a significant investment over an extended period of time. This evaluation shouldn’t occur at weekly staff meetings or in response to a sales prospect that appears on the radar screen and wants something “a little different” than what you offer. If the sales prospect is too large to be ignored, then the organization should decide at that time whether this new offering is the better way forward and re-channel its energy in that direction. Again, if this is done too often, employees will become confused and frustrated, and you’ll have little chance of actually developing any intellectual property or expertise.

When we started Kemper Auto and Home in 1996, as a direct marketing subsidiary of Kemper Insurance, we were certain that direct mail was the way to go. After all, that is what we knew, and it seemed like a reasonable model if we could keep expenses down. Unfortunately, we learned within the first few months that times were changing and our assumptions regarding response rates were way off. We executed step 4 at that time and decided to focus on the internet. We invested heavily in the web site: hitting remote rating servers, real-time communication with underwriting agencies, etc. Within two years we became one of the leading providers of auto insurance on the web (through relationships with portals such as InsWeb). We did a nice job of adjusting course and focusing our energy.

However, we didn’t follow this formula about two years later. We allowed the prospect of writing a big chunk of business through an airline carrier affinity relationship to divert our attention. We weren’t in the affinity insurance business (a completely different underwriting and pricing model), and we got burned. We lost focus. It almost brought the company to its knees. Fortunately, we were able to scramble out of the relationship and re-direct the resources back to the internet business before it was too late. But it set us way back in our plan and schedule.

This experience had a profound reinforcing impact on my academic learning. Unfortunately, I see the same mistake occur over and over.

Learning Rails – Book Review

Okay, so I have to admit I am not entirely a Rails noob. I have built a couple of applications using Rails and continue to maintain an interest in its emergence in the enterprise. However, since Rails 2.x came out, I’ve fallen a bit behind and needed a quick refresher. Learning Rails, by Simon St. Laurent and Edd Dumbill, seemed to be a good candidate. It didn’t disappoint.

Learning Rails is an excellent book for quickly picking up the nuances of Rails, especially if you are looking for something that is an easy read and is based upon Rails 2.x+. The chapters are relatively short, and the authors do an excellent job of knowing how far to push a concept before deferring (referring) to alternate texts for more information. Nonetheless, I picked up a few new tidbits along the way, including some practical suggestions for dealing with nested resources and a couple of ideas for routing. However, make no mistake about it: this is not an advanced Rails text. For more detail, I would look into Obie Fernandez’s excellent The Rails Way or the myriad books on specific topics such as Ajax, Active Record, etc.

What could have been better? I thought the chapter on Ajax was a bit weak; the example code didn’t seem as elegant as you might expect from a Rails purist. Also, there was not much coverage of the concept of layouts and how the locations of the various view pages relate to MVC routing, etc. I suspect even the most inexperienced Rails developer will quickly embrace the idea of layouts and partials (which were covered to some extent in the forms section). Finally, there was little mention of some of the newer features, such as localization (probably more of an enterprise feature anyway).

One thing the authors caution is that the book is written from the “outside in.” In the preface, this is clarified to mean that the book presents Rails from more of a traditional web developer’s perspective rather than, say, an enterprise application developer’s perspective, and that certain readers may be turned off by this approach. I suspect I fall into the enterprise developer category, since I do think in terms of MVC architectures and the like; my background is NOT in building simple scripted PHP web apps for low-budget clients. However, I was NOT turned off by the approach. I thought everything was explained nicely and accurately, regardless of the audience. So for you enterprise Java purists out there, don’t be put off by the subtitle.

This book is highly recommended for those just starting out in Rails.

Pro Flex on Spring – Book Review

I just finished reading Pro Flex on Spring by Chris Giametta (Apress). Unfortunately, I can’t bring myself to recommend it. I really wanted to, as I am getting more into Flex and I have been a Spring evangelist since before it was called Spring. My primary concerns center on the book’s organization and its approach to the example application; I believe these things get in the way of presenting the core concepts.

The book starts off with the obligatory chapter on RIA and an introduction to Flex and Spring. This is likely necessary to provide context. Chris then proceeds to spend a chapter developing a lengthy project plan for the application to be built throughout the book, including project objectives, wireframes, etc. I didn’t mind the chapter, except that I was ready to launch into construction at this point. Hold your horses. First there is a chapter exploring the various tools and means for developing Flex/Spring apps. At the end of the chapter there is a rather awkward lesson on creating three different Eclipse projects to hold the client, server (Spring/Hibernate), and library projects. They are rather oddly named, but that is a personal preference. Okay, now we should be ready to start.

Not quite. There are another six or seven chapters to endure first, including another introduction to Flex applications (am I supposed to be adding these snippets/examples to the projects I just created?). The Flex deep dive goes off on some tangents that I believe would be best left for later. For example, diving into graphic manipulation (Matrix, Graphics, etc.) and skinning at this point is a little premature. I would have focused on the basics, perhaps demonstrating the technique of custom components by overriding a behavior or creating a component that combines a couple of UI widgets. So I am typing along, not knowing whether this stuff is going to be used down the line, or where the heck I should be saving it.

Next is a chapter on Spring. Again, nothing wrong with Chris’s description. Unfortunately, he starts creating sample applications again. I am typing away, thinking this somehow all ties together and that if I quit now, I’ll only have to come back later. I don’t have any issues with his use of Spring in the examples; it’s just hard to cover the concepts in such limited space, much less try to create a sample multi-tiered application within the same chapter. Again, I would have stuck to explaining the concepts and left the detailed code for the sample application (which I presume we will get to at some point).

Moving right along, we dive into Flex and Spring integration architecture. Now we are getting somewhere. This is probably one of the more valuable chapters in the book, although I probably could have done without another example (an RSS feed reader). Chris does a good job explaining the various means of interacting with the server side. He also includes a nice description of the Spring BlazeDS Integration project, which is designed to help simplify the configuration.

Chris then digs into Cairngorm and PureMVC, taking the opportunity to introduce yet another sample application: the Contact Manager. Again, I love examples. However, introducing so many unrelated sample applications throughout the book draws attention away from the overall concepts, and too much space is wasted describing the code required to make each example work. After all, once you dive into an example application, you have to provide the complete code base or you create other frustrations.

Chapter 8 is on data persistence, and Chris feels obligated to describe (with examples) each technology that may be used to access data (iBATIS, Spring JDBC, Hibernate, Hibernate with annotations, etc.).

Chapter 9 is on security, and this one was pretty nice, as it focused on Flex integration with Spring Security. However, it did not mention the Spring BlazeDS Integration project and how it helps simplify the communication between Flex and Spring Security. Also, I don’t believe he needed to get as detailed on the implementation of the authentication example; there are simple pre-defined beans in the Spring Security library for authenticating against an in-memory or persistent user directory, and perhaps those would have been adequate to explain the concepts.

In Chapter 10 we finally get around to the sample application set up in Chapter 2. The remaining chapters deal with building out various aspects of the example, including the database model, the Spring service tier, the presentation, etc. I would say that Chris’s strength appears to be in the presentation tier with Flex/ActionScript. I’m not crazy about some of his design patterns/strategies on the server side. In particular, I have to admit that I never quite embraced his “object identifier” concept and the use of an “assocobjectID” field on every entity table; it seemed obtuse and non-descriptive, and I’ve never seen it used in practice. For green-field projects I much prefer to stay with conventions such as “id” or “xxxx_id” for the primary key and “xxxx_id” for the foreign key fields in many-to-one relationships. Anyway, I am nitpicking here.

The bottom line is that I am happy to start seeing books on Flex as an enterprise application technology. Unfortunately, this one makes it a little hard to get to the meat of the content. If you are thirsting for any content on the subject, pick up a copy. However, I would focus on reading through it rather than trying to work the examples; you may get frustrated.

Looking Forward

Yesterday we learned that U.S. GDP shrank at a rate of 6.1% in the first quarter of 2009. This follows a 6.3% decline in the fourth quarter of 2008 and marks the first time since the 1974-75 recession that we have seen three consecutive quarters of negative GDP growth. Things seem bad all around.

As leaders during this turbulent time, it is important to remain focused on the end goal and not become distracted by the “radiation” of the economic slowdown. Sure, dips in revenue make it difficult to imagine launching that new infrastructure project or hiring that new employee. However, while most are hunkered down or retreating in a futile attempt to meet short-term budget or executive compensation goals, long-range thinkers see this as an opportunity.

Rather than running for the hills, we should be using this opportunity to cut the fat from the organization and reallocate those resources to the projects and personnel that will prepare the organization to capitalize on the eventual turn in the market. It will happen, trust me. Someone else in your industry will be thinking this way; if it isn’t you, you’ll become a dinosaur, or at least be seriously left behind.

As technology executives, we shouldn’t be afraid to let our CEOs know when we think we are looking in the rear-view mirror rather than through the binoculars at the distant horizon. Be proactive in suggesting appropriate savings opportunities while simultaneously revisiting the CBA (cost-benefit analysis) for the most promising projects. Be a champion for these initiatives.
