How to Predict the Candidate’s Success on the Job

Wouldn’t it be wonderful if success on the job could be predicted and bad hires could be avoided altogether? The truth is, there are some things that hiring managers can do to forecast a candidate’s future success, but it takes some planning to know what to look for during the interview. Let’s begin with the job analysis.

1.  Conduct a job analysis.

The purpose of a job analysis is to gather and organize information about the job that you are preparing to fill. The information you gather will be used to prepare a detailed job description, determine pay rate, and help you decide on training needs, as well as summarize expectations for successfully handling the job. As you prepare to conduct a formal job analysis you will want to be sure to avoid some of the major mistakes along the way that include:

•    Asking or requiring an untrained person to conduct the analysis
•    Not allowing enough time to complete the task
•    Using an unreliable process to collect data
•    Failing to involve the incumbent in providing information
•    Not getting upper management’s support in committing resources to complete the analysis

2.  Prepare a list of job-related criteria.

After you have completed the job analysis, you will be able to prepare a list of job-related criteria. These are the skills and qualities that a successful candidate must possess in order to meet the specific requirements of the job. This list should include “technical” requirements as well as non-cognitive criteria such as leadership capabilities, interpersonal skills, objectivity in decision-making, and problem-solving skills. For non-exempt positions you may have criteria such as computer skills, people skills, organized thinking, and the ability to get along well with others. Once you have identified the criteria against which you will measure candidates, you are ready for the next step.

3.  Develop questions that are related to the job criteria.

As you decide which questions you will be asking for each position, think in terms of which questions will elicit the information you need as related to each criterion. What you hear will not only help you make a sound and defensible hiring decision, but it will also help you predict future success on the job.

4.  Develop a simple candidate rating form so that you can quickly and easily evaluate each candidate against your specific job criteria. The form should include:

•    The job title
•    The candidate’s name
•    Date of the interview
•    A list of the job-related criteria
•    The specific things you are looking for as you evaluate the candidate against the job criteria. For example, if creativity is one of your criteria, you may be looking for examples of creative endeavors on the part of the candidate. You will probe for specific examples to support his or her claim of being a creative individual.
•    Room for recording notes
•    A rating system which can be as simple as P for a positive rating, A for an average rating, and N for a negative rating. Or, you might assign point values to candidates as you rate them against each criterion. Your rating scale could be one to three or one to five.

5.  Don’t forget that past behavior is a predictor of future behavior. The more recent the past, the more reliable the information.

This point is critical for predicting success on the job. Therefore, you will be asking for examples from the candidates’ recent past that show you that they can do what they claim. Many hiring managers find themselves short on information about the candidate because they don’t ask behavior-based questions that delve into the past. Instead, they ask hypothetical questions which, in effect, invite candidates to fabricate their answers. Some hiring managers limit their questioning to what’s on the surface and never attempt to probe deeper for the reasons behind the candidates’ decisions and actions.

6.  Keep in mind that the behavior that you are observing during the interview represents the candidate at his or her best.

If you are not happy with what you are seeing and hearing, it’s not going to get any better. A candidate who is dressed inappropriately or who has a difficult time answering your questions should raise a “red flag” in your mind. As you evaluate each candidate, recognize that he or she is putting his or her best foot forward. If that effort is not good enough, keep looking until you are satisfied that the individual you choose is the best person for the job.

About the Author: 

Carol Hacker is the former Director of Human Resources for the North American Division of a European manufacturing company, Employee Relations Manager for the Miller Brewing Company, and County Office Director for the US Department of Labor. Headquartered in Atlanta, GA, Carol has been the President and CEO of Hacker & Associates since January 1989. She specializes in teaching managers, supervisors, team leaders, HR professionals, and executives how to meet the leadership challenge. Carol is the author of over 400 published articles and 14 books, including the bestseller Hiring Top Performers: 350 Great Interview Questions for People Who Need People. She earned her BS and MS with honors from the University of Wisconsin. She can be reached at 770-410-0517.

Posted in: 
Hiring Manager

Looking for an Innovative SPA Framework? Meet Durandal

With so many SPA-style frameworks emerging in the last couple of years, you may wonder what makes Durandal unique or different. Why would you want to choose it for your next project? I’d like to share a few key points to that end.

A Natural Alliance
Rather than re-invent the wheel, Durandal starts by combining and building on top of three existing libraries. Each of these libraries is very mature, has a large, active community and was strategically chosen because it meets one specific need very well. First, we start with jQuery. In some ways you can think of it as a better DOM or the “missing JavaScript core library”. Next, we add in RequireJS, which gives us rich, declarative JavaScript modules. Finally, Knockout provides us with powerful two-way data-binding.

With these three libraries as a foundation, Durandal constructs a thin integration layer and transforms them into a powerful SPA framework. In addition to strategically connecting these libraries, Durandal adds new functionality: a client-side router, rich view composition, screen state management, pub/sub, simple conventions, modals/message boxes and more.

Maximum Skill Reuse
If you’ve worked with any of the three libraries listed above, then you already have skills you can leverage on a Durandal project. You already know part of the framework. This makes it relatively easy for existing web developers to get started. The time you’ve invested learning the three core libraries on prior traditional web projects translates directly to Durandal apps. Choosing Durandal is almost a “no-brainer”.

Suppose you’ve never worked with jQuery, RequireJS or Knockout. Is Durandal worth your time to learn? Why not pick a different SPA framework that is “all inclusive”? The simple answer is YES, it is well worth your time. Here’s the longer answer: Almost every major SPA framework has a way to work with the DOM, create modules and declare data bindings. No matter what framework you pick, you are going to have to make the effort to learn these things both conceptually and in terms of the chosen library’s API. The difference with Durandal is that we get those capabilities from other libraries which were originally designed for traditional web development. That means that when you learn those things in the context of Durandal, you are also learning things you can directly apply to traditional web development too. It’s a huge return on investment.

Powerful Modularization
All Durandal code is modularized based on the AMD standard, which is supported by our use of RequireJS. To my knowledge, Durandal is the only SPA framework based on this de facto standard for JavaScript modularization. As a result, Durandal’s capabilities in this department far outshine those of other frameworks. Because we use the AMD standard, there is no presentation-framework-specific code required to create modules. This not only keeps your own code clean, but also makes it more portable: you can write a Durandal module for use both in a SPA and in a traditional web application where Durandal itself is not even used.
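To make that portability concrete, here’s a minimal sketch of a hypothetical AMD-style module. The one-line shim stands in for RequireJS’s `define` so the sketch runs on its own; in a real Durandal app (or any RequireJS page) the loader supplies `define` itself, and the module name and contents are examples only.

```javascript
// Stand-in for RequireJS's `define` so this sketch runs standalone.
function define(deps, factory) { return factory(); }

// Note: nothing below references Durandal. The same module works in a SPA
// or in a traditional web application.
var Customer = define([], function () {
  function Customer(name) {
    this.name = name;
  }
  Customer.prototype.greeting = function () {
    return 'Hello, ' + this.name;
  };
  return Customer;
});

console.log(new Customer('Ada').greeting()); // "Hello, Ada"
```

Because the factory returns a plain constructor, the module carries no framework baggage and can be reused or unit-tested anywhere.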

But that’s only the beginning of the advantages…

The AMD spec is pluggable via the notion of loader plugins. These plugins can acquire and transform any resource in any way and supply it as a module dependency. Want to load some JSON data? You can do that declaratively. Want to load CoffeeScript? You can use a loader to compile it on the fly if you want. You can write loaders to do just about anything. Most of Durandal’s view engine is in fact supplied by the text loader plugin. What’s the really awesome thing about loaders? They can execute code both at runtime and at build time. This means you can have a loader optimize content as part of a build process, but not have to change your application code at all. It’s extremely powerful.
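Here’s a rough sketch of what declarative loading via loader plugins looks like. The module, file names, and config shape are hypothetical; `text!` is the plugin Durandal’s view engine builds on, and `json!` is a community plugin. The shim at the top stands in for RequireJS so the sketch runs standalone.

```javascript
// Stand-in for RequireJS: pretend the plugins already fetched and
// transformed the requested resources before invoking the factory.
function define(deps, factory) {
  return factory('<div>widget markup</div>', { theme: 'dark' });
}

// Hypothetical module: the text! and json! prefixes invoke loader plugins,
// which acquire each resource and hand it in as a plain dependency.
var widget = define(['text!views/widget.html', 'json!config/widget.json'],
  function (markup, config) {
    // The factory receives plain values; no XHR code appears here at all.
    return { markup: markup, theme: config.theme };
  });

console.log(widget.theme); // "dark"
```

Notice that the consuming module has no idea how the markup or JSON arrived, which is exactly what lets a build step swap in pre-optimized content later without touching application code.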

Speaking of the build process: RequireJS supplies a build tool called r.js. It essentially takes a list of modules as inputs and spits out one or more optimized files. For a small or medium-sized application, you might choose to build to one optimized file. For larger apps, you might choose to build a shell with each feature area optimized into its own file…and download features on the fly based on live user usage. That and many more deployment scenarios are what this tool was designed for. On top of that, there are many higher-level build tools that work with RequireJS. You can use Grunt, Mimosa or even Durandal’s Weyland to automate the process.
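As a rough illustration, an r.js build profile is just a plain configuration object. The paths and module name below are hypothetical; consult the r.js documentation for the full option set.

```javascript
// A minimal, hypothetical r.js build profile (e.g. saved as build.js).
var buildConfig = {
  baseUrl: 'app',            // where the source modules live
  name: 'main',              // the entry module to trace dependencies from
  out: 'dist/main-built.js', // the single optimized output file
  optimize: 'uglify2'        // minify the combined output
};
```

You would typically run a profile like this with `node r.js -o build.js`, producing one optimized file from the whole module graph.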

Language support for AMD is also great. If you are using CoffeeScript, you’ll quickly notice that its function and block syntax makes defining modules really clean. But an even better experience is had when using TypeScript because it has direct language support for the concept of modules. When you compile your TypeScript code, you just tell it that you want AMD modules and it will spit out JavaScript ready to work with RequireJS. But that’s not the end of it. The work on TypeScript modules, RequireJS and the AMD spec has been influencing the next version of JavaScript directly. RequireJS is already planning to provide a direct migration path for code written with it today to the native module implementation of the JavaScript of the future.

Beyond Unobtrusive
Because Durandal relies on AMD modules and a light set of conventions, you actually don’t see Durandal itself in your code very much. Yes, you use its APIs to configure the framework and set up your application’s router, but beyond that you don’t invoke Durandal much at all. You won’t be calling into Durandal to create modules, controllers, models or anything else. You don’t need to inherit or extend from any special classes or objects. Most code in a Durandal application is vanilla JavaScript, and you could take it out and use it without the framework at all. It’s particularly powerful when used in combination with the observable module, which allows you to also remove all traces of Knockout code.
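A sketch of what this looks like in practice: a typical Durandal view model is just a plain object with an optional activate hook, no base class and no framework registration. The names and data below are hypothetical.

```javascript
// A plain-object view model. Durandal calls activate() by convention when
// the screen becomes active; here we just populate some data synchronously.
var flickrViewModel = {
  images: [],
  activate: function () {
    this.images = ['a.jpg', 'b.jpg'];
  }
};

// Because it's plain JavaScript, the object is trivially usable (and
// testable) entirely outside the framework:
flickrViewModel.activate();
console.log(flickrViewModel.images.length); // 2
```

The framework discovers and drives objects like this through conventions, rather than requiring them to call into Durandal.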

Flexible Composition
Not only do you need the ability to break down complex applications into small modules, but you need to be able to “compose” these small pieces together again at runtime. The declarative features of the AMD specification enable you to do this with your JavaScript objects in much the same way that you would leverage IoC and even simple name-spacing in other languages. The result is powerful and flexible object composition right out of the box.

But you don’t just need to compose objects, you need to compose views. Fortunately, Durandal has the most powerful, declarative view composition engine available in any framework today. Here’s a short list of some things you can do declaratively:

  • Statically/Dynamically compose a child view into a parent, allowing the binding context of the parent to be applied to the child.
  • Statically/Dynamically compose a child view with its own binding context into a parent view.
  • Statically compose a model with a dynamically changing view.
  • Statically compose a view with a dynamically changing model.
  • Statically/Dynamically compose in a view while overriding parts of the view with custom HTML on a case-by-case basis in the parent. It can have its own binding context or inherit its parent’s.

These are just a few examples of what can be done. Keep in mind that in all these cases the composition can be configured with transition animations, optimized per composition site to cache views or be driven entirely by static or dynamically changing data.

Elegant Asynchronicity
Building rich clients usually involves asynchronous requests for data or other resources. Durandal was designed to handle this with elegance from the very beginning. To that point, Durandal uses promises throughout. Durandal’s own API exposes all potentially asynchronous commands via promises. Internally, Durandal also understands when you use promises and it can therefore cause the binding system, router and other key features to respond appropriately. Out of the box, our promise implementation is provided by jQuery. However, existing site documentation explains how you can switch that out in favor of your favorite promise library, such as Q.
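For example, an activate hook can simply return a promise and Durandal will wait on it before composing the view. The sketch below uses a native Promise only so it runs standalone (Durandal’s default promises came from jQuery), and `fetchOrders` is a hypothetical stand-in for an AJAX call.

```javascript
// Hypothetical data-access call standing in for an AJAX request.
function fetchOrders() {
  return Promise.resolve([{ id: 1 }, { id: 2 }]);
}

var ordersViewModel = {
  orders: null,
  activate: function () {
    var self = this;
    // Returning the promise tells the framework to defer composition
    // until the data has arrived.
    return fetchOrders().then(function (data) {
      self.orders = data;
    });
  }
};

ordersViewModel.activate().then(function () {
  console.log(ordersViewModel.orders.length); // 2
});
```

The view model never juggles callbacks against the framework; it just hands back a promise and lets the lifecycle coordinate the rest.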

If you are targeting an ES5 browser, you can then enable the observable module. When this module is active, it teaches the binding system to data-bind directly to promises. The result is that you can set up a foreach binding over a promise for an array of data. In your own code, you don’t have to handle the promise yourself at all.

Navigation & Screen State Management
Durandal’s router is perhaps the most powerful router available today. It is configured with simple route-to-module mappings but can also be configured for convention-based routing. It automatically handles bad routes, supports hash and push-state navigation and provides a lot of capabilities around data-driven routes such as parameters, optional parameters, splats and query strings. Additionally, we support the notion of “child routers”, allowing you to structure and encapsulate entire areas of your application, reducing the overall complexity of the navigation structure.
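To give a feel for route-to-module mappings, here’s a sketch. The `router` object below is a tiny stand-in so the code runs on its own; in a real Durandal 2.x app you would require the router plugin instead, and the routes and module ids shown are examples only.

```javascript
// Tiny stand-in for Durandal's router so this sketch runs standalone.
var router = {
  routes: [],
  map: function (routes) { this.routes = routes; return this; },
  buildNavigationModel: function () { return this; },
  activate: function () { return this; }
};

// Route-to-module mappings: each route string maps to a module id,
// and 'orders/:id' demonstrates a parameterized, data-driven route.
router.map([
  { route: '',           moduleId: 'viewmodels/home',  title: 'Home',  nav: true },
  { route: 'orders/:id', moduleId: 'viewmodels/order', title: 'Order' }
]).buildNavigationModel().activate();

console.log(router.routes.length); // 2
```

The mapping table is plain data, which is what makes convention-based routing and generated navigation models straightforward for the framework to layer on top.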

In real applications you need more than just routing though. You need something which is occasionally called “screen state management.” What is that? Imagine you’ve got a customer filling out a form in your application. Before they save, they attempt to navigate to a new screen. The current screen is in a “dirty” state and the application may want to prevent the user from navigating away…or at least temporarily halt the process and ask the user what it should do with the data. In Durandal, the router supports something we call the “screen activation lifecycle” which allows any screen to synchronously or asynchronously control flow into and out of screens. But this functionality is implemented so that it’s decoupled from the router itself, thus Durandal also uses it to handle the lifecycle of its modal dialogs. In fact, you can use it anywhere in your app, even controlling fragments of a screen and individual component activations. This is a complex bit of functionality to get right but it is critical in real applications. Most frameworks just ignore it entirely, but not Durandal.
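The dirty-form scenario above can be sketched with the lifecycle’s canDeactivate hook: returning false, or a promise that resolves to false, cancels the navigation. The dirty flag and the `confirmDiscard` stub are hypothetical; a real app might pose the question through a modal dialog instead.

```javascript
// Hypothetical confirmation, standing in for a modal prompt. We pretend
// the user chose to stay on the page.
function confirmDiscard() {
  return Promise.resolve(false);
}

var editCustomerViewModel = {
  isDirty: true,
  canDeactivate: function () {
    if (!this.isDirty) return true; // nothing unsaved: allow navigation
    return confirmDiscard();        // otherwise ask the user first
  }
};

Promise.resolve(editCustomerViewModel.canDeactivate()).then(function (ok) {
  console.log(ok); // false, so the navigation would be cancelled
});
```

Because the hook can answer synchronously or asynchronously, the same guard works for routers, dialogs, or any component that participates in the lifecycle.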

Enterprise Ready

Consistent Testability

SPAs can be complex code-bases, and such projects need to be tested. Durandal has this area covered well. Because we’ve built on RequireJS and all of your code is built as AMD modules, you can easily fake, mock, or stub any part of the system. This applies both to your code as well as all of Durandal’s modules. The test strategy is consistent. In fact, if you want to see how to write unit tests for your application, all you have to do is fork the Durandal test suite, change some file paths and you are up and running with a test setup for yourself. If you are interested in testability, keep an eye on the site. Our upcoming release will contain additional documentation showing multiple strategies for testing.

SEO

From the beginning of work on Durandal 2.0, SEO was considered. Interestingly, much of this work has to be handled on the server for a SPA. That said, Durandal supports all the necessary client-side hooks and configuration options to enable full Google crawling of your application. Our next site release, coming this week, will show you how to do it.

Globalization and Localization

Modern applications need to be made available to diverse groups of people. Today Durandal is being used by companies all over the world who are rolling out apps to multiple cultures. Since Durandal was designed to be pluggable, it actually only takes a few minutes to plug a localization solution into the binder. With just a few lines of code centralized in one part of your code-base, you can ensure that everything displayed on the screen is properly localized; no hassle, no fuss. As part of our release this week, you’ll see some new documentation showing you just that.

Responsible Versioning

The Durandal project follows Semantic Versioning with great rigor. APIs do not break on minor or patch releases. Minor releases contain only additions, and patches contain only bug fixes. Only major version changes signal potential breakage, and those aren’t going to happen very often. When you depend on Durandal, you know exactly what the version numbers mean and what you can expect when updating. The docs from previous versions are made available perpetually, and conversion guides are provided for major version changes. We handle integration of the dependent libraries for you as well.

Commercial Support and Training

Durandal has an active community that is happy to help you learn the framework as you work through your application’s unique challenges. Much discussion is taking place already in our Google Group as well as on Stack Overflow. However, if that is not enough for you, or if you or your business need a safety net, Durandal has a few options available to you. First, we have commercial support. This is a monthly subscription you can cancel at any time, priced based on team size. We usually have clients purchase commercial support for the few months they are working on the project and then discontinue it after a successful rollout. It’s a great bargain compared to traditional consulting prices, and turnaround time is very good. Additionally, my company provides customized training, either delivered in person at your place of business or virtually through a series of web meetings. Pricing is usually negotiated on a case-by-case basis depending on the depth, length and number of students. Finally, in the next couple of months you are going to start seeing official video training become available. Some of this will be free and some of it will be available for a reasonable price, providing you not only with a way to learn directly from me, but also to financially support the project.

While there are several SPA frameworks available today, only Durandal has all the benefits and characteristics listed above. But not only is it one of the most powerful and flexible options today, it also provides you with a great return on investment for your non-SPA web work. On top of all that, it’s enterprise ready and the kind of training and support you would expect is readily available. And this is just the beginning. Wait until you see what’s next…

About the Author: 

Rob Eisenberg is a JavaScript expert and .NET architect working out of Tallahassee, FL, and he is the President of Blue Spire Consulting. Rob got his start with computer programming at the age of nine, when he thoroughly fell in love with his family's new Commodore 64. His fascination with programming started with the Commodore Basic language, then moved to Q and QuickBasic and quickly continued on to C, C++, C# and JavaScript. Rob publishes technical articles regularly and has spoken at regional events and to companies concerning Web and .NET technologies, Agile software practices and UI engineering. He is coauthor of Sams Teach Yourself WPF in 24 Hours and is the architect and lead developer of the Durandal and Caliburn.Micro frameworks. When not coding, Rob enjoys swing dancing, making artisan cheese and playing drums. Follow him on Twitter @EisenbergEffect.

Battle of the Smartwatches: Sony vs Pebble

I used to be one of those people who collected wristwatches when I was younger. A nice wristwatch can make a great fashion statement. Unfortunately, the greatest strength of the wristwatch was also its greatest weakness: it only told time. Yes, you had some that had a timer function, a stopwatch function, and even a calculator, but those devices were incredibly ugly. With the introduction of cell phones, wearing a wristwatch became redundant. It’s for this reason I stopped wearing them. That is, until recently. In the last few years, several companies have been working on new wristwatches called “smartwatches”. These watches do everything from displaying your text messages and email to showing local traffic camera footage. Today we’re going to take a look at two of what I think are the most affordable and good-looking smartwatches currently on the market: the Sony SmartWatch and the Pebble SmartWatch.

The Pebble
The Pebble started out as a Kickstarter project and actually set a Kickstarter funding record. 85,000 people backed the development of the Pebble, and over 250,000 of these things have been sold. With numbers like these, my expectations were probably too high for this device to meet. I was very aware of this fact going into this review.

Here’s what I liked about the Pebble.

•  The 144x168 pixel display is always on due to a new low-energy technology known as e-paper.
•  It’s the classiest-looking SmartWatch I’ve seen other than the Samsung Galaxy Gear (not reviewed here because it only works with Samsung smartphones).
•  It’s waterproof! You can go diving with this thing!
•  Battery life is rated at 5 to 7 days. In fact, I never charged it the entire week I was testing the device. 
•  It has an open software development kit, so you could develop your own software for it if you were so inclined. Other users have already begun doing this.

So what didn’t I like about the device?

•  Unfortunately, the device is only black and white because of its e-paper display.
•  It doesn’t use a standard cable for charging. No Micro-USB cables here. 
•  Not many applications can be installed on the device. During testing, I could only install 8 applications at any one time. 
•  All the “good” applications have to be side-loaded from a third-party website.
•  It’s not a touchscreen device. It uses four buttons on the side of the watch to navigate it.

Out of the box, the Pebble comes with a lot of different watch face applications to change its appearance. From the Pebble application on your smartphone, you can download more watch faces, and other third-party applications for the device are available as well. The Pebble is currently $150 as of this writing.

The Sony SmartWatch
My first smartwatch was the Sony Ericsson LiveView, the predecessor to the Sony SmartWatch. When Sony released the Sony SmartWatch, it was better in every way. It’s the first color smartwatch I’ve gotten my hands on and the applications for it are pretty amazing.

Here’s what I liked about the Sony SmartWatch.

•  It has a color screen. The 128x128 pixel 65,000 color screen is very pretty to look at.
•  It’s a touchscreen device. To navigate on the watch, use a combination of swiping and tapping onscreen buttons, just as you would on a smartphone.
•  Finding and installing applications on the watch is easy. To install an application on the device, just start the Google Play Store on your smartphone and search “Sony SmartWatch”. 
•  It has many good applications. This is where the Sony SmartWatch shines in my opinion. It has applications for local traffic cameras, instant messengers, remote recording applications, and even a web browser. 
•  It has 7 colorful and swappable watchbands via the clip (all sold separately).

So what didn’t I like about the Sony SmartWatch?

•  Ugh, the watchband clip. While I like the colorful bands, I would rather they had implemented this without adding additional height to the watch. This really makes it rise off your wrist too far for your shirt sleeve to cover it. 
•  It’s not waterproof, it’s splashproof. You can get water on it, but I wouldn’t go diving with it on.
•  This watch also has a non-standard charging cable. If you lose it, you’ll need to order it specifically from Sony. 
•  The battery life on the device is about the same as that of your smartphone. You are going to have to charge this thing every day after use. Sony’s website says you’ll get 3 to 4 days’ typical usage and up to 14 days standby. In their dreams.
•  It doesn’t send email notifications without installing a third-party application. While I understand why this wasn’t included, the Pebble seems to do this flawlessly and out of the box. There are a slew of really good third-party applications that do this for free and are easy to install.

The Sony SmartWatch has just a few notifications installed and enabled right out of the box. To get the real functionality out of this device, you’ll need to download a variety of third-party applications. The good news is that most of these applications are free and all of them are easy to find and install. The Sony SmartWatch is $129.99 as of this writing.

Other good smartwatches on the market right now are the Samsung Galaxy Gear, I’m Watch, and Sony SmartWatch 2. Unfortunately, the Samsung only works with Samsung phones, the I’m Watch is fashionably horrible, and the Sony SmartWatch 2 was only just released so I have not yet reviewed this device.

When I started writing this blog, I had originally intended to pick a winner between these two devices. However, they both serve their purposes in different ways. While I love the battery life and always-on display of the Pebble, it just didn’t have any “killer” applications, and you can only install 8 apps on it. And while I did get a lot more “cool watch, bro” compliments from the Sony SmartWatch, and its list of apps is stellar, the battery life is terrible and it’s not waterproof.

So maybe the answer is the perfect smartwatch doesn’t exist yet and maybe never will. However, with new devices from Apple, Samsung, and Sony in the works, I’m confident we are just a few years away from everyone talking on their “watchphones” while trying to eat a cheeseburger next to you in a restaurant. One day, that will be a good problem to have.

About the Author: 

 Legend Wilcox is a Senior Systems Engineer for MATRIX who has been working in the IT field for over 15 years. He’s an avid technology gadget collector and total geek.

Want Job Security? Be Underpaid

For as long as anyone can remember, there has been one primary driving force in every career: money.

I’ve been in the staffing industry for over 20 years. During that time, I have helped thousands of candidates seek their next position and hundreds of companies secure top talent for their organization. My observation over that span probably won’t surprise you - salary is always one of the, if not the, top criteria candidates consider before taking a job. And why not? The amount of money you make (or don’t make) can have a dramatic impact on how you live. For most of us, the calculus is simple: more money = more security = better life.

As if the “better life” argument isn’t compelling enough, the ubiquity of technology (smartphones with corporate mail, VPN, SaaS-based systems) has blurred the line between personal and work time – like our devices, we’re now expected to be “always on”. Maximizing income isn’t just smart, it’s fair… right?

Unfortunately, it’s all too easy to lose perspective on the real driver behind compensation. People typically aren’t paid more simply because they have new skills, longer tenures or more experience. Instead, they are paid more because those skills, tenure and experience bring greater value to their employers.

Which leads me to my point: Be underpaid.

This probably sounds strange coming from a career headhunter, since we get paid based on what you get paid. But I passionately believe that if you over-deliver… or are underpaid… there will always be a seat for you at the table. Now, I don’t believe you should be taken advantage of in this worker/company relationship – rather that you should be delivering more real value to the company than what they are paying you. Relationships are based on mutual need and mutual benefit. You need a job and the associated money, and companies need your skills, abilities and experience.

Think about the top NFL quarterbacks today and their sky high salaries. The value they bring to the organization must still be greater than what they are paid, or there won’t be a spot on the team for them. Sure, the money paid to Tom Brady, Aaron Rodgers and Tony Romo is almost too much to wrap your head around. But, clearly the value they bring to their respective organizations is greater than the millions they receive in salary. You see it all the time. As soon as their value to the team drops below the millions they are paid, they are cut or traded.

Whether you’re a top NFL quarterback, IT developer, finance manager or marketing guru – to keep your seat at the table – always strive to over deliver for your employer. I know it sounds counterintuitive, but I suggest that you feel just a little underpaid for your contributions. Keep in mind that your career is more than just your current job. If you approach your selected profession as a marathon and not a “what’s in it for me – show me the money” sprint, the long term gains you’ll enjoy will give you not only the income you desire, but job satisfaction and security for many years to come.

About the Author: 

Jon Davis is Executive Vice President of National Sales for MATRIX Resources. He has over 20 years of experience in leading sales teams and corporate recruiting efforts in all verticals ranging from startup companies, mid-market organizations and the Fortune 100. Follow Jon on Twitter for more career tips: @JonDavis12.

Posted in: 
Job Seeker

Distributed Agile Teams: The ART of the START

I’ve been sharing about agile methods for over ten years at conferences and workshops. One of the top three questions I always receive from attendees is:

Does agile work with distributed teams?

And sometimes the question is phrased another way:

The notion of co-located teams is nice in theory, but in real life we have team members all over the world. We need to cobble together teams based on our business needs from wherever they are. Does agile support that level of high distribution?

I often smile at the repetitiveness of the question. It indicates clearly that enterprise level software development is often distributed. It also indicates that outsourcing is still alive and well. I’ll try to provide some answers to these questions by sharing two stories of distributed teams I have experienced.

A Tale of Two Distributed Teams

The “Good”

I was lucky enough to be invited to do an agile jump-start for a new client. They are a rather large firm that builds hardware and software devices supporting mechanical control systems. They were kicking off several projects that encompassed many teams, some of them offshore and many distributed. They were looking to leverage Scrum as the method for starting these projects, and they invited me in to do some training and get the teams sprinting in this new style of product development.

When I arrived at my first class session, I learned that they had invited four complete Scrum teams to attend. In fact, one of the teams was based in India, and they had flown the entire team in for several weeks. The first week was for our Scrum boot camp, and the next few were to work with local teams as everyone started sprinting together.

I distinctly remember at the time thinking how novel this was. My typical experience with firms kicking off agile in distributed teams was more along the lines of the following:

  • Throw disparate individuals (local and remote) together into “teams”
  • Tell them they’re “going agile”
  • Send them some references on agile; at best, run them through a short class
  • Expect the team to start sprinting … ASAP
  • Expect great results
  • Rinse and repeat if you still have a customer…

Clearly I’m joking a bit here, but there is a good bit of truth in these steps. Many firms don’t start up their distributed agile teams very well. So I was understandably surprised at how thoughtful my client was in investing in their teams’ start.

I returned to the client several sprints later to do an informal assessment.  By now the remote India team had returned home and was happily working with the local teams. I sat in on some of their stand-ups and other meetings. I was incredibly impressed with how well they were working as an agile team. I was even more impressed with how they integrated and collaborated with the local teams — it was virtually seamless.

It struck me that the cost the company had incurred in bringing everyone together to generate a solid start was paying them back big time. That solid project start-up had put everyone on the same footing and really solidified them as a set of cross-functional teams that had the same vision and were working toward the same goals.

As an aside, I’ve seen this same investment pay similar dividends at multiple clients. Now let’s explore another story.

The “Bad to Ugly”

I was invited to visit another set of teams to help them with some difficulties they were having working across distributed locations. They were executing sprint number five of a 12-sprint release sequence. There was a distributed UX/design team, two front-end UI development teams (one in the US and the other in Brazil), and a back-end development team in Singapore. In short, a very distributed mix across four distinct teams working on a single project.

One key challenge I remember was that the front-end and back-end teams were really struggling to figure out how to work together. They were using email and documents as their primary means of collaboration. But quite often, it would take days to sort out a simple interaction that was required to move a user story forward. And the issues weren’t focused on one team or one locale. There were pervasive communication problems across the teams.

One idea came up in a local (US) team's retrospective. It turned out that nobody had ever “met” the offshore Singapore team that they had to integrate with (at an API level and at a project collaboration level). The idea was to have a video conference call across the two teams as a means of introduction and familiarization. Everyone thought it was a great idea and we scheduled the intro.

I volunteered to serve as the facilitator of the video introduction. There were eight members on our local UI team. We fired up the video and zoomed into the room in Singapore, eagerly expecting to meet a team roughly our size and composition. And they started filling the room — and filling, and filling!

In the end, over 30 people came into the room from Singapore. We were amazed, and over the course of the introduction we quickly realized some things:

  • We had assumed some of them were men and they were actually women and vice versa … who knew?
  • There were two expectant mothers in the room … who knew?
  • We thought there were roughly 8-10 members of the team, and there were 30. Even funnier, we’d been working with them as if their capacity was 8-10. Who knew?
  • Clearly nobody on either side had ever seen or met each other. Apparently ALL collaboration was via email, text messages, and documents. Not a phone/video call to be had.
  • We learned about their background and skill levels, realizing that they knew much more than we had been giving them credit for … who knew?
  • We learned that they were heavily multitasking and being interrupted across many projects. Ours wasn’t their highest priority … who knew?
  • We finally realized that they didn’t like this “agile stuff” and preferred more traditional development approaches. So this was a major and difficult change for them and their leadership … who knew?

It was a fantastic, baseline-setting call for the two teams. It created much better understanding and led to much improved empathy, teamwork, and results going forward. But why wasn’t this the first thing that happened when the teams were formed and the project began? They could have avoided a tremendous amount of frustration, waste, and missed progress. Who knew?
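The capacity mismatch in that story can be made concrete with a small sketch. The team sizes echo the story, but the sprint length and focus factors below are hypothetical numbers for illustration only:

```python
# Hypothetical sprint-capacity sketch. A focus factor discounts for
# interruptions and multitasking (the Singapore team was spread
# across many projects).

def sprint_capacity(team_size, sprint_days, focus_factor):
    """Estimate available person-days for one sprint."""
    return team_size * sprint_days * focus_factor

# What the US team assumed: ~9 people, mostly dedicated to the project.
assumed = sprint_capacity(team_size=9, sprint_days=10, focus_factor=0.8)

# Closer to reality: 30 people, but heavily multitasked elsewhere.
actual = sprint_capacity(team_size=30, sprint_days=10, focus_factor=0.2)

print(assumed)  # 72.0 person-days
print(actual)   # 60.0 person-days -- more people, yet less capacity
```

Under these made-up numbers, a team three times larger delivers less usable capacity, which is exactly the kind of surprise the video call surfaced.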

Three CORE Starting Patterns for Distributed Agile Teams

I hope my two real-world stories have convinced you that a fundamental aspect of successfully implementing distributed agile is how you start your teams and projects. Here is a bit of a checklist to help you improve your distributed agility:

Establish the Team(s)

  • Formation – Take some time to thoughtfully form your teams. Introduce them. Allow for social collaboration of some sort.
  • Leadership – Take a look at the leadership within your organization and ensure that each team has some experienced technical leadership. Also ensure that each team's local functional leadership is aligned with agile leadership fundamentals.
  • Co-Locate in Clusters – Look across the members you have to work with and try to cluster team members together (geographically) as much as possible.
  • Skills Aligned with Backlog – Remember that team skills need to align with the Product Backlog and that each team must have sufficient skill and domain experience to deliver high quality results.
  • Cross-Cutting Concerns – Consider how the team will handle cross-cutting concerns like UX design, architecture, and integration testing and deployment.

Train the Team Together

  • Basic Training – If possible, training should be approached as a whole team effort and is best done face-to-face. Everyone needs to hear the same things. Simulations should be a part of the training so that the teams get the opportunity to work together.
  • Roles & Responsibilities – Developing clarity around expectations is crucial for agile teams to start up. Taking the time to establish team and cross team roles and responsibilities and/or rules of engagement early will pay dividends during sprint execution.
  • Focus on Scrum Master and Product Owner – These are the most important and specialized roles within agile teams. Ensure you’ve selected wisely, don’t overload other roles, and provide sufficient training and ongoing coaching for these individuals. It’s crucial in distributed teams!
  • Start the First Sprint Together – If at all possible, start your first sprint with the team in the same locale. If that’s not possible, then start slowly, so that teams aren’t rushing to produce working software; a “working team” should be their first goal.

Establish Norms, Standards, and a Charter

  • Team Norms – Set norms for listening, respect, behaviors, collaboration, quality, retrospectives, etc. Establish these as a team, post them on walls, and continuously remind yourselves of your agreements.
  • Meeting Norms – There can be an awful lot of meetings when moving to agility, and time zone differences make conducting them even harder. Place heavy focus on just-enough, just-in-time, and well-facilitated meetings. Don’t forget the power of a time box.
  • Definition of Done – I have a nice presentation that depicts four levels of Definition-of-Done (DoD) or Done-Ness consideration within agile teams. There’s a link in the references. This is an area to truly focus on when working in a distributed team.
  • Tooling – Tools become more important in support of distributed teams, but they can also get in the way of collaboration and learning. Carefully select a minimal set of tools, while reinforcing face-to-face collaboration. Then grow your tools over time based on team feedback and needs.
  • Commitment to Agility – It is clearly harder to support agile tenets and tactics when participating within a distributed team. It will test your mettle. Establish broad commitment to your agile principles across teams and stick to them.
  • Chartering & Release Planning – These can be critical for cross team integration, dependencies, sharing a mission and vision, and determining and measuring success. The more time you can spend in up-front chartering activity, the better your results will be. So resist sprinting too soon!
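The Definition-of-Done bullet above can be sketched in code. This is a minimal illustration, not a recommended checklist; the criteria names are hypothetical, and each set of teams should agree on its own list:

```python
# A minimal sketch of a shared Definition-of-Done checklist that all
# distributed teams apply consistently. Criteria names are hypothetical.

DEFINITION_OF_DONE = [
    "code reviewed",
    "unit tests passing",
    "integrated with back-end API",
    "acceptance criteria demoed",
]

def is_done(story):
    """A story counts as 'done' only when every shared criterion is met."""
    return all(story.get(criterion, False) for criterion in DEFINITION_OF_DONE)

story = {
    "code reviewed": True,
    "unit tests passing": True,
    "integrated with back-end API": False,  # still blocked cross-team
    "acceptance criteria demoed": False,
}
print(is_done(story))  # False
```

The point of encoding it at all is that every team, in every locale, evaluates the same list; "done" stops meaning different things in different time zones.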

Sprint Review Together!

One final point: distributed teams should hold their sprint demos/reviews together as much as possible. That includes members of every team working together to deliver a project or product. The more you can integrate the demonstration of results, the more you will drive effective cross-team collaboration.

And improvements surfaced during the reviews will naturally cascade into the teams’ retrospectives, driving collaborative improvements.
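Even scheduling a joint review is a challenge with teams as spread out as those in the story. Here is a minimal sketch that computes which UTC hours fall inside every team’s local 9-to-5 workday. The city names mirror the story, and the fixed UTC offsets are simplified assumptions that ignore daylight saving; a real scheduler should use proper time zone data (e.g. Python’s zoneinfo module):

```python
# A minimal sketch for finding a joint review slot across distributed
# teams, using simplified fixed UTC offsets (no daylight-saving logic).

def overlap_utc(team_offsets, start_local=9, end_local=17):
    """Return the UTC hours that fall inside every team's local workday."""
    shared = None
    for offset in team_offsets.values():
        # Map each local working hour to its UTC hour.
        hours = {(h - offset) % 24 for h in range(start_local, end_local)}
        shared = hours if shared is None else shared & hours
    return sorted(shared)

teams = {"Raleigh": -5, "Sao Paulo": -3, "Singapore": 8}
print(overlap_utc(teams))  # [] -- no shared 9-to-5 hour across all three
```

An empty result means someone has to flex outside normal hours for the joint review; widening the start_local/end_local window shows which site bears that cost.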

Wrapping Up

Going back to my theme of what attendees ask me about distributed agile teams, there’s often another question:

Do we really have to do ________________?
It’s really hard to support that agile practice because we’re distributed!

You could fill in the blank with any of the following: swarm, collaborate, stand-up, groom, sprint plan, code review, design review, pair, test, talk to each other, etc.

My consistent answer is always — yes, you do. Now you may need to get creative with the how and the when in your support of solid agile team collaboration tactics, but skipping them when the going is tough is rarely good practice.

I’ll leave you with this thought.

Is agile harder to do in distributed teams? Of course it is. But is it possible to do it well in distributed teams? Absolutely. It’s truly up to the business to commit to properly starting their projects and the teams to commit to agility and figure out how to drive great results.

It’s simply another choice as you “go agile”. Please choose wisely.

For Further Reading

  1. A related blog post from Johanna Rothman and Shane Hastie:
  2. A presentation on agile done-ness criteria:
  3. I highly recommend a wonderful book related to Agile Chartering. The title is Liftoff: Launching Agile Projects & Teams by Diana Larsen and Ainsley Nies.
About the Author: 

Bob Galen is a software engineer by trade and a technical leader by experience. He is a principal in RGalen Consulting Group, based in Cary, NC. For the past 15 years, he has held significant leadership roles at Bell & Howell, ChannelAdvisor, EMC, Lucent, Thomson/Dialog, and Unisys. Currently his focus is Agile Methods Coaching, Training, Evangelism & Speaking. He has a track record of effectiveness in leading diverse technical teams toward project success in an energized and humane fashion, utilizing Agile & Traditional methods. If there is a key to Bob’s style, it’s balance and effectiveness. He is a member of the Agile Alliance, a Certified Scrum Master & Scrum Product Owner, and an experienced XP Coach.
