The Agile Project Manager—Viewing RISK Through a Different Lens

I often find myself teaching classes on agile methods in a wide variety of venues. One of my more popular sessions maps traditional project management techniques to their agile counterparts. One pattern I’ve noticed: the more PMPs in the audience, the more spirited the discussions become.

One of the core translation areas between traditional and agile project management relates to risk management. I often get a lot of pushback from the PMPs telling me that the agile approaches to risk management simply won’t work. Their arguments are based more on traditional thinking, PMBOK guidance, and the “we’ve always done it this way” pattern than on a situational response to individual project needs. I usually just leave it that agile risk management is “different” and move on to safer topic areas. But I’ve always wanted to add more flavor, and this post seemed like a good opportunity to do so.

Traditional Risk Management

In this view, the Project Manager manages the risk. The premise is that you need a highly skilled and experienced PM to adequately control and manage project risks: risks that can be anticipated, planned for, reduced, avoided, transferred, exploited (when positive), triggered, and mitigated. One of the first activities on any project is to begin compiling a risk list so one can manage the project risks. These risks can be gathered in a wide variety of ways—team brainstorming, speaking directly to key stakeholders, analyzing previous projects, and surveying the technology and business climate. Once risks are identified, teams often evaluate the size of each one. A common mechanism is to gather likelihood and impact estimates from the project team, then multiply the likelihood of the risk occurring by the impact the risk would have.

So, something like the following table (the specific rows here are illustrative):

Risk                             Likelihood (1-5)  Impact (1-5)  Score
Key vendor slips their delivery  4                 3             12
Lead developer leaves the team   2                 5             10
Chosen framework underperforms   3                 2             6

Once you’ve accumulated a “complete” list of risks and analyzed their “priority,” a plan is put in place to detect and mitigate the risks that are most likely to occur. Quite often this is a very detailed plan, formulated with all of the project’s functional teams—expending quite a bit of time and effort on these hypothetical risks, both in up-front planning and in ongoing monitoring and discussion.

A Quick Point of Context

Oh, and I need to get this point out of the way. My primary context for this post is technology projects. If you’ve managed a technology-driven project, IT project, software development effort, or product integration project you will certainly agree that these beasties are “different” than other types of projects. And yes, their risks are different too. Rarely can you predict risks early on in these projects. Why? Because you simply haven’t done it before—so you have little to no experience with this specific type of project or technology within the team chartered with implementing it.

In my view, the variability in complex software projects is what undermines traditional risk management. Rather than expending all of our effort on up-front risk management, we should expend most of it on figuring out what we know and don’t know with respect to our technologies: specifically, how do we design and code this particular widget? In other words, do some of the project work before trying to predict risk, and let the risks emerge from our design and coding efforts rather than trying to predict them.

It’s this focus on iterative, working code that raises agile software project risk management from conjecture to real-time risk discovery. So, let’s move on to agile risk management.

Agile Risk Management

In traditional waterfall projects, risk essentially surfaces late—usually in the latter 20% of the project, and sort of all at once. In agile projects, risk surfaces much earlier; it is front-loaded. Yes, for an identical project the same risks will probably surface under an agile approach, so agile is not a prevention mechanism. The difference lies in when you discover them.

This leads nicely into the essence of agile risk management: it is an emergent approach. First of all, you rarely hear the agile methods mention risk at all. Why? Because we flip the notion of planning for risk on its ear a bit. How? Instead of guessing at or planning for risk, one of the central themes of the agile methodologies is to deliver real, working software in very thin slices of functionality via time-boxed iterations.

Realization of risk occurs quickly and abruptly. It hits you in the face as a team at the earliest possible moment. Quite often it surfaces in your daily stand-up meetings—so very little lag time.

Is it solely the responsibility of the Project Manager? No. In the agile case, the self-directed team is primarily responsible for detecting, acting on, and mitigating risk…and here’s the important part: in real-time, as each risk occurs. It’s a whole-team, highly responsive stance towards risk.

  • The team engages risk through how they approach iteration planning. They choose what to build first, so very often the development strategy is to load risky items into early iterations.
  • They complete research-oriented user stories well in advance of delivering the related feature stories.
  • They experiment early to see how the team will respond to technical challenges and the unknown.
  • They measure the velocity of the team to understand their capacity to meet business milestones.
  • They engage stakeholders in assessing requirements as they evolve…in real-time and on working software.

The Rework Balance

One of the most important aspects of agile risk management is effectively balancing your rework. It is one of the key indicators of whether your agile project is being run well or is running off the rails. Agile teams have a tendency either to sprint too early, before they fully understand what they’re about to build, or to sprint too late, after over-analyzing the work.

Agile speed is a rework balancing act. If you have zero rework, then you’re playing it too safe: you’re analyzing everything in advance and taking no risk. For example, you deliver a fully operational messaging framework component for use without ever having sent a message through it. This is that BDUF (Big Design Up Front), waterfall-esque approach to architecture. It appears less risky, but it isn’t. You’ve just delayed your information gathering on how well your strategy works.

But if you start too early, without even thinking about some of the dynamics of your messaging architecture, and instead simply sling code, then your rework is likely to be high. As you make discoveries in your testing, you’ll need to go back and rework large swaths of your framework.

So somewhere in between these two end-points lies an appropriate rework balance for each unique agile team. If a team doesn’t think, they’ll suffer from too much rework risk. If they go too slowly, they’ll not achieve the delivery and speed promises of agility. They’ll also still have rework, since they cannot anticipate every eventuality.

Wrapping Up

Now, all of that being said, I don’t think we throw out all of the traditional risk approaches in agile contexts. For example, I think it’s a very healthy exercise for larger-scale agile projects to assess and understand the Top 10 risks they might be facing, and also to look for more systemic or organizational risks and do a bit of up-front planning for them.

But don’t spend too much time there, nor on exhaustive detection strategies or mitigation plans. Set up a straw-man risk structure, then start leveraging the emergent nature of agile iterations to surface risks quickly. And once they surface, ACT immediately on the risks that prove real, not just those you planned for or anticipated.

Now for you traditional project managers listening in, I wonder if some of these agile approaches might be applicable in your current project contexts. I’d guess yes!

About the Author: 

Bob Galen is the Director, Agile Practices at iContact and founder of RGCG, LLC, a technical consulting company focused on increasing agility and pragmatism within software projects and teams. He has over 25 years of experience as a software developer, tester, project manager, and leader. Bob regularly consults, writes, and is a popular speaker on a wide variety of software topics. He is also the author of the book Scrum Product Ownership – Balancing Value from the Inside Out. He can be reached at bob@rgalen.com.

Posted in: 
PM-Agile

Dependency Injection in ASP.NET MVC

Welcome to part one of a two part series on dependency injection in ASP.NET MVC. Part one of this series will focus on the basics of dependency injection and code structure. In part two, we will dive into the specifics of DI in an ASP.NET MVC application.

Although dependency injection frameworks (DI/IoC containers) have been in use for quite some time on many development platforms, they have grown in popularity in recent years within the .NET community. Many of the early .NET DI frameworks were ports of their Java brethren, and some of those early ports, such as Spring.NET, still exist today, albeit with several differences from their Java versions. Microsoft released its own framework some time ago called Unity, although its future is uncertain. Microsoft’s focus in recent years on solid design patterns, such as MVVM in the Silverlight/WPF world and MVC in the more recent incarnations of ASP.NET, has really brought dependency injection into the spotlight as a tool for writing loosely coupled code.

For those unfamiliar with dependency injection or the more general principle of inversion of control, here is a brief example. Suppose we have the following AlbumSearch class. Given an album name, this fictitious class will return a track listing.

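Here’s a minimal sketch of such a class, assuming a simple Track model and a FreeDBService that exposes a GetTracks method (both assumed for illustration):

using System.Collections.Generic;

// A simple track model, assumed for illustration.
public class Track
{
    public int Number { get; set; }
    public string Title { get; set; }
}

// The FreeDB-backed lookup service described below.
public class FreeDBService
{
    public List<Track> GetTracks(string albumName)
    {
        // Imagine a call out to the FreeDB internet database here.
        return new List<Track>();
    }
}

public class AlbumSearch
{
    public List<Track> FindTracksByAlbum(string albumName)
    {
        // The dependency is created directly inside the method,
        // hard-wiring AlbumSearch to the FreeDB implementation.
        var service = new FreeDBService();
        return service.GetTracks(albumName);
    }
}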

As you can see, the FindTracksByAlbum method creates an instance of FreeDBService. Presumably, this FreeDBService goes out on the web and retrieves the appropriate track listing from the FreeDB internet database of CD data. Since we are creating an instance of the FreeDBService directly, we can classify its use as a dependency for our AlbumSearch class.

While functional, there is an issue with this approach. What if we wanted to use the CDDB instead of FreeDB for some or all of our searches? What if we wanted to write a unit test for our Find method that could take advantage of mock data? Why would we want to do either of those two things? Although this is a trivial example, let's assume our AlbumSearch class is part of a larger library that will be consumed by an unknown application. Further, let's say that the unknown application will be responsible for deciding which CD database to use.

How do we fix this? First, we need to make a few simple modifications to our sample code.

 

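A sketch of those modifications, reusing the hypothetical Track model from above:

using System.Collections.Generic;

// The abstraction both database services implement.
public interface IDBService
{
    List<Track> GetTracks(string albumName);
}

// FreeDB-backed implementation.
public class FreeDBService : IDBService
{
    public List<Track> GetTracks(string albumName)
    {
        // Query the FreeDB database for the track listing.
        return new List<Track>();
    }
}

// CDDB-backed implementation.
public class CDDBService : IDBService
{
    public List<Track> GetTracks(string albumName)
    {
        // Query the CDDB database for the track listing.
        return new List<Track>();
    }
}

public class AlbumSearch
{
    private readonly IDBService _dbService;

    // The dependency is now supplied ("injected") by the caller;
    // the consuming application decides which implementation to use.
    public AlbumSearch(IDBService dbService)
    {
        _dbService = dbService;
    }

    public List<Track> FindTracksByAlbum(string albumName)
    {
        return _dbService.GetTracks(albumName);
    }
}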

Note that we have introduced an interface, IDBService, to represent our track listing service. Plus we have added two implementations of IDBService, one for the FreeDB database and one for the CDDB database. This allows our AlbumSearch class to rely on a generic implementation internally rather than a hard-coded instance of the FreeDBService. Finally, notice that we now have a constructor that requires an instance of IDBService. This is where the magic or “injection” occurs. By structuring our class this way, we have given the consuming application the ability to determine what implementation of IDBService is appropriate.

The consuming application or code can use a variety of techniques and frameworks to determine which implementation of IDBService to use at runtime. This could be as simple as an implementation of the service locator pattern or, preferably, some sort of dependency injection framework such as StructureMap, Unity, or Ninject. Modern DI frameworks rely on configuration to determine the appropriate implementation of a given interface at runtime. Configuration is generally XML-file based or code-based using some sort of fluent API. If we were using Ninject, you might find a fluent API line like this somewhere in our application’s configuration code:

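// Ninject's fluent binding syntax, using the names from this example:
// any request for an IDBService is satisfied with a FreeDBService.
kernel.Bind<IDBService>().To<FreeDBService>();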

When we actually create an instance of our AlbumSearch class, you may find some code like this:

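// A sketch of the consuming code. We create a Ninject kernel,
// register the binding, and resolve AlbumSearch; Ninject sees the
// IDBService constructor parameter and supplies a FreeDBService.
var kernel = new StandardKernel();
kernel.Bind<IDBService>().To<FreeDBService>();

var albumSearch = kernel.Get<AlbumSearch>();
var tracks = albumSearch.FindTracksByAlbum("Some Album");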

This is a simplistic example, but you get the point. Within the Ninject kernel, or registry, we have basically said that all requests for an instance of IDBService will actually return an instance of FreeDBService. We could have implemented this in a multitude of ways, but this is the most straightforward. In part two, we will take a look at a cleaner approach in ASP.NET MVC.

We now have a handle on the general code structure needed to support some simple dependency injection scenarios. Check back for part two of this series to learn how to implement dependency injection in an actual ASP.NET MVC application.

About the Author: 

Russell Thatcher is a Software Architect for a medical software provider. He has over 10 years of software development experience in a variety of industries, including healthcare, finance, and defense. With expertise in the latest Microsoft technologies and Agile development practices, Russell consistently delivers high-quality, on-target software solutions for his clients.

Posted in: 
Development

Ten Years Working at MATRIX, Whoa!

At the end of this year, I will be celebrating a milestone that relatively few people achieve.

In December, I will have been with MATRIX for 10 years. While this tenure is not the norm for most, it’s actually fairly commonplace here, where celebrating those with 10 or more years of service is an annual event. Each tenured employee is recognized (and roasted to a degree) at the event, which includes speeches from peers and managers, funny photos from our past, food and drinks, and the opportunity to mingle and reminisce about our time with MATRIX.

At the event, several people asked me why I had stayed at the company for so long. It got me thinking. I suppose the answer is pretty simple yet vague: I’m happy here. With the economy showing signs of improvement, companies need to work harder to retain their best employees. Compensation, bonuses, and perks are certainly factors, but there are many other intangibles that should not be overlooked. I can share the factors that contribute most to my “happiness” here, which are likely important to a lot of people:

  • Advancement. I’m currently in my fourth role at MATRIX. I started as the receptionist, moved on to support the executive team, transitioned into marketing to lead proposal development, and now I focus on our CRM Marketing strategy. Moving into new roles, gaining new skills, and challenging myself are key to my tenure here.
  • Appreciation. I’ve always been made to feel that my contributions are appreciated and the work I do here is highly valued. We have various recognition programs here, but more than that, I am actually told this on a frequent basis from my manager and upper management.
  • Willingness to Listen. Over the years, I’ve become more apt to share my opinion here and I am made to feel my opinion matters. I work with people who are willing to make changes to help me do my job better including access to training and resources.
  • Great People. I genuinely enjoy the people I work with on a daily basis and feel my co-workers have a work ethic and value system similar to mine. I’ve made many great friends here that I spend time with outside of work. I even met my husband at MATRIX; he was a Recruiter here for seven years, though he doesn’t work here anymore.
  • Work-Life Balance. This has been a popular buzz phrase for years. For me, it means I have a flexible schedule when needed and can work from home or leave for a doctor’s appointment, and I have accrued vacation time to take time off when I need to (and, I’m not made to feel guilty when I take my time off!).

When it comes down to it, I think truly valuing each employee and making them feel appreciated goes a very long way in their retention. As the economy continues to improve, I’d recommend taking note of these factors and how well the needs of your employees are being met.

About the Author: 

Michelle Spears is CRM Manager at MATRIX Resources. She coordinates targeted candidate, consultant, and client campaigns and oversees the proposal development process for MATRIX. She also researches and creates quarterly briefings on the IT market and compiles a bi-annual IT salary survey.

Posted in: 
Hiring Manager

SQL 101: Sub Queries

In my next post I will introduce you to JOIN statements. Once you’ve been introduced to both techniques, compare the performance of a sub query against the performance of a JOIN: you will always want to choose the option that runs quickest while returning the correct results.

If you would like to create my test table on your own server, run this script.

There are two types of sub queries: correlated and non-correlated. Until you learn about joins, I’m only going to teach you about non-correlated sub queries.

A sub query is simply one complete query nested inside another. Let’s start by writing a query to show us all the productNames in the productSale table.

SELECT DISTINCT
       productName
FROM productSale

This is a complete query. Now, if we take that query, we can nest it inside another query to show us the buyers who purchased those products.

SELECT DISTINCT
       buyer
FROM productSale
WHERE productName IN (
      SELECT DISTINCT
             productName
      FROM productSale)

buyer
--------------
Shannon Lowder

Please notice that the comparison used is the IN comparator. Since the sub query returns multiple values, you must use IN. If your sub query will only ever return one value, you could use = or another single-value comparison operator.
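
For example, here’s a sketch of the single-value case, using a hypothetical filter that matches exactly one product:

-- Valid only because the sub query returns exactly one value.
SELECT *
FROM productSale
WHERE productName = (
      SELECT productName
      FROM products
      WHERE productName = 'paper')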

The IN query above shows you my name, since I’m the only buyer defined in the table. What this technique allows you to do is find data in one table based on data in another. This is where you will see people use a sub query instead of a JOIN.

While it's not wrong, it can take longer to execute than a JOIN.

Consider the following query; if you run the attached script on your SQL server, you can see the output for yourself!

SELECT *
FROM productSale
WHERE productName IN (
      SELECT productName
      FROM products
      WHERE price <= 1.00)

buyer           productName  purchaseDate             qtypurchased  pricePaid
--------------  -----------  -----------------------  ------------  ---------
Shannon Lowder  paper        2000-01-01 00:00:00.000  2             1.00
Shannon Lowder  pencil       2000-01-05 00:00:00.000  1             0.25
Shannon Lowder  pencil       2000-01-07 00:00:00.000  1             0.25

The sub query will retrieve all the productNames in the products table where the price is less than or equal to a dollar. Then the main, or outer, query lists all the information in the productSale table where the productName matches. This is definitely a query where a JOIN could be used, and next time we will rewrite it to use one. But in this case it clearly shows how to write a sub query that retrieves information from one table based on values in another.

If you have any questions, send them in! I’m here to explain everything I can about SQL, and how to use it more effectively!

About the Author: 

Shannon Lowder’s expertise spans Business Analysis, to gather the business requirements for the database; Database Architecture, to create the logical design of the database; Database Development, to build the objects the business logic needs; and Database Administration, to keep the database running in top form and ensure there is a disaster recovery plan. Connect with Shannon Lowder.

Posted in: 
Development

The Agile Project Manager—Fail NOW as a Strategy

I was at a conference not that long ago speaking and sharing on various agile topics. As often happens, a young man stopped me to ask a few questions after one of my presentations. We struck up a nice conversation that eventually slipped out into the hotel corridors.

We started talking about sprint dynamics within Scrum teams, and I happened to mention that I usually coach teams towards declaring their sprints a success…or pause for meaningful effect…a failure. We do this as part of the team’s Sprint Review, with the Product Owner making the final determination based on whether the team achieved their Sprint Goal(s).

He was visibly upset with my view. He said that they (he was working at a well-known Atlanta company) had never failed a sprint. Never! They could not nor would not use that WORD in their culture. I asked him point-blank – have you ever failed a sprint? He said of course they had. Many times. But instead of using the term fail, they used the term ‘challenged’. That way, stakeholders wouldn’t get the wrong idea and question the skills or motives of the team.

We went round-and-round for another 10-15 minutes in our discussion, but we never really settled our differences. We simply agreed to disagree. Although it wasn’t a terribly wide chasm between us, I distinctly remember walking back to my room shaking my head. I simply didn’t understand the big deal about failure. About using the word. About a team saying…we failed. In my coaching practice and in my “day jobs”, I’ve been able to steer and evolve our views so that failure is not a bad word. I.e. failure is good. Failure is ok. Failure leads to improvement. Failure is a part of life.

So in this post, I want to discuss failure from a few perspectives. The discussion isn’t intended to be exhaustive. Instead, I just want to share some thoughts and get you thinking about failure: how you view it in your organization, what your tolerance for it is, and whether your normal reactions to it deserve reconsideration. I think this extends to your risk handling as well, because the two are inextricably linked.

Coaching to Avoid Failure

In his blog post from June 20, 2011, entitled Coaching is Not Letting Your Teams Fail, Giora Morein makes the case that agile coaches should be leading or guiding their teams away from failure. He brings up the analogy of a Sherpa guiding mountaineers. And yes, in the mountain climbing example I will have to agree that failure is probably not the result we want.

However, in the non-life-threatening cases I think I disagree with Giora. I wholeheartedly believe that failure can actually be good for a team. I also think the role of the coach is to help a team look at their performance through two lenses. The easier of the two is the success lens. This is the lens where you give the team positive feedback; where you tell them to repeat the practices that are working for them; indeed, where you identify the practices they need to amplify and do “more of” to achieve greater and greater results.

These conversations are clearly easier.

But what about the failure lens? As a coach, do you provide constructive criticism? Do you show a team where they misstepped, both individually and as a team? I think so. But certainly not in a malicious or heavy-handed manner. If you’re effectively coaching a team, you must explore their errors and mistakes with the same passion and energy you bring to their successes.

And I don’t think you do this quietly, hiding behind doors and not externally acknowledging their challenges. No. I think you approach it in a completely transparent and matter-of-fact manner. Laying the groundwork that failure is appreciated and welcome. That from it, your teams look for improvement possibilities and move forward quickly towards delivering improved results.

Agile Exposure

In agile teams, there are two key ceremonies focused on a team’s successes and failures. In Scrum, they are the Sprint Review (demo) and the Sprint Retrospective (lessons learned). Typically the sprint review is exposed to the world, so you might want to be careful in how you couch your failures, so that stakeholders don’t misconstrue the impact or the effort the team is putting forth. Nonetheless, I believe you should declare sprints either a success or a failure as part of the introduction to the team’s review.

In Scrum, it’s the Product Owner’s role to determine this—and it’s relative to the goal(s) the team committed to at the outset of the sprint. Hopefully those goals were flexible enough to allow the team to adjust their work focus to creatively attain them.

For example, I think a very poor sprint goal is one built around the team delivering a set number of user stories—or other indicators of by-rote execution. I think this leads to potential sand-bagging on the part of the team: hitting a numeric goal rather than thinking about the true problem they’re trying to solve. Instead, I think better goals revolve around achieving some demonstrated behavior that solves a specific set of customer problems. Success is then measured against how well the team met the spirit of the goal and how well they applied the agile principles in their execution.

For example, I’ve seen teams that committed to 10 user stories, but had an extra three days of idle time at the end of their sprint, fail that sprint. Sure, they delivered on their commitment, but the commitment was flawed. They sandbagged and over-estimated. They also didn’t make their additional capacity available to their Product Owner by asking for more work within the sprint time-box. Instead, they planned ahead or gold-plated their deliverables.

I have also seen teams that committed to 10 stories but delivered only 7 have a very successful sprint. In it they worked hard against complexity and adversity. They were incredibly transparent and engaged their Product Owner in making daily adjustments to priority as their understanding of capacity changed. And as a team, while they didn’t deliver the originally perceived quantity, what they did deliver aligned with their goals and met the spirit of the Product Owner’s intent.

Both of these cases should be brought up in the team’s retrospective, and ways to improve discussed. Not small ways, and not ignoring the first team’s behavioral problems. No. All of it—the good, the bad (mistakes and failures), and the significant improvement ideas—should be discussed so the team can decide which points are worthy of their improvement focus in the very next sprint.

But is Failure Embraced?

Continuing with my earlier coaching example, I remember talking not long ago with a group of our Scrum Masters at my “day job,” iContact. If you don’t know Scrum, the Scrum Master is the primary coach, guide, and agile leadership voice within the Scrum team. They’re also responsible for maintaining core agile values within the team and for the team’s overall performance. What I mean by that is guiding the team’s improved performance over time, continually asking questions like: Is the team improving in its overall performance? Is its velocity improving? Is its work quality improving? Is its teamwork and collaboration improving? And is its focus on delivered customer value improving?

So my point to the Scrum Masters was that I felt we hadn’t failed in quite a while. I defined failure in this case as a sprint failure or a stop-the-line incident, where a team basically ran into an impediment and needed to re-plan or re-align their sprint.

They all agreed with me that things had been going smoothly. And I received more than a few questioning stares as to why that was a problem. I tried to be careful in my reply, but my concern was that we might be playing it too safe. That we were becoming complacent in our agile practices and that we weren’t stretching ourselves enough. We weren’t taking chances. And we weren’t taking risks.

I explained that these traits are fundamental to the growth and advancement of agile teams, and that the fact that we weren’t seeing failures indicated we had plateaued in our growth and performance. I felt this was a problem…and I asked if they could drive more failures from the teams.

Can you imagine the remainder of this discussion?

Here I am the Director of R&D at a successful company talking to my team of Scrum Masters and asking them to drive more failure—to influence their teams towards more risk-taking and inspire more stretch goals. The point I’m trying to make is that I truly embrace failure. That I’ve learned to view it as a critical success criterion and that its absence is a problem for me. I wonder how many organizations and leaders have the same view.

The Notion of “Failing Forward”

One of my favorite authors is John C. Maxwell. He’s relatively well known as a leadership coach and is quite a prolific author, having written more than 50 books on various leadership topics. He’s got a strong Christian background to his life and writing, so if you’re not so inclined, don’t let that put you off: he’s mastered the art of leadership. A few years ago he published a book entitled Failing Forward: Turning Mistakes Into Stepping Stones to Success. In it he emphasizes failure as a truly transformative factor in our personal, professional, and team-based lives. But he carefully frames failure with a leaning-forward posture. That is, instead of viewing failure as a negative end-state and feeling sorry for ourselves, we should embrace it as a positive learning experience, “leaning forward” in our failure and leveraging the lessons learned to improve.

I don’t think Maxwell is simply blowing positive smoke in our direction here. History is clearly littered with examples of successes that were inspired, forged, and hardened in the fire of failure—Thomas Edison being a famous example as he persevered to invent the light bulb.

In my agile coaching I consistently use the terminology “fail forward” when I discuss team-based failures. Yes, I want a team to be honest with themselves and acknowledge they failed. But I also want them to embrace their mistakes instead of getting defensive, blaming others or denying it entirely. And I want their posture to be leaning forward. Eager to try something new that will drive different results. Not afraid of failure.

I find that using this terminology helps teams to ‘get’ the nature of failure and to behave appropriately. But beyond terminology, project and functional leadership need to fully support the idea too—meaning the entire leadership team needs to be supportive of failure. There…I said it.

Wrapping Up—But, I’m a Bit Strange

All of that being said, I wonder if I’ve got a strange and largely minority view of failure. I wonder if the right response is indeed to be fearful of it. To deny its existence. To spend countless hours trying to predict it. To never mention it in public. Are those and similar actions the right responses? I really think your insights will be helpful here. I’m trying to get to a root understanding of acceptance, and also the root cause of those views. While I’m particularly interested in agile teams, don’t let a lack of agile experience prevent you from responding.

About the Author: 

Bob Galen is the Director, Agile Practices at iContact and founder of RGCG, LLC, a technical consulting company focused on increasing agility and pragmatism within software projects and teams. He has over 25 years of experience as a software developer, tester, project manager, and leader. Bob regularly consults, writes, and is a popular speaker on a wide variety of software topics. He is also the author of the book Scrum Product Ownership – Balancing Value from the Inside Out. He can be reached at bob@rgalen.com.

Posted in: 
PM-Agile