The Seven Deadly Myths of Mobile

Recently I had the chance to chat with Josh Clark, one of the biggest names in Mobile Web Design, and keynote speaker for The Atlanta Drupal Business Summit, on how companies can unlock mobile for online success.

In the interview, Josh answers questions about mobile strategy and design such as:

  • What are the biggest hurdles companies face when “going mobile”?
  • What new mobile standards and technologies will Drupal need to embrace to stay relevant?
  • What are the next phases of responsive design?

You can listen to the interview here.

Josh Clark will keynote The Atlanta Drupal Business Summit, taking place at the Cobb Galleria on Friday, October 26, with The Seven Deadly Myths of Mobile. The Business Summit brings together business leaders and technology strategists to discuss business solutions built with Drupal and to share real-world examples, lessons learned, and best practices.

“Drupal is the leading open source content management platform for business benefits, rapid technology advancement and long-term ROI,” says Jim Caruso, CEO of MediaFirst and an avid Drupal Association member.

Georgia boasts impressive recent adoptions of Drupal. The metro-Atlanta area is home to Turner Broadcasting System, Inc., which has been gradually migrating its roster of sports web properties onto Drupal. In the public sector, the State of Georgia is wrapping up a highly successful migration to Drupal from a proprietary platform, saving about $5 million a year in the process. In higher education, Drupal has exploded in Georgia: Georgia Tech uses Drupal on over 40 departmental sites, and the University of Georgia, Emory University, and Kennesaw State are all leveraging Drupal.

You can view the full schedule for the Drupal Business Summit and register at:

About the Author: 

Adam Waid is the Director of Marketing at Mediacurrent, an industry leader in helping organizations architect custom Drupal websites. Adam is also a MATRIX alumnus, where he worked closely with the Sales and Recruiting organizations to develop differentiation strategies, create content, and drive CRM and social media initiatives with a single goal in mind: build stronger, more meaningful relationships with clients. Leveraging new technology, the latest social media trends, and a good mix of traditional marketing, Adam grows online communities.

Follow Adam on Twitter and Read his Social Media Blog.


Crazy Counter Offers and the War for Technology Talent

We have all heard the saying many times. A quick web search returns countless articles and statistics thoroughly detailing why it is a horrible idea. Good recruiters will remind you early and often throughout the process of looking for a new job. “Never accept a counter offer!”

Despite the negative connotation and the wealth of data available, the number of counter offers being accepted by technology professionals has never been higher. So what is leading to these jaw-dropping decisions that leave hiring managers and recruiters alike crying, deflated, and exasperated? Here is some insight from an insider, as I explain what I see.

IT Employment at Record Levels

Simply put, the market is hot. According to the latest report from TechServe Alliance, the number of IT jobs in the United States continues to grow, reaching yet another monthly all-time high in August of 2012. The IT index is up 3.4% year over year, which means more jobs are being filled. However, thousands upon thousands of skilled IT jobs remain open at some of America’s most respected companies. Across the country, the shortage of talent is causing project delays, resulting in unexecuted strategies and lost market share. Companies are getting desperate and simply can’t afford to lose key technical talent. Where we once observed companies make counter offers to only the very top tier of talent, we now see an “any means necessary” retention strategy used on much larger percentages of the IT department. This has also led to diminished fears of “If I accept the counter, they will let me go once they find a replacement.” Right or wrong, talented techies feel like they won’t be replaced. Last month a talented engineer who accepted a counter told me, “If they want to replace me, good luck. If my skills were that easy to replace, would I be getting multiple calls from recruiters every day?”

Evolution of Sourcing: An Increase in Unmotivated Candidates

A recruiter’s ability to identify and engage talent has drastically improved in recent years. As adoption of social media has increased by IT professionals, so has the candidate pool for recruiters. You may not be looking for a new job or in my database, but if you have a profile on LinkedIn or Facebook, send an occasional tweet, check in on Foursquare, or even register for local user groups on Meetup, any good recruiter can find and communicate with you. Today everyone is a candidate and almost everyone is open to listen. Ultimately, this leads to more candidates being sold on potential opportunities when they were not truly motivated to make a change prior to conversations with a recruiter. When they go to give notice and their employer responds with an even better counter, the decision to stay is not difficult.

Crazy Counter Offers

Everyone has read the insane stories on Techcrunch, including Neal Mohan turning down a key product role at Twitter after Google gave him $100 million in stock to stay. Another article says 80% of Googlers agree to stay when offered a counter. Regardless, the phenomenon is no longer limited to Silicon Valley’s elite. When I recently polled MATRIX recruiters across the country, I found that counter offers in excess of $10.00 per hour, and $20K annually, are being made in many markets. Recently in Dallas, we had a candidate accept a counter that included a promotion, a $30K raise, and full work-from-home flexibility.

It will be interesting to see how this all plays out in tight-knit local IT communities. Eventually, the demand will settle; it always does. Will those who accept counter offers be viewed by their peers as flaky bridge-burners? Or will they simply be viewed as opportunists who maximized their market value?

About the Author: 

Justin Thomason is the Director of Recruiting for the MATRIX Western region. His expertise includes hiring, training, and leading world-class recruiting organizations. With a focus on innovative delivery strategies, Justin's recruiting teams specialize in leveraging social media to develop lasting relationships with talented IT professionals.

Posted in: 
Job Seeker

The Agile Project Manager—Don’t Throw out the PMBOK!

I must admit that I’m a fairly rabid agilist. Part of the rationale for that is the pain I’ve experienced in leveraging traditional PM techniques in software projects. Another influence is my experience dealing with traditional leadership and the dysfunction relating to driving projects and teams towards unrealistic goals.

What’s interesting though is a conversation I had with our Scrum Master team the other day. I asked them to act more like “traditional project managers”. To begin to...

  • Be more prescriptive at times with their teams; demanding high quality, excellent software, and adherence to our agile principles.
  • Pay close attention to risks – planning, real-time surfacing and guiding team reactions.
  • Encourage the teams to improve at an accelerated rate; to set the bar high for improvement.
  • Become visible as leaders and spokespersons for their teams; to do a better job of socializing state & progress.
  • Take the role of impediment resolution to the next level; mining for impediments…and dependencies, then inspiring action.
  • Cheerlead for their teams; inspire and demand that the Product Owner does the same.

What I was trying to convey to them was the ‘mindset’ of a “good Project Manager”…or at least the ones I’ve seen and collaborated with in my journey. You see, many agilists use the role of Project Manager as a bit of a verbal “punching bag”—implying that there is no need for them in an agile context. By the way, you see these same folks trivializing other roles too—functional managers, testers, and business analysts to name a few.

I can’t disagree more with these folks and that position. I think solid Project Managers can find a place on agile teams…a place where they make a HUGE difference. Yes, they might need to reframe their style and behaviors for an agile context, but please, please, please don’t throw away all of your approaches. Your teams absolutely need and will appreciate your skills, as long as you reframe appropriately without throwing out your essence.

The “Agile Way”?

An anti-pattern that often shows up in agile teams relates to managers and project managers losing their way when it comes to knowing when and where to engage their teams. It turns out that knowing how to effectively handle the fuzzy and scary notion of a “Self-Directed Team” can be quite challenging.

A common reaction is to treat the team as if walking on eggshells. If you see the team heading for a cliff, you can’t really say anything—they’re “self-directed”. Or if you do say something, you must whisper…quietly…hoping that someone might hear you.

I once coached a team in Toronto. As is my typical practice, I gave them a quick Scrum overview, then planned and kicked off their first sprint. I stayed for a few days to ensure they were going in the right direction, and then I went home. I came back at the sprint transition just to see how things were going. In my first morning Scrum upon returning, one of the developers sort of “yelled at” their functional manager who was attending as a ‘Chicken’.

The team was stuck at a technical impasse and she said: “You better step in and tell us how to handle this, or I’m going to scream”. I was taken aback and after the stand-up I asked her what was up.

She said that ever since the sprint started that none of the functional managers were saying anything—nor providing any guidance whatsoever to their teams. And that she was sick and tired of it. She wanted help! I then asked the functional managers what was going on. They said that they were only doing what I’d told them—that the team was “self-directed” and that they were to keep quiet…being ‘Chickens’ if you will.

Ugh! I thought as I smiled a wry smile to myself. Yes, I had told them that respecting self-direction is important. But that doesn’t imply that you don’t have a role and responsibility as the teams’ functional manager. You certainly don’t let them crash into a wall without yelling and warning them. You see, the managers missed the nuance of leading within agile teams, as many roles do. They mistakenly behaved as if they were marginalized and didn’t matter…when nothing could be further from the truth!

So—Which Way Do We Go, George?
It’s Situational & Skills Matter

Always remember that your agile PM role is situational. You’ll want to keep the values (Lean principles, Agile Manifesto Principles & Practices, Essence of your specific methods, Quality, and focus on Team) core in your thinking, but at the same time very much react to situations as you always have—with simply some ‘adjustments’.

As part of being situational, always remember that the teams’ skills & experience matter quite a bit in how you should react. If you have a brand spanking new team, then you probably want to provide more prescriptive guidance. If you have a master-level team, then your job is to softly guide & support them, but truly get out of their way. The Dreyfus model of skill acquisition is a good model to become familiar with to conceptualize the various team levels that you might encounter and to guide your adjustments.

Risk an Opinion
As in the above story, your teams still need leadership—leadership that provides clarity, vision, missions, and goal-setting. Leadership that is “in the game or trenches” with them. Leadership that endeavors to protect them and to shield them from major obstacles and mistakes. Leadership that is supportive and encouraging. Leadership that is in all cases, well, leading them…

In a word, you should evolve towards a more Servant-Leadership style. But you also need to share your thoughts with your team. Risk saying how you feel and what you’re concerned about, but then allow the team to take risks and chart their own paths. Risk telling the team ‘No’ if you feel they’re on a destructive path, and be prepared to also tell them ‘Why’. Finally, risk ultimately becoming a part of the team and sharing their responsibilities.

Leverage your Instincts
As I’m writing this, my company iContact is making a fairly major release of our eMail Marketing software platform. We’ve been adding social capabilities for several months and are now exposing them via this release.

One of the things we struggled with was how to turn the new functionality on for our 70k+ customers. Do we do it all at once, or in a more measured way that mitigates risk and lets us see how the new functionality ‘behaves’ under load? There were two schools of thought across the teams—release it ALL at once, or release it incrementally. Most of the teams had an ALL perspective, as did our QA team members. However, there were a few in the development organization who wanted a more controlled release and argued for that option. Initially they were considered naysayers, only reacting to FUD, but to our credit—we listened to them.

After much discussion, we opted for a controlled roll-out. While we didn’t encounter huge problems as we ramped up, the approach allowed us to better understand our usage metrics, plan for incremental use, and have time to fix a few lingering issues. In the end, our overall risk-handling instincts proved to be the right way to go. I’m glad the few had the courage to “speak up” and that we trusted their instincts.
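A controlled roll-out like the one described above can be as simple as a deterministic percentage gate. iContact's actual mechanism isn't described here, so the sketch below is purely illustrative; the class and member names are assumptions.

```csharp
// Hypothetical percentage-based gate for exposing a feature incrementally.
public class FeatureGate
{
    private readonly int _rolloutPercent; // 0..100

    public FeatureGate(int rolloutPercent)
    {
        _rolloutPercent = rolloutPercent;
    }

    // Deterministic per customer: the same customer always gets the same
    // answer, and raising the percentage only ever adds customers.
    public bool IsEnabled(int customerId)
    {
        return (customerId % 100) < _rolloutPercent;
    }
}
```

Starting at a small percentage and ramping up is what buys the team the usage metrics and the time to fix lingering issues.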

Ceremony & Reporting Matter
Remember that even agile teams still bear a responsibility to integrate back into the organization. They need to be transparent and communicative—and not simply in agile terms. It’s not sufficient to simply say “come review our burndown chart” or “just attend our Daily Scrum” if an executive or stakeholder asks you or the team for status.

Sure, that is a mechanism or ceremony set up for this sort of communication. But what if that stakeholder doesn’t show up? Does that alleviate your communication responsibilities? Of course not! So beyond the information radiators, a PM can ensure the team is effectively communicating broadly across the organization.

Another important point is communicating in ‘terms’ that the business can understand, whether that means reports, data, videos, or whatever it takes to represent the teams’ progress and efforts.

I’ll even go so far in this post to say that many of the ‘traditional’ principles and techniques from the PMBOK shouldn’t be “thrown away” from an agile perspective. Let’s take the notion of critical path for instance. In larger agile projects, with multiple teams, there still are plans that evolve. And within that framework keeping the critical path of work in-sight across teams can be a crucial visibility point.

As can asking the team to manage risks, or creating a project charter, or establishing effective milestones for cross-team integration. So please don’t throw away or ignore these skills if you “Go Agile”. Just transform them (and yourself) a bit and then trust your instincts in the situations that emerge.

Wrapping Up
I want to wrap up this post with a caution though—traditional Project Managers DO need to reframe themselves and their approaches in an agile context. Throw away your templates and checklists that prescribe a specific approach to all projects & teams. Instead, you must become context-based and situational in your approaches.

You must also engage your teams, not as an execution monitor or policy cop, but as a true partner.

I’ll leave you with the following two charts that nicely illustrate some of the focus and tempo changes that occur between Traditional & Agile PM activities.


Agile Project management

Agile Integration Management

So, Project Managers—may you navigate these waters well and engage with your teams. They NEED YOU!

About the Author: 

Bob Galen is the Director of Agile Practices at iContact and founder of RGCG, LLC, a technical consulting company focused on increasing agility and pragmatism within software projects and teams. He has over 25 years of experience as a software developer, tester, project manager, and leader. Bob regularly consults, writes, and is a popular speaker on a wide variety of software topics. He is also the author of the book Scrum Product Ownership – Balancing Value from the Inside Out. He can be reached at


Documentation: Where and When?

In my experience, documentation budgets are usually small to non-existent on smaller projects.  Nobody thinks that code or other objects should be documented.  But documentation is important since it is easy to forget what (and more importantly why) developers do what they do.  Documentation permits a developer’s work to be understood by other developers who might join the project or even by herself at a later date. 
The problem is that documentation is usually a low priority. Ironically, those most needing their projects documented often resist it because they are short on development time, money or both.  They are the clients.  And developers are often willing accomplices.   After all, documentation is tedious.   And it has to be revised frequently as the code changes.
Documentation should be done at several different levels: directly in the code, at the top level of each object and then of course in high-level technical and user documents.  The foundation of all documentation is to use proper naming conventions and self-documenting object names.  (Note: self-documenting object names can be tough due to limitations in object name length. For instance Oracle has a 30-character limit.  At other times, an object’s purpose may change over time.)
Naturally, documentation should explain what the code does.  Furthermore, documentation should include the purpose of the code: in other words why does the code exist? 
Very importantly, explain why something was coded the way it was if there was more than one possible approach. If we used A+B=C when we also could have used D+E+F=C, then why did we use A+B? I have spent many hours trying to figure out someone else’s undocumented code, and then why they used the approach they did. Worst of all, sometimes I have had to reconstruct why I used the approach I did, because I was under the gun to move on to something else and didn’t document the way I should have. So always explain to your client that documenting something now will save her many hours of exploration time down the road.
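For instance, a “why” comment might look like the fragment below. The scenario, names, and ticket reference are invented purely for illustration.

```csharp
// WHY: We total the line items ourselves rather than calling the vendor's
// GetInvoiceTotal(), because that routine rounds each line to two decimals
// first, which produced penny discrepancies on large invoices. (NS, 10/2012)
decimal total = 0m;
foreach (var line in invoice.Lines)
{
    total += line.Quantity * line.UnitPrice;
}
total = Math.Round(total, 2);
```

Note that the comment records the rejected alternative and the reason, not just what the loop does.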
Here are some suggestions for better documentation:

  • Avoid cryptic abbreviations or confusing notes or jargon.
  • Use a standardized look and feel for your documentation.
  • Date and initial your notes.
  • Reread what you have written.  Does it make sense to you?  Does it make sense to another developer?  Does it make sense to your client (assuming she has some experience in reading code)?  Is it grammatically correct? 
  • Unleash your inner Hemingway: if you can say something with fewer words, then do! 

Have fun!

About the Author: 

Narayan Sengupta has been building databases for almost 20 years.  He enjoys spending his spare time with his daughters, traveling, and making presentations about American World War I and World War II history. He can be reached at


Dependency Injection in ASP.NET MVC - Part 2

Welcome to part two of "Dependency Injection in ASP.NET MVC". In part one, we discussed the basics of dependency injection (DI). Please refer to part one here if you need a quick refresher on DI. Now that we have covered the basics of DI, let’s take a look at implementing DI in an ASP.NET MVC application. For the purposes of this article, I will be using ASP.NET MVC 3 and Ninject. In the application, we will create a set of data repositories and inject them into an ASP.NET MVC controller at run-time. This pattern will allow us to cleanly separate our data access logic from our presentation code, and set us up for unit testing. I will be using a simple college course catalog as our domain so we can focus our efforts on the DI implementation. There will be a single view with a simple bulleted list of course numbers and names. All of the data will be mock data defined in the code directly.

First, I am going to create a new ASP.NET MVC 3 web application in Visual Studio 2010. I selected the basic MVC 3 template and ended up with the solution below. I will not be using the Account features of the template, so I went ahead and deleted the Account controller and its associated View folder.

[Image 1: the new ASP.NET MVC 3 solution in Visual Studio 2010]
Now that we have our project in place, we need to pull Ninject into the project. The NuGet package manager makes this a very simple task. There is a NuGet package made specifically for Ninject integration into an ASP.NET MVC project. From the NuGet Package Manager Console, execute the following command to install the Ninject.MVC3 package.
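The original screenshot showed the console; the command itself is the standard install for the package named above:

```powershell
Install-Package Ninject.MVC3
```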

[Image 2: installing the Ninject.MVC3 package from the Package Manager Console]
What did that do for us? Notice that you will now find an App_Start folder in your project. Take a look at the NinjectWebCommon.cs file inside of the App_Start folder. This class is responsible for wiring up the Ninject kernel and ensuring that we can inject dependencies into our controllers automatically. The kernel is the component that actually handles creating instances of objects during the injection process. Take special note of the RegisterServices method. We will come back to this in a moment.

Now we need to create some additional items before we can complete the dependency injection configuration. We will need to define an interface for our course catalog repository called ICourseRepository. Additionally, we will define two implementations of ICourseRepository. Ultimately, we will be injecting an implementation of ICourseRepository into a new controller called CoursesController. You will notice that in both implementations I am using mock data. In a true implementation, we would have SQL Server- or Oracle-specific ADO.NET code. Alternatively, this would be a great place to plug in your favorite ORM, such as Entity Framework or NHibernate. For now, our focus will be on the SqlServerCourseRepository implementation. We will talk about the Oracle version at the end of this article.

Here is the repository code.
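The original listing was a screenshot; here is a sketch of what it likely contained, based on the description. The mock course data and the Course property names are assumptions, not the article's actual values.

```csharp
using System.Collections.Generic;

public interface ICourseRepository
{
    IList<Course> LoadAll();
}

// Mock data stands in for SQL Server-specific ADO.NET code.
public class SqlServerCourseRepository : ICourseRepository
{
    public IList<Course> LoadAll()
    {
        return new List<Course>
        {
            new Course { Number = "CSC101", Name = "Intro to Programming (SQL Server)" },
            new Course { Number = "CSC201", Name = "Data Structures (SQL Server)" }
        };
    }
}

// Mock data stands in for Oracle-specific ADO.NET code.
public class OracleCourseRepository : ICourseRepository
{
    public IList<Course> LoadAll()
    {
        return new List<Course>
        {
            new Course { Number = "CSC101", Name = "Intro to Programming (Oracle)" },
            new Course { Number = "CSC201", Name = "Data Structures (Oracle)" }
        };
    }
}
```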

[Image 3: the course repository code]
Notice that we have an interface defined and then two implementations called SqlServerCourseRepository and OracleCourseRepository.  The interface features only a single method called LoadAll. The implementations have some basic hard coded data returning an IList<Course>. Course is a simple two-property class as defined here.
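The Course listing was also a screenshot; a sketch of a simple two-property class consistent with the description might be (property names are assumptions):

```csharp
public class Course
{
    public string Number { get; set; }
    public string Name { get; set; }
}
```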


[Image 4: the Course class]
Next, we need to build a basic controller with a single Index action that returns an IList<Course> as the model. Notice that I have included a constructor in the controller that accepts an instance of ICourseRepository. As I mentioned in part one of this article, we are now set for constructor injection. In this case, our controller has a dependency on ICourseRepository and we are going to use Ninject to “inject” an instance into our controller via the constructor. The Index action is quite simple. We call the LoadAll method of the ICourseRepository instance and return the Index view.
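Since the controller listing was an image, here is a hedged reconstruction from the description above; member names are assumptions.

```csharp
using System.Collections.Generic;
using System.Web.Mvc;

public class CoursesController : Controller
{
    private readonly ICourseRepository _repository;

    // Ninject injects the concrete repository here at run-time.
    public CoursesController(ICourseRepository repository)
    {
        _repository = repository;
    }

    public ActionResult Index()
    {
        IList<Course> courses = _repository.LoadAll();
        return View(courses);
    }
}
```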

[Image 5: the CoursesController code]
Finally, we have some very simple view markup for displaying our courses in a basic un-ordered list.
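The view markup was a screenshot as well; a minimal Razor sketch matching the description (an un-ordered list of course numbers and names) might be:

```cshtml
@model IList<Course>

<ul>
    @foreach (var course in Model)
    {
        <li>@course.Number - @course.Name</li>
    }
</ul>
```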

[Image 6: the Index view markup]
There is one final configuration task that we need to complete. Jump back to the RegisterServices method in the NinjectWebCommon class in the App_Start folder. This is where we define or register our interfaces and what concrete implementations they map to. For now, let’s just use the SqlServerCourseRepository. The line below basically says, “If the Ninject kernel receives a request for an instance of ICourseRepository, then return an instance of SqlServerCourseRepository.”
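The binding shown in the screenshot uses Ninject's standard fluent syntax; inside RegisterServices it looks like this:

```csharp
private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<ICourseRepository>().To<SqlServerCourseRepository>();
}
```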

[Image 7: registering the SqlServerCourseRepository binding in RegisterServices]
If we run the application now and navigate to the Course Index, we will see this.


[Image 9: the Course Index page listing the SQL Server mock data]
So what actually happened?

  • We configured our application to use Ninject via the Ninject.MVC3 Nuget package. This package handles nearly all of the Ninject wire-up to support dependency injection into our controllers via constructor injection. For the sake of this example, let’s just say that Ninject is now in charge of our controller factory. This was all part of the code that the Ninject.MVC3 package added to our project.
  •  Via the RegisterServices method, we told Ninject to use an instance of SqlServerCourseRepository whenever an instance of ICourseRepository is required.
  • When we request the Index of the Courses controller, Ninject handles creating an instance of the controller. It inspects the constructor and sees that, in order to create a controller instance, it must provide an instance of ICourseRepository. The Ninject controller factory checks the kernel for a binding for ICourseRepository, which it finds. The kernel spins up an instance of SqlServerCourseRepository and uses it to create the controller instance.
  • Finally, we call the LoadAll method of our controller instance and return the data to the Index view.

Now that we have gone through all of this trouble to configure our application, you may be wondering why. By adhering to the principle of SoC, or Separation of Concerns, we have decoupled our presentation code (Controllers and Views) from our data access code (SqlServerCourseRepository). Notice that in the controller, we only interact with an interface, ICourseRepository. The controller never actually knows what type of repository we are using, since the instance is generated and “injected” at run-time.

Suppose our company is acquired by an Oracle-friendly organization. The new CIO decides that SQL Server is inferior to Oracle (yes, this has happened to me more than once) and we need to convert all of our applications to Oracle immediately. In our fictitious example above, we can do this with no impact on the presentation code, since we are using a DI container combined with the Repository pattern. We would need to define new repository implementations that are Oracle-aware, which we already did at the beginning of this article. Normally, you would not do that until it was actually required, but it works well for this example. Let’s visit the RegisterServices method again. Notice that we are now binding ICourseRepository to OracleCourseRepository.
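The swap shown in the screenshot is a one-line change to the binding:

```csharp
private static void RegisterServices(IKernel kernel)
{
    kernel.Bind<ICourseRepository>().To<OracleCourseRepository>();
}
```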

[Image 10: binding ICourseRepository to OracleCourseRepository]
And if we run the app again…

[Image 11: the Course Index page listing the Oracle mock data]
We are now serving up data from our mock Oracle repository without changing a single line of controller code! While this was a very simple example, it demonstrates the importance of SoC and just how easily it can be accomplished via dependency injection. Dependency injection frameworks such as Ninject facilitate SoC in your application architecture via a simple NuGet package and a handful of conventions.

In this article, we used DI to cleanly separate our data access code from our presentation code. This is only one application of dependency injection, albeit a common one. There are many other scenarios that lend themselves well to DI use, including dependency handling for MVC action filters. This is definitely a tool that should be in every developer’s toolbox. For additional reading on DI, I highly recommend the excellent Ninject documentation at 

Happy programming!

About the Author: 

Russell Thatcher is a Software Architect for a medical software provider. He possesses over 10 years of software development experience across a variety of industries, including healthcare, finance, and defense. With expertise in the latest Microsoft technologies and Agile development practices, Russell consistently delivers high-quality, on-target software solutions for his clients.
