In our first podcast, Robert and I discuss some common pitfalls to avoid when implementing CRM On Demand. This is a handy list to keep in mind when planning and executing a CRM deployment. Along the way we share some stories from our work over the past several years in CRM On Demand consulting at Siebel and Oracle.

Commencing Configuration Work Without Complete Requirements and Validated Design

When I was running our CRM On Demand practice at Siebel, I worked with a large financial services company that had a very specific idea of what they wanted. And it was expressed through fields and picklists. They literally demanded that we just start creating fields rather than go through a structured requirements and design process – they saw no value in those exercises. I think this case illustrates two issues – the failure of the client to recognize that these fundamental steps are important and the failure of the implementer to effectively explain the value.

Robert had a similar experience consulting for a transportation company. Same thing. Spreadsheet full of fields. Seriously nasty looking. While we talked about the goal of user adoption, we ended up with screens packed with pages and pages of data. This does NOT drive user adoption. That was actually the primary complaint about the system we were replacing! So why do it again?

We convinced them to reconcile the design with the business objectives before opening up the pilot. This led to some compromises and significantly leaner page layouts. In the pilot, the users were very happy. They didn’t seem to miss all of those other fields.

The fundamental lesson here is that a design – one that supports a defined business process – is more than the fields on screen. If a validated design exists, then field-level configuration decisions can be informed by it, ensuring that every field has purpose and meaning for the end user.

Failure to Obtain Business Sponsor Sign-off on Requirements & Design

Back at Siebel we had a client that needed a very fast implementation. The project was IT-led: the requirements were communicated by IT and quickly approved. We ran a lot of small, fast projects in those days, and this one ticked along pretty smoothly.

I think everyone considered the project a success at go-live. Budget and timeframes were met. But about 4 months after go-live I got a call from the VP of Sales. He was livid. The system didn’t work, didn’t meet their needs, users were abandoning it. What went wrong?

This is a classic case of the business handing over total control to IT and then being surprised when the results aren’t exactly what they expected. This VP was supposedly our Executive Sponsor, but he was absent for the entire project, so his “vision” was likely communicated to his IT lead only briefly. I think the IT team did their best, but ultimately they treated it as a data conversion project. So at those key milestones – finalizing the requirements and the application design – there was no sign-off from the business sponsor. That sign-off clearly would have surfaced the issues before go-live.

In the end, we sent in a consultant to run a workshop with the VP and sales staff and it turned out only modest changes were needed. But the cost to both us and the customer was significant.

Failure to Execute Review Checks Against Business & Project Success Measures at Completion of Design Stage & Configuration (System Validation)

This is probably the most often overlooked area – things are going well, progress is visible and tracking to plan, so why stop for a checkpoint? This fits very well with the last two points. An executive review confirms that the design and the configuration (two separate things) meet the criteria we set at the beginning of the project.

I had a customer who, two days into go-live, was told by some key players in the user community that the application was worthless. Man, that was a slap in the face. The initial reaction was to blame the software. After a few serious sit-down sessions with these key players, who had had no input into the application design, we found a couple of places where the implementation completely missed some key user criteria. Interestingly, one of the key business objectives was user adoption, but the team had been in too big a hurry to follow much of a process. Without any consulting help at all, they had slapped something together, and it was nearly fatal.

I like to think of any project as a journey – moving from one point to another along a route. We can often get very focused on the small steps we’re taking day to day and feel that we’re really making progress. This can inadvertently lead to minor missteps that take us off course. We have to pull our heads up and look at the big picture – where are we on the map? Are we still tracking to the route we planned at the beginning? Having checkpoints defined in your plan – and actually executing them – can keep you from straying too far and ensure you reach your destination.

Automating Every Exception and Process Step

I think of this one as the perfect being the enemy of the good – essentially, trying to do it all at the cost of getting anything done. We’ve probably all been involved in projects with requirements like this. In my case, I can recall a specific customer who was implementing CRM On Demand for sales force automation. They actually had pretty straightforward needs and no real system in place, so the field organization was screaming for something to help them track their deals.

But the project continually got bogged down in the desire to insert an inventory management and complex parts-tracking system at every sales stage. Then marketing automation became a requirement. Mind you, this was a business of fewer than 50 users. Months were wasted in discussions that ultimately ended with no usable system.

More recently, Robert and I sat on a half dozen calls, about two hours each, going through a customer’s requirements – items they considered “must haves”. It was interesting to hear the customer’s IT team try to push back, but the management of the user community was convinced they could make the application foolproof. Ultimately, all of that helpful automation would have slowed the system down and created all sorts of other issues as exceptions to these rules started popping up. In the end, there’s really no substitute for effective end-user training and management reports to help enforce good behavior.

Automating process steps certainly has its place. And CRM On Demand allows a lot of it. It’s important to balance usability, business goals, and cost when deciding where to invest in automation.

Configuring Custom Reports Before Completing Base System

And speaking of reports, how about building out a TON of reports before the data model has even been finalized? I’ve had a few situations where, once the initial layouts were presented to users, they asked for fields they felt were key to their processes – fields that, consequently, were also key to a couple of the reports they wanted. It wasn’t a catastrophic mistake, but it did cost us several hours rewriting those reports.

It’s just as bad to have no idea how you’re going to use reports to monitor and manage behavior. It’s rare that a manager can use the user interface alone to look at the data and manage his employees; most often, a few key reports make that process a lot easier. After a design review with one customer, it was clear they had absolutely no idea how managers would use the system to manage a few key processes. In the end, we identified a few out-of-the-box reports plus two custom reports that would get them started.

Approaching Configuration With A Siebel On-Premise Mindset

This is an interesting issue that we see with larger customers, who are increasingly the ones implementing CRM On Demand. Many times they already have Siebel in place somewhere, or key people on the implementation team have Siebel on-premise experience.

Rather than approach the project as a Software-as-a-Service implementation, they treat it like any other Siebel project: just start configuring and coding until the application does exactly what you want. It might take two years, and it might require adding some new hardware, but you can make Siebel CRM do pretty much whatever you want.

That approach almost totally negates the main value proposition of Software as a Service. If you need to write a lot of custom web services and interfaces, you’re probably not implementing very quickly, and you’re not going to fix bloated code and processes by throwing hardware at a hosted application. Unlike an on-premise enterprise implementation, there are a lot of variables and constraints that you have to honor.

CRM On Demand was built around certain business processes and best practices. Rather than start with a blank sheet of paper and say “this is what we want” (as is done in many enterprise deployments), effective implementations start by gaining a deep understanding of what those in-built processes and practices are, then determine how best to leverage them.

This both speeds the implementation and reduces costly customization.

In Summary…

None of the pitfalls mentioned are necessarily catastrophic. In our experience, there are very few outright CRM failures; it’s really more a matter of how much time and money are expended before the value of the system starts to accrue. Keeping these pitfalls in mind – avoiding them where you can, and watching for signs that you may be encountering one – can help keep your implementation on track.
