
Business is the land of the panacea, and IT is its capital. Sensible people … people who would never ask, “What do you think — should I take tetracycline, Vitamin D, or have surgery?” before undergoing the minor inconvenience of a medical diagnosis to determine (a) if they are ill; and (b) if so, with what … ask whether they should undertake CMM, ITIL or COBiT without first having any idea of what is and isn’t working well.

Let’s dispose of something right now: Ignore CMM (the Software Engineering Institute’s Capability Maturity Model).

That, more or less, is the advice given by Capers Jones, one of CMM’s strongest advocates.

CMM is a waterfall application development methodology taken off a cliff. (Come to think of it, waterfalls always go off a cliff, by definition. Oh, never mind.) Had everyone followed CMM, the World Wide Web would never have happened. By the time the first few thousand websites had been deployed … with maybe one tenth the content and functionality of the ones that actually happened … everyone would have lost interest.

According to Capers Jones, CMM is essential for just about any run-of-the-mill IT shop that employs more than 1,000 developers and undertakes 10,000+ function point projects. He has solid evidence that this is the case.

So if you lead more than a thousand developers, it’s just barely possible that you really do need CMM. Probably not, though: even if you do, you rarely have a reason to charter a 10,000+ function point development project, for two reasons.

The first: You’re almost always better off chunking your projects down into a succession of small projects instead. Risk goes down, success is more reliable, and business value happens faster. And (he said, never passing up an opportunity to plug his books and seminars) you can employ the Bare Bones methodology instead of something bulkier.

The second: Most business IT groups spend the majority of their applications effort integrating, configuring, maintaining and upgrading commercial software packages rather than developing from scratch. The square peg of development methodologies has limited relevance when it comes to the round hole of integration and configuration, despite decades of attempts to pound one into the other.

But I digress.

Four major factors determine the success of any IT organization: Business alignment, process maturity, technical architecture, and human performance.

Which of the four is most important? Human performance, without a doubt. As mentioned here in recent columns, the proof is easy, and geometrical in its rigor: Great employees routinely overcome bad or non-existent process, ineffective leadership and governance, and messy technical architecture. Bad employees just as routinely cause even the best process designs to fail while turning elegant architecture into a tangle of spaghetti, and efficient governance into meaningless committee meetings as projects become eternal.

Bad employees don’t routinely cause these failures in the face of excellent leadership, for the simple reason that excellent leaders hire bad employees less often, and when they do, they either coach them to success, reassign them to more suitable roles, or terminate them. Strong leaders don’t tolerate weak employees.

That doesn’t stop us (us being IT Catalysts, my consulting company) from advising clients on how to make their processes more effective. Far from it. One of the questions we often ask struggling managers is how (but really whether) they know if the processes they’re responsible for are healthy.

Am I speaking out of both sides of my mouth?

Not at all. Here’s the distinction: Effective leaders place their emphasis on people. Effective managers insist on delivery, and recognize the importance of process in achieving that end.

If you are responsible for a business function you have to be effective at both leadership and management, and they aren’t independent topics.

Effective management depends on effective leadership for all the reasons enumerated here over the past few weeks and summarized above: Effective leadership puts motivated employees with the right skills, attitude and focus in the right roles, allowing processes to be effective.

Effective leadership is equally dependent on effective management, although the reason is more subtle: Because effective managers know how to monitor and manage processes, they can know what’s going on without having to micromanage.

One other reason for the dependency: If you are in charge of a business function, you have to know how to manage first, because results … process outputs … are what you’re paid for.

Fail to deliver them and your own manager might insist you spend less time leading and more time closely supervising the work.

Did we give up too easily?

We used to automate processes. Back in the early days, before we called ourselves Information Technology, Information Systems, or Management Information Systems, that’s what we did.

We didn’t optimize processes, orchestrate them, or develop metrics to assess their performance. We obliterated them, replacing them with computer programs that called on humans only when an input or decision was required that the computer couldn’t handle on its own.

To be fair, we didn’t always do this all that well. Some of these automations sped up business processes that contributed little business value beyond keeping some people busy who otherwise might have caused all sorts of mischief.

There were also quite a few cases where the program paved a metaphorical cow path, faithfully reproducing in automated form every process step that had previously been executed by a bored human being, even if these steps had nothing at all in common with what an engineer might construct if given the goal and a blank sheet of paper.

But even with these flaws, IT’s predecessors delivered orders-of-magnitude improvements in business efficiency. And then Something happened, and suddenly, overnight, IT became the biggest bottleneck to business improvement.

My theory is that Something = Methodology, but perhaps I’m being unfair. I’m not, after all, a business historian, so while I lived through some of the mayhem, my personal mayhem experience isn’t a statistically significant random sample.

Based on my personal experience, direct and second-hand through colleagues, here’s the way process automation happened back in the good old days we neocodgers see in the warm glow of imperfect memory:

Someone from the business would drop by EDP (electronic data processing, IT’s ancient forebear), sit on the corner of a programmer’s desk, and ask, “Can you get the computer to do x?”

After a short discussion the answer was either yes or no, and if it was no, a longer discussion usually led to a useful alternative the computer was capable of.

The programmer would go off and program for a week or so and call the business person back to review the results and suggest course corrections. In not all that long the computer had automated whatever it was the business person wanted automated.

Usually, in the interim, other notions occurred to the business person, who, while reviewing how the solution to the initial request was progressing, would ask, “Can you also get the computer to do y?”

Over a span of a few years these solutions to business problems accumulated, turning into the big legacy systems many businesses still rely on.

If we’d only had the wit to call what we were doing a methodology and label it Agile.

Had we done so we might have avoided quite a lot of the discreditation that happened to IT during the years of waterfall wasteland that, starting in the early 1980s, transformed us from the department that automated stuff, generating enormous tangible business benefits, to the Department of Failed Projects.

For that matter, had we continued in our quest to automate the bejeezus out of things in our naively Agile sort of way, disciplines such as Lean and Six Sigma might never have achieved their current level of prominence.

Not that Lean and Six Sigma are terrible ideas. In the right contexts they can lead to solid business improvement.

What they’ve turned into for some businesses, though, are Strategic Programs, and religions for some practitioners. For these devoted adherents they’re the answer to all questions, before actually asking the questions.

What they’ve also turned into is a sort of IT-less shadow IT — a way to improve business processes without any need to involve IT, and, more important, without having to ask the Department of Failed Projects to deliver very much.

Let’s imagine the executive team at some enlightened (or misguided — your call) company reads the above and, convinced, calls to ask how best to return IT to its process automation roots. What would a modern version look like?

Mostly, it would treat each process as a black box that turns inputs into outputs. IT’s job would be to understand what the inputs and outputs are, and to develop, through a combination of inference and listening to the experts, an algorithm that reliably turns the defined inputs into the desired outputs.

That’s it — the entire methodology in one paragraph, understanding that “algorithm” can hide a lot of complexity in its four syllables.
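To make that black box concrete, here’s a minimal sketch in Python. Everything in it (the invoice-approval example, the field names, the dollar threshold) is invented for illustration, not drawn from any real system: the process is modeled as a plain function from defined inputs to desired outputs, and the worked examples the experts supply double as the acceptance test.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical invoice-approval process, invented purely for illustration.
# The fields and the $5,000 threshold are assumptions, not anyone's real policy.

@dataclass
class InvoiceInputs:
    amount: float
    po_number: Optional[str]   # purchase order number, if one exists
    vendor_approved: bool      # vendor is on the approved list

@dataclass
class InvoiceOutputs:
    approved: bool
    reason: str

def process_invoice(inp: InvoiceInputs) -> InvoiceOutputs:
    """The 'algorithm': a rule inferred from listening to the experts.

    The process itself stays a black box; all IT commits to is that
    these defined inputs reliably produce the desired outputs.
    """
    if not inp.vendor_approved:
        return InvoiceOutputs(False, "vendor not on approved list")
    if inp.po_number is None and inp.amount > 5_000:
        return InvoiceOutputs(False, "no purchase order for a large invoice")
    return InvoiceOutputs(True, "meets policy")

# Worked examples supplied by the experts double as the acceptance test.
assert process_invoice(InvoiceInputs(1_200.0, None, True)).approved
assert not process_invoice(InvoiceInputs(9_800.0, None, True)).approved
assert not process_invoice(InvoiceInputs(300.0, "PO-123", False)).approved
```

The point isn’t the rule itself. It’s that once the inputs and outputs are pinned down, “automate the process” reduces to writing and testing one function, however much complexity that function ends up hiding.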

Might this work? Beats me. It’s an idea I’m just starting to play with. Next week I might strenuously disagree with this week’s me.

Don’t hesitate to weigh in.