Beware the “Tom Peters Fallacy.” As identified in this space back in 2007, it goes like this:

  1. Find a great organization.
  2. Identify a trait in that organization you like.
  3. Decide that this trait is what makes that organization great.
  4. Declare that this trait is the panacea for all other organizations.

This week’s perpetrator is the estimable Jeff Bezos. Mr. Bezos started with the dream of selling books online and turned it into the … that’s the, not a … retailing behemoth and the most important cloud computing platform.

And so disagreeing with Bezos about part of his success formula calls for caution.

No, this isn’t a commentary on how Amazon treats its employees. That’s well-plowed turf. It’s about Bezos’s approach to organizational decision-making.

In a wide-ranging interview on the Lex Fridman Podcast, reported by Business Insider’s Sawdah Bhaimiya, Bezos asserts that compromise is a bad way to resolve disagreements. It’s bad, he says, because it takes little energy, but “doesn’t lead to truth.”

Start here: Leaders have five ways to make a decision in their toolkit: They can (1) make it (authoritarianism); (2) make it after talking it over with folks worth talking it over with (consultation); (3) persuade and influence everyone involved to agree to a solution (consensus); (4) give up on consensus and let stakeholders vote on their preferred alternative (voting); or (5) ask someone or other to make the decision for you (delegation).

When Bezos talks about compromise, he’s talking about doing what’s needed to get to consensus. He starts out wrong because if there’s one universal truth about consensus decision-making it’s that consensus takes far more time and effort than any other way of getting to a decision.

But how about the getting-to-the-truth part?

To be fair, when it comes to his strawman case – deciding how high a room’s ceiling is – he’s right on target: A tape measure yields results superior to compromise. But then, it’s superior to any of the five listed decision styles because, also to be fair, direct observation doesn’t count as a decision, unless you live in a space-time continuum in which alternative facts hold sway.

More significantly, delve into the branch of philosophical inquiry known as epistemology, or just review Plato’s cave allegory, and, in addition to acquiring a migraine, you’ll figure out that none of us has access to “the truth.” We can approach it asymptotically (add Karl Popper to your reading list), but so far as the truth is concerned, knowing the answer to a question with confidence is the best we can aspire to. Certainty? Even knowing the height of your ceiling depends on your trusting your measuring tape’s manufacturer.

All of which might strike you as philosophical dithering. But when it comes to organizational decision-making, decisions of any consequence rest in part on unverifiable assumptions, often about the unknowable future. So with all the best of intentions, different participants, making different and conflicting assumptions and forecasts, will reach different conclusions. Which will result in a list of conflicting but equally valid possible right decisions to choose from.

You can either pick one, or find a compromise that’s right enough.

Sometimes, picking one of the alternatives and going with it is the best choice. It’s the engineering optimum, and would yield the best results. As someone once said, no committee ever painted a Mona Lisa.

But engineering optima can face a frustrating constraint: Without buy-in on the part of the decision’s major stakeholders even the most elegant designs will fail, while an inferior, messy compromise to which the whole organization is committed will succeed.

Bob’s last word: I have one more nit to pick, and that’s Bezos’s implicit assumption that decisions are about discovering “the truth.” That isn’t what decisions are for.

When it comes to organizations, decisions, as has been pointed out in this space from time to time, commit or deny staffing, time, and money. Anything else is just talking.

Decisions, that is, are about designing solutions and choosing courses of action. “The truth” implies these are binary – right or wrong. But competing designs and courses of action are better or worse, not right or wrong. And what constitutes better or worse depends on each evaluator’s personal values and priorities.

No tape measure in the world will reconcile these when they conflict.

Bob’s sales pitch: Stick around. We’re still working through the complexities of handing over the keys to KJR, as it were. And as with just about everything else on this planet, no matter how simple a task looks before someone has to do it, having to do it reveals complexities that someone didn’t anticipate.

We are working on it, shooting for early next year to make it happen.

Watch this space.

eXtreme programming is a shame.

Understand, there’s a lot to be said for it. I have nothing against it. Smart people have extolled its benefits in print, and in person. It undoubtedly works very well.

But it’s still a shame, because it was carefully packaged to scare the living daylights out of a typical CIO.

When you think of eXtreme programming, what comes to mind first? See? A CIO’s first thought is almost certainly, “Two programmers at one keyboard? There’s no way on earth I can afford to literally cut programmer productivity in half. What’s next on the agenda?”

Or, the CIO will hear the word “extreme” and immediately tune out everything else, because extreme means risk and risk means waiting until other companies make it mainstream.

But doubling up programmers is, while interesting, a nit. Here’s why eXtreme programming, or some other “adaptive methodology,” should be an easy sell:

If you ask business executives what IT does worst, the most common answer is probably project completion. Ask them what IT does best, and you hear about application maintenance and small enhancements — responsibilities most IT organizations address with great competence.

What adaptive methodologies have done is to turn big-bang application development into development by continuous enhancement. They start by building something small that works and adding to it until there’s something big that works. They play, that is, to IT’s greatest strength. That should make sense to even the most curmudgeonly of CIOs.

As with everything else on this planet, the great strength of adaptive methodologies is the cause of their biggest weaknesses, ones they also share with old-fashioned application enhancement.

The first is the risk of accidental architecture. To address this issue, adaptive methodologies rely heavily on “refactoring,” which sounds an awful lot like changing the plumbing after you’ve finished the building.

By beginning with a “functional design” effort that publishes an architectural view of the business as well as the overall technology plan, you can reduce the need for refactoring. It’s also important to make sure the development effort starts with the components that constitute a logical architectural hub, as opposed to (for example) taping a list of the functional modules on a wall and throwing a dart at it.

The second risk is colliding requirements. With ongoing enhancements to more stable applications there’s a risk that this month’s enhancement is logically inconsistent with a different enhancement put into production three years ago. With adaptive methodologies, the time frame is closer to three weeks ago but the same potential exists: To a certain extent they replace up-front requirements and specifications with features-as-they-occur-to-someone. It’s efficient, but not a sure route to consistency.

How can you deal with colliding requirements? Once again, take a page from how you handle (or should be handling) system enhancements. In most situations, you’re better off bundling enhancements into scheduled releases than putting them into production one at a time. This gives you a fighting chance of spotting colliding requirements. As a fringe benefit it amortizes the cost of your change control process across a collection of enhancements. (Here’s an off-the-topic tip: If your developers like your change control process you need to improve your change control process. But I digress.)

The same principle applies to adaptive methodologies. As a very smart application development manager explained it to me, “My goal isn’t to have frequent releases. The business couldn’t handle that anyway. What I want is to have frequent releasable builds.”
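In modern terms, that manager’s distinction between frequent releases and frequent releasable builds is the distinction between deploying code and releasing features. A common way to keep every build releasable is a feature flag: unfinished work ships dark and the business turns it on when it’s ready. Here’s a minimal sketch of the idea – the flag names and invoice logic are illustrative, not anything from the column:

```python
# Minimal feature-flag sketch: all merged code ships in every build,
# but unfinished features stay dark until the business releases them.

FLAGS = {
    "new_invoice_layout": False,  # merged and releasable, not yet released
    "bulk_export": True,          # turned on in the last scheduled release
}

def is_enabled(flag: str) -> bool:
    # Unknown flags default to off, so a build is releasable even if
    # a feature's switch hasn't been registered yet.
    return FLAGS.get(flag, False)

def render_invoice(order_id: int) -> str:
    # The old code path keeps running until the flag flips.
    if is_enabled("new_invoice_layout"):
        return f"invoice-v2:{order_id}"
    return f"invoice-v1:{order_id}"
```

The point isn’t the mechanism; it’s that the flag decouples the build schedule (frequent) from the release schedule (whatever the business can handle).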

Yeah, but who cares? As last week’s column argued so persuasively (how’s that for being humble?) most IT shops purchase and integrate, rarely developing internal applications, and integration methodologies aren’t the same as development methodologies. Are there adaptive integration methodologies?

It’s a good question for which the answer is still emerging. Right now, it’s “kinda.” The starting point is so obvious it’s barely worth printing: Implement big packages one module at a time. If the package isn’t organized into modules, buy a competing package that is.

Which leads to the question of which module to implement first. The wrong answer is to implement the module with the biggest business benefit. The right answer is to start with the application’s architectural hub. That will minimize the need to build ad hoc interfaces.

Taking these steps doesn’t make your integration methodology adaptive. The chunks are still too big for that.

But it’s a start.

Does your organization have a climate change problem?

No, no, no, no, no. I’m not asking if your organization is or will be affected by anthropogenic climate change, or if it has a plan for dealing with it.

No, what I’m asking is about a parallel, namely:

While some people, in spite of overwhelming evidence, still doubt climate change is real and potentially devastating, by now they’re an ever-shrinking minority. And yet, as a society we’re still unwilling to take the steps needed to address the problem.

A likely reason: “solution aversion” (thanks to Katharine Hayhoe, Chief Scientist for The Nature Conservancy, for bringing this phenomenon to my attention).

Solution aversion is what happens when the solution to a problem is so onerous that our minds run away from it screaming “Murmee murmee murmee murmee” to drown out the voices insisting the problem has to be solved.

So when I ask if your organization has a climate change problem, I’m asking if it’s facing an emerging situation that threatens its existence or viability, except that it isn’t really facing it at all. It’s refusing to face the situation due to solution aversion.

The problem might be that your customers are aging and you have no strategy for replacing them with others whose life expectancy is greater.

It might be that your product architecture has painted you in a metaphorical corner, preventing your design engineers from adding the features your product needs to be competitive.

Closer to IT’s home, an “unplug the mainframe” initiative was chartered and budgeted with goals in line with its title: the plan is to replace all of the hundred or so mainframe-hosted batch COBOL programs in your applications portfolio with a hundred or so cloud-hosted batch COBOL programs.

Which means that when IT finally unplugs the mainframe, all of the business managers who had put their plans on hold for two years will discover that the converted applications, having preserved their batch-COBOL legacy, are no more flexible than their big-iron ancestors. Which in turn means that by the time business plans become business realities they’ll be four years out of date.

If you think your organization’s decision-makers are succumbing to solution aversion, the obvious question is what you can do about it. The obvious answer is to try to persuade them to deal with their climate-change problem by putting together a solid business case.

The obvious answer is, sad to say, the wrong answer. You aren’t going to resolve this with evidence and logic, just as you aren’t going to solve it by tearing your hair out in frustration while saying, through gritted teeth, “That’s just kicking the can down the road.”

The only way to overcome solution aversion is to figure out an alternative solution that doesn’t trigger the aversion reaction. Usually, this means figuring out ways to nibble away at the problem in convenient, non-threatening ways.

In the case of actual climate change this might mean starting with painless steps like replacing incandescent bulbs with LEDs, and making your next car a plug-in hybrid.

In the case of mainframe unplugging it might mean identifying a small number of mainframe batch COBOL applications that, rewritten in a microservices architecture, would generate an 80/20 benefit in terms of improved flexibility and future business agility.
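What “nibbling” looks like in practice is often the strangler-fig pattern: extract one step of a batch job as a callable service boundary while the rest of the run stays put. A hedged sketch – the invoice-total step, the field names, and the payload shape are all hypothetical illustrations, not a real migration plan:

```python
# Hypothetical strangler-fig sketch: one step of a batch COBOL job
# (say, invoice-total calculation) extracted as a pure function that
# a microservice could expose. The remaining batch steps keep running
# unchanged and call this boundary instead of their in-line logic.

def invoice_total(line_items: list[tuple[int, float]]) -> float:
    # Replaces a single batch step: sum quantity * unit price per line.
    return round(sum(qty * price for qty, price in line_items), 2)

def handle_request(payload: dict) -> dict:
    # Thin service boundary a future microservice would put behind HTTP;
    # for now it's just a function the batch job can invoke directly.
    items = [(line["qty"], line["price"]) for line in payload["lines"]]
    return {"total": invoice_total(items)}
```

Each extraction is small enough not to trigger the aversion reaction, and each one leaves the portfolio slightly less mainframe-shaped than before.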

Bob’s last word: My usual formula for persuasion starts with selling the problem. There’s no point in designing a solution until decision-makers and influencers agree there’s a problem that needs solving. And it’s only after everyone has agreed on the solution that it makes any sense to take the third step – developing an implementation plan.

The role of having a plan in a persuasion situation is to give decision-makers and influencers confidence that the solution can, in fact, be successfully implemented.

This week’s guidance doesn’t violate this formula so much as it augments it. It’s intended for situations in which the most plausible solution … actually, plan, but the folks who coined the term “solution aversion” didn’t ask for my input … “un-sells” the problem.

So it should be called “plan aversion,” but let’s not quibble. What matters is recognizing when your organization has a climate-change problem so you can find ways to finesse the plan.

Bob’s sales pitch: CIO.com just posted the eighth and last article in my IT 101 series. It’s titled “The CIO’s no-bull guide to effective IT” and it both summarizes and serves as a tour guide to the previous seven entries. Whether you’re new to IT management or are a seasoned CIO, I think you’ll find value in the collection.

Also: Remember to register for CIO’s upcoming Future of Work Summit February 15th through 17th, where, among an extensive program, you can hear me debate Isaac Sacolick on the business readiness of machine learning. Our session is scheduled for February 16th, 2:50pm CST. Don’t miss it!