It’s called “solution selection.”

There are those who grouse that we should stop the doublespeak and call it software selection.

Not me. We engage in this sort of thing because we have a problem that needs solving, after all (or, in happier circumstances, an opportunity that needs chasing). A software product by itself is, as logicians might put it, necessary but not sufficient for solving (or chasing) it.

The fundamentals for going about solution selection are, by now, well understood. When comparing the alternatives, evaluators should dig into:

(1) Features and functionality — does the solution do what you need it to do?

(2) Internal construction — is it built well?

(3) Terms and conditions — what will it cost, and, beyond cost, is the contract and licensing language acceptable?

(4) Vendor characteristics — is the seller financially viable and, almost as important, easy to work with?

The fundamentals are well understood, and yet the results, more often than not, are disappointing.

What goes wrong? What follows is just a starting list. I hope you and your fellow members of the KJR community will add to it in the Comments so we can, as the saying goes, all be smarter than any of us are.

So with that in mind, here’s my shortlist of hard-to-avoid solution selection gotchas:

Wrong selection: The selection is always wrong, and has to be. No matter the depth of your due diligence, some warts only appear when you implement. So you'll see the awful reality of the selected solution, while everything else still exists in the glorious world of brochureware. It's a case of the grass always being greener under someone else's GrowLux.

Each requirement vs all requirements: I've seen this a few times — a supplier's products, taken together, can handle every requirement the solution selection team has identified. Sadly, no single one of the supplier's products can handle them all, and the ones you'd need to cover everything aren't integrated.

Feature scaling: A vendor can usually get its software to do what you need it to do. It can make it do something else you need it to do, too. But there's often a limit on how many different something-else-you-need-it-to-do-toos their software can do at the same time, because really, what they'll have to do to get their software to do each of those things is a one-off. Doing them all makes it a kludge.

SaaS means not having to worry about the internals: Wrong! Saying that a software product’s internal engineering doesn’t matter because it’s SaaS is a lot like saying an airplane’s engineering doesn’t matter because someone else is flying it.

There’s a limit to how much of a solution’s internal engineering a provider will share. In the case of software what you probably care about the most is the data model. Explain what you need to know about each type of stuff the software will manage, and ask where that data gets stashed.

Terms, conditions, conditions about conditions, and conditions about conditions about conditions. Some software vendors still hide terms and conditions inside linked documents, inside documents linked to the linked documents, and so on ad infinitum. I guess it makes sense that if good software design means modules invoking modules that invoke modules, then good software license design would be similar.

Regardless, some T&Cs can place unfortunate limits on what you can do with what you’re licensing. This is why you need lawyers.

Vendor from where? In Greek mythology, Prometheus was the good guy, stealing fire from the gods for humans to use. In Christian mythology, Lucifer (light-bringer) was guilty of pretty much the same crime, assuming you’re willing to overlook that little war-on-heaven peccadillo.

Some software vendors are more Lucifer than Prometheus (you were wondering where this was going, weren’t you?).

What can you do to anticipate this? References won’t get you there. Even the worst vendor will have a few customers who like it. The best you can do is ask around and hope a few people you trust have had experience with the vendors you’re considering.

What else can go wrong? Lots else can go wrong. As I said at the beginning of this column, this list is just for starters, and it doesn’t include any of the mistakes customers make while implementing the solution they’ve so painfully selected (for example, thinking they’re implementing the software and not changing the business).

So now it’s your turn. Jump to the Comments and share your insights as to what can go wrong while choosing the right solution.

The KJR community awaits your wisdom!

“Better,” said Voltaire, “is the enemy of good.”

I used this quote last year, describing it as a healthy attitude, to finish a column proposing seven warning signs of a culture of complacency. In response, the estimable Frank Hayes, who writes “Frankly Speaking” for Computerworld and is one of the best commentators in the industry, was kind enough to respond:

“It’s an incorrect translation. What Voltaire wrote, possibly quoting an existing French proverb, was: Le mieux est l’ennemi du bien. “Mieux” does translate as “better.” But “le mieux” is always translated as “the best.” (No, I don’t know how they say “the better” in French.)

“So the translation should be: ‘The best is the enemy of good’ — (unattainable) perfection is the enemy of (attainable) quality. Which is a worthwhile thing for us to remember, but it’s the flip side to the point you were making: better is (and should be) the enemy of good enough.

“Which doesn’t need to be translated from French — just translated into a lot of people’s brains.”

It’s an outstanding point (as you’d expect from both Voltaire and Frank Hayes): While satisfaction with mediocrity defines complacency, insistence on perfection is paralyzing. It’s pointless anyway, because the universe is a stochastic place where just about anything can happen. All the molecules of air in your bedroom could congregate in one corner while you’re sleeping, leaving you in a vacuum to asphyxiate. All of the uranium atoms in the nuclear reactor closest to you could decay at the same moment, causing a colossal explosion. Microsoft could release a version of Internet Explorer without security holes.

Hey, I didn’t say these events were likely. But they’re possible given the laws of physics as we know them. All that’s kept them from happening to you are the mathematical laws of probability.

In IT, competent project managers, system administrators and application developers all recognize the stochastic nature of their domains. Project team members get sick, hired away, or reassigned. Servers fail for indeterminable reasons and won’t come back up. The state-of-the-art development tool being used for a bunch of mission-critical code turns out to multiply wrong under certain unlikely circumstances.

Based on much of the correspondence responding to last year’s columns on complacent IT organizations, it appears some readers don’t live in a stochastic universe at all: In a truly well-run IT organization, they assert, everyone can leave promptly at 5pm every day because everything is always under control.

I don’t think so. Yes, in a well-run IT organization there will be days when nothing untoward interrupts the plan. In large, well-run IT organizations, though, the law of large numbers takes over and the odds that something goes awry increase to the point of inevitability. If a culture of complacency permeates, that won’t matter — everyone will leave at quitting time anyway. In a healthier culture, professionals will work late to get the job done.
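The arithmetic behind that inevitability is simple: even if each system or project has only a tiny chance of trouble on any given day, the chance that at least one of them misbehaves climbs steadily with scale. Here's a minimal sketch — the per-system probability and the counts are illustrative numbers of my own, not anything from this column:

```python
def p_any_incident(p: float, n: int) -> float:
    """Probability that at least one of n independent systems has an
    incident on a given day, when each has per-day probability p."""
    return 1 - (1 - p) ** n

# An illustrative 0.1% daily incident rate per system: small shops
# mostly have quiet days, but at scale quiet days become rare.
p = 0.001
for n in (10, 100, 1000, 5000):
    print(f"{n:>5} systems: {p_any_incident(p, n):.3f}")
```

With 10 systems, most days really are uneventful; with thousands, the odds of a completely quiet day shrink toward zero — which is the point: in a large shop, "something went wrong today" is the normal case, and the culture question is who sticks around to deal with it.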

Quite a few readers agreed with my point — that if IT is a ghost town at 5pm it’s a symptom of complacency — but argued I shouldn’t have said so, because many executives would read it to mean that if anyone leaves at 5pm there’s a problem. That’s in spite of my also saying, in the same paragraph, “If everyone works late hours and six- or seven-day weeks all the time, it suggests a very different problem: Desperation. It comes from strong motivation — usually fear — coupled with severe ineffectiveness.” Their argument, while correct as far as it goes, is dangerous advice.

Writers are responsible for clarity. We’re responsible for avoiding ambiguity to the extent possible given constraints of space, limitations of language, and last-minute changes imposed by the copy desk. We’re responsible for marshaling persuasive facts and logic into a narrative framework that guides readers through the complexity of the subject matter.

We aren’t, however, responsible for every reader’s ability to comprehend. Some can’t. Others choose to mischaracterize because they read to gather ammunition, not new ideas.

This matters to you. As an IT leader, communication — listening, informing, and persuading — is a critical skill. Often, you’re informing and persuading non-technical executives of the need for hard-to-explain yet vital investments, such as those required to maintain a healthy IT architecture.

Communicators who spend their time and energy worrying about the ability of others to misunderstand them avoid controversial topics altogether, concerned that the consequences of someone misunderstanding their message are too large to risk. Business being a political environment, it’s a valid concern.

If you allow this concern to outweigh all others, though, you’ll have earned two labels: “craven,” and “politician.”

If you’ll forgive the redundancy.