I’m jealous of political pundits. They get to write about global warming, economic protectionism, and cloning (for example) without any particular expertise in the relevant disciplines, influencing public policy in the process.

Not me. All I get to write about is how to run IS. My writing about Zippergate, for instance, would be about as classy as Hollywood actors making political speeches during the Academy Awards (although that may be preferable to the current practice of thanking everyone in the telephone directory).

That’s why I’m so thankful for Department of Justice vs. Microsoft. It lets me comment on a major issue of public policy without straying from my purported area of expertise. Here’s my comment: As of this writing, Microsoft seems to be counting on one of three legal possibilities: 1) it isn’t really a monopoly, because even though it is right now it won’t be forever, so that makes everything OK; 2) the antitrust laws are a bad idea, so don’t enforce them; or 3) even if it’s found guilty, the penalty won’t hurt very much. The “consumers haven’t been hurt” argument is, of course, irrelevant grandstanding – the question is one of using its monopoly for competitive advantage, not one of price-gouging.

The biggest impact of this trial, of course, is all the free publicity it’s given Linux. Ain’t irony grand?

Last week’s column presented a simple formula for predicting the success or failure of new technologies and technology products, using past products as examples. This week we’ll apply it to the current technology scene, starting with Linux.

The formula, you’ll recall, was customers/affordability/disruption – a successful new product must do something worthwhile for the customers (the people making the buying decision, as opposed to the consumers, who use the product); it must be affordable; and it must not disrupt the current environment. (Disruption, by the way, is a big reason companies like Microsoft, which dictate the architecture in particular environments, have the incumbent’s huge advantage.) Let’s start predicting!

Linux (as a server) – Main customers: Webmasters and underfunded system administrators. Benefit: Runs reliably and fast. Affordability: Free, or nearly so. Disruption: As a Web server or file-and-print server, it integrates invisibly into the network (unless Microsoft can make it disruptive through proprietary server-side innovations like Active Server Pages). Score: Perfect – Linux is a winner.

Linux (on the desktop) – Main customers: End-users. Benefit: Fast, less-crash-prone PC. Affordability: Free except for the (not overwhelming but not trivial) time needed to learn it. Disruption: The office suites for Linux don’t reliably read and write the Microsoft Office file formats – that is, there’s a significant delay before the Linux suites catch up to each new Office release, and even then they’re glitchy. Score: Iffy.

Personal Digital Assistants (PDAs) – Main customers: End-users. Benefits: Lightweight, carries around essential information, IS doesn’t get to say what users can and can’t do with it. Affordability: Very. Disruption: Unless an installation goes south, they’re invisible to IS. Score: Perfect – PDAs are a winner.

XML – Main customers: IS developers. Benefits: Universal, awesomely adaptable file format. Affordability: Open standard, learnable without needing tensor calculus. Disruption: A complicated question. As metadata (think data dictionary on steroids) there’s no significant installed base to disrupt. As an office-suite file format, either Microsoft’s XML tags will do the job for everyone or it won’t get off the ground. As a replacement for HTML it’s highly disruptive until it’s built into the browser. Then it’s nondisruptive. Score: Very high – XML has enough different applications that it’s bound to succeed in at least some of them.

Java – Main customers: IS developers. Benefits: Nicely designed object-oriented language, automatic garbage collection, possibly portable (the jury’s out on this one). Affordability: As affordable as any other programming language. Disruption: A complicated issue. For established developers it’s disruptive – having some of a product in Java and the rest in a compiled language is messy, and probably won’t happen. For new developers it’s non-disruptive, except for the performance issues compared to any compiled language. Score: Adequate – Java will become just another programming language.

Well, there you have it. Like a good math text, you now have a formula and examples. Go forth and prognosticate.
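In case an example in code helps, here’s a toy sketch of the formula in Python. The zero-to-two scale, the weakest-factor-wins rule, and the example scores are my own reading of it, not anything from the original column.

```python
# A toy rendering of the customers/affordability/disruption test.
# The 0-to-2 scale, the "weakest factor wins" rule, and the example
# scores below are assumptions for illustration only.

def predict(customer_benefit, affordability, nondisruption):
    """Score each factor 0 (fails), 1 (iffy), or 2 (clearly passes)."""
    weakest = min(customer_benefit, affordability, nondisruption)
    return {2: "winner", 1: "iffy", 0: "loser"}[weakest]

print("Linux (server) :", predict(2, 2, 2))  # winner
print("Linux (desktop):", predict(2, 1, 1))  # iffy
print("PDAs           :", predict(2, 2, 2))  # winner
```

The point of the min() is that a product has to clear all three hurdles; the weakest factor sets the ceiling.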

Don’t make data-driven decisions. Make data-informed decisions, or so my friend Hank Childers advises.

It’s a nice distinction that recognizes both the value of evidence and its limitations.

For those who like to rely on evidence, we live in tricky times. Evidence about just about any topic is more available than ever, but so is disinformation, which grows in direct proportion to the profit to be made by biasing things in the profit-maker’s favor.

Which is one reason I’m skeptical of the long-term reliability of IBM’s Watson as a medical diagnostician.

There’s a program called Resumix. It scans resumes to find the best skill-to-task matches among the applicants. It’s popular among a certain class of recruiter because reading resumes is an eye-blearing chore.

Worse, the recruiter might, through fatigue, miss something on a resume. Worse still, the recruiter might inadvertently practice alphabetical discrimination, if, for example, the resumes are sorted in name order: Inevitably those at the front and back of the stack will receive more attention than those in the middle.

But on the other side of the Resumix coin is this: Most applicants know how to play the Resumix game. Using techniques much like the ones website authors use to get the attention of search engines, job-seekers make sure Resumix sees what it’s supposed to see in their resumes.
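For illustration only, here’s a deliberately naive keyword matcher in Python – not Resumix’s actual algorithm, which I don’t know – showing why keyword stuffing works on this class of tool.

```python
# A deliberately naive keyword matcher: count how many required skills
# appear anywhere in the resume text. Anyone who knows the list of
# skills can inflate the score by simply listing them.

def match_score(resume_text, required_skills):
    """Count how many required skills appear anywhere in the resume."""
    text = resume_text.lower()
    return sum(1 for skill in required_skills if skill.lower() in text)

skills = ["java", "sql", "project management"]
honest = "Five years of Java development; some SQL reporting."
stuffed = honest + " Keywords: Java, SQL, project management."

print(match_score(honest, skills))   # 2
print(match_score(stuffed, skills))  # 3 -- the padded resume "wins"
```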

If Watson becomes the diagnostician of choice, do you think there’s any chance at all that those who stand to profit from the “right” diagnosis won’t figure out how to insert what Watson is looking for in the text of the research papers they underwrite and pharmaceutical ads they run?

It’s one thing for those in IBM’s Watson division who developed it and continue to refine it … for whom, by the way, I have immense respect … to teach it to read and understand medical journals and such. That task is merely incredibly difficult.

But teaching it to recognize and discard utter horse pucky as it does so? Once we try to move beyond the exclamation (“That’s a pile of horse pucky!”) to actual definition, it isn’t easy to find one that isn’t synonymous with “I don’t want that to be true!”

Well, no, that isn’t right. A useful definition is easy: Horse pucky is a plausible-sounding narrative that’s built on a foundation of misinformation and bad logic.

Defining horse pucky? Easy. Demonstrating that something is horse pucky, especially in an age of increasingly voluminous disinformation? Very, very hard. It’s much easier to declare that something is horse pucky and move on … easier, but intellectually bankrupt.

So imagine you’re leading a team that has to make an important decision. You want the team to make a data-informed decision — one that’s as free from individual biases as possible; one that’s the result of discussion (solving shared problems), not argument (one side winning; the other losing).

Is this even possible given the known human frailties that come into play when it comes to evaluating evidence?

No, if your goal is perfection. Absolutely if your goal is improvement.

While it’s fashionable to disparage the goal of objective inquiry because of “what we now know to be true about how humans think,” those doing the disparaging are relying on evidence built on the philosophical foundations of objective inquiry … and are drawing the wrong conclusions from that evidence.

Here’s the secret:

The secret of evidence-informed decision-making: Don’t start by gathering evidence.

Evidence does, of course, play a key role in evidence-informed decision-making (and what would you do without KJR to give you profound insights like this?). But it isn’t where you start, especially when a team is involved.

Starting with evidence-gathering ensures you’ll be presiding over an argument — a contest with winners and losers — when what you want is collaboration to solve a shared problem.

Evidence-gathering follows two essential prerequisite steps. The first is to reach a consensus on the problem you’re trying to solve or opportunity you’re trying to chase. Without this, nothing useful will happen. With it, everyone agrees on what success will look like when and if it eventually happens.

The second prerequisite step is consensus on the process and decision-making framework the team will use to make its decision. This means thinking through the criteria that matter for comparing the available alternatives, and how to apply evidence to evaluate each alternative for each of the criteria.

Only then should the team start gathering evidence.
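One way to make that second prerequisite concrete is a simple weighted decision matrix: agree on the criteria and weights first, and fill in the scores only after the evidence arrives. Here’s a minimal sketch; the criteria, weights, and numbers are placeholders, not a prescription.

```python
# Agree on criteria and weights *before* gathering evidence; the scores
# get filled in later, from the evidence the team collects.

criteria = {"fits the agreed problem": 0.5, "cost": 0.3, "disruption": 0.2}

alternatives = {
    "Option A": {"fits the agreed problem": 8, "cost": 6, "disruption": 7},
    "Option B": {"fits the agreed problem": 6, "cost": 9, "disruption": 5},
}

for name, scores in alternatives.items():
    total = sum(weight * scores[criterion]
                for criterion, weight in criteria.items())
    print(f"{name}: {total:.1f}")  # Option A: 7.2, Option B: 6.7
```

The arithmetic isn’t the point; the agreement on what goes into it is.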

Informed decisions take hard, detailed work. So before you start, it’s worth asking yourself — is this decision worth the time and effort?

Or is a coin-toss good enough?

Definitions get me into a lot of trouble.

Early in my career, I was asked to perform a “feasibility study.”

“What’s the subject?” I asked.

“An inventory system,” my boss answered.

“OK, it’s feasible,” I told him. “I guarantee it. Lots of other companies keep track of their inventories on computers, so it must be.”

More patient than most of the managers I’ve reported to in my career, he explained to me that in IS, a feasibility study doesn’t determine whether something is feasible. It determines whether it’s a good idea or not.

It turned out to be a good idea (a tremendous surprise), so next we analyzed requirements. You know what’s coming: A senior analyst asked me if the requirements were before or after the negotiation.

“What negotiation?” I asked. “These are requirements. They’re required.”

This is how I learned that we do feasibility studies and requirements analyses in part to test the validity of the requests we receive. The process would be unnecessary if we believed end-users were our customers.

At the supermarket, nobody says to a customer, “Those fried pork rinds aren’t an acceptable part of your diet!” or, “Prove you need that ice cream!” At the supermarket, wanting something and being able to pay for it are all that matter.

In IS we used to view end-users as our (internal) customers, and we figured the relationship followed from the role: If they’re our customers, our job is (as Tom Peters would say) to delight them.

End-users aren’t our customers, though. They’re our consumers – they consume our products and services but don’t make buying decisions about them. But does that really change anything, or is it just a useless distinction?

It does change things. “Customer” defines both a role and a relationship. What does “consumer” say about a relationship? Nothing. Or at best, very little.

“Consumer” defines only a role, and in the context of organizational design, role is a process concept, whereas relationship is a cultural one. (Definitions: Processes describe the steps employees follow to accomplish a result. Culture describes their attitudes and the behavior they exhibit in response to their environment.)

What should the relationship between IS and the rest of the business look like? This is one of the most persistently controversial issues in our industry. When you view it as a clash between process design and cultural cues, it becomes clearer why our discussions get so jumbled.

Defining the rest of the business as our consumers frees us to define whatever relationship works the best. As with my inventory system, every highly successful result I’ve ever seen in IS has been the result of an open collaboration between IS and end-users, with authority shared and the dividing line between the two groups obliterated.

Yet many of my colleagues and much of the correspondence I receive on the subject still advocate a hard dividing line between the two groups, with formally documented “requirements” defined by a small group of end-users and the authority for “technical” decisions reserved for IS.

Of course, purely technical decisions are few and far between these days. What brand of NIC should you buy? OK, that’s a technical decision, but even something as seemingly technical as choosing a server OS places constraints on business choice. Or, looking at the opposite end of the process, selecting a business application limits IS’s choice of operating system, sometimes to a single one.

Trying to partition responsibilities to preserve the prerogatives of one group or the other leads to nothing but bad results. “You have to do this for me because I’m your customer,” and “You can’t do that because it violates our standards,” are two sides of the same counterfeit coin.

There are few purely technical or purely business decisions anymore. Since form follows function, you should strive for a relationship that recognizes this fact. What kind of relationship should you have with your consumers? One that emphasizes joint problem-solving and decision-making, and working together in a collaborative environment.

Or, in a word, “partnership.”