Technology … all successful technology … follows a predictable life cycle: Hype, Disillusionment, Application.

Some academic type or other hatches a nifty idea in a university lab and industry pundits explain why it will never fly (it’s impossible in the first place, it won’t scale up, it’s technology-driven instead of a response to customer demand … you know the predictable litany of nay-saying foolishness).

When it flies anyway, the Wall Street Journal runs an article proclaiming it to be real, and everyone starts hyping the daylights out of it, creating hysterical promises of its wonders.

Driven by piles of money, early adopters glom onto the technology and figure out how to make it work outside the lab. For some reason, people express surprise at how complicated it turns out to be, and become disillusioned that it didn’t get us to Mars, cure cancer, and repel sharks without costing more than a dime.

As this disillusionment reaches a crescendo of I-told-you-so-ism, led by headline-grabbing cost-accountants brandishing wildly inflated cost estimates, unimpressed professionals figure out what the technology is really good for, and make solid returns on their investments in it.

Client/server technology has just entered the disillusionment phase. I have proof – a growing collection of recent articles proclaiming the imminent demise of client/server computing. Performance problems and cost overruns are killing it, we’re told, but Intranets will save it.

Perfect: a technology hitting its stride in the Hype phase will rescue its predecessor from Disillusionment.

What a bunch of malarkey.

It’s absolutely true that far too many client/server development projects run way over the originally estimated cost. It’s also true that most client/server implementations experience performance problems.

Big deal. Here’s a fact: most information systems projects, regardless of platform, experience cost overruns, implementation delays, and initial performance problems, if they ever get finished at all. Neither the problem nor the solution has anything to do with technology – look, instead, to ancient and poorly conceived development methodologies, poor project management, and a bad job of managing expectations.

I’m hearing industry “experts” talk about costs three to six times greater than for comparable mainframe systems – and these are people who ought to know better.

I have yet to see a mainframe system that’s remotely comparable to a client/server system. If anyone bothered to create a client/server application that used character-mode screens to provide the user-hostile interface typical of mainframe systems, the cost comparison would look very different. The cost of GUI design and coding is being assigned to the client/server architecture, leading to a lot of unnecessary confusion. But of course, a headline reading, “GUIs Cost More than 3278 Screens!” wouldn’t grab much attention.

And this points us to the key issue: the client/server environment isn’t just a different kind of mainframe. It’s a different kind of environment with different strengths, weaknesses, and characteristics. Client/server projects get into the worst trouble when developers ignore those differences.

Client/server systems do interactive processing very well. Big batch runs tend to create challenges. Mainframes are optimized for batch, with industrial-strength scheduling systems and screamingly fast block I/O processing. They’re not as good, though, at on-line interactive work.

You can interface client/server systems to anything at all with relative ease. You interface with mainframe systems either by emulating a terminal and “screen-scraping,” by buying hyper-expensive middleware gateways (I wonder how much of the typical client/server cost over-run comes from the need for interfaces with legacy systems?), or by wrestling with the arcana of setting up and interfacing with LU6.2 process-to-process communication.

And of course, the tools available for client/server development make those available for mainframes look sickly. Here’s a question for you to ponder: Delphi, PowerBuilder and Visual Basic all make a programmer easily 100 times more productive than languages like Cobol. So why aren’t we building the same size systems today with 1/100th the staff?

The answer is left as an exercise for the reader.

The world is awash in bad metrics.

Start with a timely example from your favorite daily newspaper or shouting-heads news commentary: “The top 1 percent paid 35 percent of all income taxes” (usually terminated by an exclamation point).

No, tax policy isn’t relevant to leading and managing an IT organization, but devising and interpreting metrics is essential to the job, so what the heck. Tax policy is, if nothing else, more fun to tear into than IT SLAs. So let’s dig in, starting here: This statistic has no meaning.

None, because there’s no referent. If the top 1 percent earned 100 percent of all income, its tax burden is way too low; if it earned 5 percent, it’s probably too high. According to the Tax Foundation, the top 1 percent’s share of total income is 18.7 percent, which might lead you to conclude it’s being unfairly burdened by current tax policy.
Perhaps it is. But we’re far from finished, because:

The 18.7 percent number ignores income sheltering. It also ignores the income retained by corporations in which the 1 percent hold large amounts of stock.

Then there’s the question of why we’re only including income tax. Surely, what matters is the total tax burden – federal income tax, state income tax, payroll taxes, sales taxes, property taxes, and, in some states, personal property taxes.

Interestingly enough, comparisons of income to total tax burden are hard to come by.

For that matter, should we be thinking in terms of total income, or should we be thinking in terms of disposable income, after an allowance for a person’s basic needs is subtracted out? The top income brackets have a far greater share of disposable income than 18.7 percent, no matter where you draw the blurry line that separates necessities from nice-to-haves.
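
To make the no-referent problem concrete, here’s a minimal sketch in Python. The 35 percent and 18.7 percent figures are the ones quoted above; every number in the second comparison is invented purely to illustrate how the conclusion can flip when the referent changes.

# Toy illustration of the "no referent" problem: a share of taxes paid
# means nothing until it's compared to a share of something else.

def burden_ratio(share_of_taxes_paid: float, share_of_base: float) -> float:
    """Ratio of a group's share of taxes to its share of a chosen base.

    1.0 means the group pays exactly in proportion to the base; above 1.0
    means it pays more than its share of that base.
    """
    return share_of_taxes_paid / share_of_base

# Figures quoted in the column: the top 1 percent pays 35 percent of all
# federal income tax and earns 18.7 percent of all income (Tax Foundation).
print(burden_ratio(0.35, 0.187))   # ~1.87 -- looks heavily burdened

# Purely hypothetical numbers, for illustration only: count the group's
# share of ALL taxes (income, payroll, sales, property ...) against its
# share of disposable income instead, and the picture might become:
print(burden_ratio(0.25, 0.30))    # ~0.83 -- now it looks lightly taxed

# Same group, opposite conclusions. The headline number's meaning lives
# entirely in the referent you choose to divide by.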

My point in all this isn’t to suggest the top 1 percent, top 0.1 percent, or top anything percent are paying too much or too little in taxes, let alone whether the country’s total tax rates are too high, too low, or are, Goldilocks-like, just right.

My point is to suggest we all need to start crying FOUL! every time a politician or member of the commentariat parades yet another meaningless number in front of us to prove how awful it all is.

These fine folks probably aren’t deliberately trying to deceive us. I suspect it’s worse: I don’t think they know how to devise and interpret useful metrics at all. They’re just passing their ignorance along to the rest of us.

Oh, by the way: This whole conversation is probably the wrong conversation. A better one would start with the questions: (1) What kind of society do we want to have? (2) What roles should government play in providing it? And (3) what’s the best way to pay for it?

We wouldn’t all agree, but at least we’d know what we’re disagreeing about.

Lest you think this has no relevance to IT, take a look at (to choose an easy target) just about any IT benchmark ever published. Your company’s CEO probably looks at, if nothing else, the ever-popular percent of revenue spent on information technology, so you’d better look at it too.

It’s a dopey metric for more reasons than are easy to count, yet its popularity persists. Here are a few:

  • Is a smaller number better or worse? Most CEOs will assume less is better. Spending less means a smaller number for operating expenses for the same amount of revenue, and that’s good, isn’t it?

Sure, except when it means fewer investments in information technology that would increase revenue or reduce other operating expenses.

  • Are the numbers comparable? Different companies have different levels of IT spending outside the IT department. Some companies have outsourced whole business processes, with the relevant IT owned, managed, maintained and enhanced by the BPO vendor – they’re still paying for the IT; it just doesn’t show up as IT spending on the books (the sketch after this list shows how that skews the comparison).
  • How much has your company invested in information technology in the past? The more IT you’ve built, the more you spend on operations and applications maintenance to keep the joint running, which means that staying under the percent-of-revenue benchmark forces you to spend less on new development.
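
To see how the comparability problem skews the benchmark, here’s a minimal sketch in Python comparing two entirely hypothetical companies; every figure is invented for illustration.

# Toy comparison of the percent-of-revenue benchmark for two hypothetical
# companies. All figures are invented.

def it_percent_of_revenue(it_spend: float, revenue: float) -> float:
    return 100.0 * it_spend / revenue

# Company A runs all of its IT in-house.
a_revenue, a_it_budget = 500_000_000, 20_000_000

# Company B outsourced a business process; the BPO vendor's fee quietly
# includes about $8M of IT that never hits Company B's IT budget line.
b_revenue, b_it_budget = 500_000_000, 14_000_000
b_it_inside_bpo_fee = 8_000_000

print(f"A: {it_percent_of_revenue(a_it_budget, a_revenue):.1f}% of revenue")            # 4.0%
print(f"B, as benchmarked: {it_percent_of_revenue(b_it_budget, b_revenue):.1f}%")       # 2.8%
print(f"B, actual: {it_percent_of_revenue(b_it_budget + b_it_inside_bpo_fee, b_revenue):.1f}%")  # 4.4%

# The benchmark crowns B the lean operator; count the IT buried in the BPO
# fee and B is the bigger spender. The two numbers were never comparable.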

Just like the tax question, the percent-of-revenue benchmark also leads to the wrong conversation. The right conversation? It starts with very much the same questions as the tax conversation: (1) What kind of business do we want to run? (2) What roles should IT play in providing it? And (3) what’s the best way to pay for it?

Business managers probably call the IT chargeback a tax, too.

ManagementSpeak: Use common sense.
Translation: Figure out how I would handle the situation and do it.
This week’s anonymous donor had the good sense to figure out what “common sense” means.