
KJRSpeak: Don’t shoot the message!

Thanks to Jim Green for this fine example of what we’re looking for.

Don’t be left out. Send your quotables — ones you or your friends have come up with, not something Mark Twain, Winston Churchill or Voltaire published for the ages — and make sure to tell me whether I can give you proper credit. – Bob

What is it about vitamins?

When I was a kid they were chewable. Then I grew up and they were small, coated, and easily swallowed.

Now they make me feel like Mr. Ed: They’re the size of horse pills, and if (when) they get stuck on the way down I taste alfalfa, or maybe straw.

Speaking of things that are hard to swallow, correspondent Mark Sowash made me aware that, since at least last October, Gartner has been predicting increased employee ownership of PCs (“Ready or not, here comes user PC choice,” Computerworld, 10/15/2007).

There is, of course, a difference between prediction and advocacy. Still, fair’s fair — Gartner got there first (ouch!) and I agree (the horror!).

The Computerworld article describes a 250-employee consulting company that’s giving employee PC ownership a try.

It isn’t going to be a free-for-all. The company is establishing standards for all PCs that connect to the corporate network, especially for security. It will scan devices for compliance before they connect. It’s moving its enterprise applications to web-based access to keep enterprise data in the data center.
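To make that concrete, here’s a rough sketch in Python of what a pre-connection compliance scan might look like. The specific checks, names, and thresholds are my own illustration, not anything described in the Computerworld article:

from dataclasses import dataclass

@dataclass
class Device:
    os_patch_age_months: int   # months since the OS was last patched
    antivirus_current: bool    # virus signatures up to date?
    disk_encrypted: bool       # whole-disk encryption enabled?

MAX_PATCH_AGE_MONTHS = 1       # hypothetical corporate standard

def may_connect(device: Device) -> bool:
    """Scan a device for compliance before it touches the network."""
    return (device.os_patch_age_months <= MAX_PATCH_AGE_MONTHS
            and device.antivirus_current
            and device.disk_encrypted)

# An employee-owned laptop that hasn't been patched in three months is refused:
print(may_connect(Device(3, antivirus_current=True, disk_encrypted=True)))  # False

The particulars don’t matter; what matters is that the network enforces the gate instead of trusting employees to keep their machines healthy.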

Sounds well-thought-out and workable to me.

Following last week’s column on this subject (“Getting to 21st century IT,” Keep the Joint Running, 3/3/2008) I added two posts on Advice Line, the question-and-answer blog I do for InfoWorld (“Can virtualization resolve the IT/end-user disconnect?” 2/29/2008 and “Getting to 21st century IT – User-owned PCs?” 3/4/2008), asking for comments. The response was enthusiastic, perceptive, argumentative at times, and in all respects worthwhile.

I was struck by something: The positions taken were generally rooted in a hidden assumption about the nature of the workplace and employee. Those advocating employee ownership, for example, tended to be travelers or knowledge workers who are expected to creatively solve problems for their employers. Many of those who rejected the idea (and the idea of significantly loosened controls) support production workers with well-defined responsibilities that are entirely supported by the company’s enterprise applications.

The more I think about the subject the more I’m convinced the right answer isn’t yes or no. It’s “it depends.” This is, in retrospect, obvious. It’s also woefully incomplete, because the moment you say the words, you then have to explain what “it” depends on.

We’ll start the ball rolling this week with three factors: size, role, and regulation.

Size: Small companies tend to succeed through individual initiative, employing generalists who understand a broad swath of what the company does. Most “business process” happens inside one employee’s head.

As companies grow they gain economies of scale. To get them, they have to standardize what was idiosyncratic, whether the subject is business process, HR policy or PC configurations. Employee roles specialize, and what used to happen in one person’s head now happens through defined workflows.

It’s a sad trade-off: With success comes increasing bureaucracy, because the alternative is having costs increase as fast as revenue, or even faster.

Role: Some jobs are defined by their lack of clear definition. While the desired outcome might be clear, the means for achieving it is not. The list might include sales, marketing, consulting, IT developer (yes, IT developer) and varying kinds of analyst.

If you can’t predict what someone will have to do to get the job done, it seems futile to lock down a specific toolkit and say, “That’s all you need.”

Other jobs are rigidly defined and narrowly focused. Call center agents come to mind. While they might be, and often are, called on to be flexible in how they converse with callers, when the time comes to pull data out of the company’s systems and put new data in, you want every agent to use the exact same tools in the exact same way.

Regulation: Some businesses are more highly regulated than others. This is neither bad nor good (I remember the pre-environmental-regulation United States and like what we have now much better). It’s a fact.

Another fact: Regulators care about compliance and only compliance. If compliance means a reduced ability to innovate, too bad. HIPAA regulators care about protecting patient data. The impact on creativity elsewhere in the company isn’t their concern.

The Big Finish: The smaller the company, the broader and fuzzier the role, and the less regulated the industry, the more likely it is you can open things up.
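If you like your rules of thumb executable, here’s a toy sketch of the same logic. The scales, weights, and numbers are invented for illustration; the only point is that the answer is a function of all three factors, not a constant:

def openness_score(employees: int, role_fuzziness: float, regulation: float) -> float:
    """role_fuzziness and regulation are judgment calls on a 0-to-1 scale."""
    size_factor = 1.0 if employees < 100 else 0.5 if employees < 1000 else 0.2
    return size_factor * role_fuzziness * (1.0 - regulation)

# A small consultancy with fuzzy roles and light regulation: open things up.
print(openness_score(employees=50, role_fuzziness=0.9, regulation=0.1))    # ~0.81
# A big bank's call center: lock things down.
print(openness_score(employees=20000, role_fuzziness=0.1, regulation=0.9)) # ~0.002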

The really tough challenge is figuring out what you can do, if you’re big and highly regulated, to provide as much empowerment as possible. After all, the people who run big companies rarely want to preside over bureaucracies.

They just need an alternative.

Draw a circle.

Surround it with several more circles. Connect the new circles to the one in the center with lines.

This is your systems architecture. Before you read on, label each of the circles. I’ll wait.

OK, done? You almost certainly chose one of the following two labeling schemes: You either tagged the central circle with some synonym for mainframe, host, or server and the outside circles as terminals, PCs, or network computers; or you called the central circle a personal computer and the outside circles mainframe, server, minicomputer, World Wide Web, and other resources you can access through your PC.

If you chose the mainframe-centric labels, chances are you like the fat-network architectures now fashionable under the misleading “thin client” moniker. If, like me, you put the end-user in the middle of modern computing architectures, using a PC to draw on whatever resources are currently needed, you probably worry about the whole fat-network approach to systems design.

Fat-network systems come in two basic flavors: Windows terminals and “Webware” interfaces. What’s interesting about these two flavors is that they have exactly nothing in common except that both have been misrepresented as thin clients.

The Windows terminal approach, whether sold by Citrix, Microsoft, or any of the dozen or so remote-control vendors, is remarkable primarily because of how unremarkable it is. It ignores the application architecture entirely, instead providing an alternative for deploying the same old stuff. The term “architecture” really shouldn’t be applied to a Windows terminal solution at all, since all it does is extend the keyboard and screen across a network. It does, however, put the host firmly in the middle, since with Windows terminals IS tightly controls the resources available to end-users.

Webware is more intriguing. When you design applications around Webware, you have at your disposal browsers, JavaScript, downloaded Java applets and applications, Java servlets and server-based applications, active server pages, Notes/Domino applications, Perl scripts, and enough other choices to delay any development project for a year while you sort them all out.

A Webware architecture means some code gets downloaded for desktop execution while other code executes on the server. The only option not available to you when designing Webware is storing software (other than a “thin” 50MB or so browser) on local hard drives. In other words, you never use the fastest and cheapest storage you have. That makes sense … if you have decided to put the host in the middle.

Take a moment to go beyond cost of ownership to deal with the more interesting benefit of ownership, and you’ll discover that GUI applications installed on the desktop and Webware-based applications aren’t mutually exclusive alternatives. You’ll want to use n-tier, thin-client (in the true, skinny presentation layer sense), probably object-based architectures to build both.
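Here’s a minimal sketch of what I mean, with every name hypothetical: one business-logic tier, plus a desktop front end and a web front end that both stay skinny by delegating to it:

from http.server import BaseHTTPRequestHandler
from urllib.parse import urlparse, parse_qs

class OrderService:
    """Middle tier: the business rules live here, not in either front end."""
    def order_total(self, quantity: int, unit_price: float) -> float:
        discount = 0.10 if quantity >= 100 else 0.0  # hypothetical volume rule
        return quantity * unit_price * (1.0 - discount)

service = OrderService()

# Thin desktop presentation layer: formats and displays, nothing more.
def desktop_show_total(quantity: int, unit_price: float) -> None:
    print(f"Total: ${service.order_total(quantity, unit_price):,.2f}")

# Thin web presentation layer: the same service behind an HTTP handler.
class WebHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        q = parse_qs(urlparse(self.path).query)
        total = service.order_total(int(q["qty"][0]), float(q["price"][0]))
        self.send_response(200)
        self.end_headers()
        self.wfile.write(f"{total:.2f}".encode())

desktop_show_total(120, 9.95)  # both front ends share the same rules

Either presentation layer can change, or a new one can be added, without touching the business rules.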

As a general rule you’ll use desktop GUIs for high-use applications targeted to a known desktop platform … for example, whenever you’re building or selecting software to be used by employees as part of their core job functions.

When you do, follow the rules for easy, stable installations: Don’t touch the registry, don’t put anything in the Windows folder or any of its subfolders, test builds for memory leaks, and deploy the full application in a scale-model test environment before implementing it in production. Design every module for portability and manageability, too.
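As an illustration of the don’t-touch-the-registry rule, here’s one way to keep settings in the application’s own per-user folder instead. “MyApp” and the settings are placeholders:

import json, os
from pathlib import Path

def settings_path() -> Path:
    # %APPDATA% on Windows; fall back to the home directory elsewhere.
    base = os.environ.get("APPDATA", str(Path.home()))
    folder = Path(base) / "MyApp"
    folder.mkdir(parents=True, exist_ok=True)
    return folder / "settings.json"

def save_settings(settings: dict) -> None:
    settings_path().write_text(json.dumps(settings, indent=2))

def load_settings() -> dict:
    path = settings_path()
    return json.loads(path.read_text()) if path.exists() else {}

save_settings({"theme": "classic"})
print(load_settings())  # uninstalling is then just deleting the folder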

When you don’t know what end-users will run on their desktops, or when you need to deploy an application for occasional use, go for Webware. Here, the increased functionality of a desktop-installed GUI no longer outweighs the benefit of easier deployment.

When you’re building for Web access, you have no control over bandwidth, so force developers to test applications on lowest-common-denominator (another misused term; they’re actually greatest-common-factor) systems. And … remember that testing and manageability thing? Building on Webware doesn’t eliminate the need for any of that.
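One way to enforce that discipline is to build a link-speed budget into the test suite itself. The 56Kbps link and ten-second budget below are invented examples:

LINK_KBPS = 56          # hypothetical worst-case link speed
BUDGET_SECONDS = 10.0   # hypothetical acceptable load time

def transfer_seconds(nbytes: int) -> float:
    """Seconds to move nbytes over the worst-case link."""
    return nbytes * 8 / (LINK_KBPS * 1000)

def check_page_weight(nbytes: int) -> None:
    seconds = transfer_seconds(nbytes)
    assert seconds <= BUDGET_SECONDS, (
        f"page needs {seconds:.1f}s at {LINK_KBPS}Kbps")

check_page_weight(50_000)        # a 50KB page passes (~7.1 seconds)
try:
    check_page_weight(500_000)   # a 500KB page blows the budget
except AssertionError as failure:
    print(failure)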

When you’re building (or buying) for deployment on your intranet, don’t forget: Fatten up that network.

Putting the end-user in the middle doesn’t mean you want the data center to seem remote.