I don’t get it.

I just read Lucas Carlson’s excellent overview of microservices architecture in InfoWorld. If you want an introduction to the subject, you could do far worse, although I confess it appears microservices architecture violates what I consider one of the fundamentals of good architecture. (It’s contest time: The first winning guess, defined as the one I agree with the most, will receive a hearty public virtual handshake from yours truly.)

My concern isn’t about the architectural value of microservices versus its predecessors. It’s that by focusing so much attention on it, IT ignores what it spends most of its time and effort doing.

Microservices, and DevOps, to which it’s tied at the hip, and almost all variants of Agile, to which DevOps is tied at the ankle, and Waterfall, whose deficiencies are what have led to Agile’s popularity, all focus on application development.

WAKE UP!!!!! IT only develops applications when it has no choice. Internal IT mostly buys when it can and only builds when it has to. Knowing how to design, engineer, and build microservices won’t help you implement SAP, Salesforce, or Workday, to pick three examples out of a hat. Kanban and Scrum might be a bit more helpful, but not all that much. The reasons range from obvious to abstruse.

On the obvious end of the equation, when you build your own solutions you have complete control of the application and information architecture. When you buy solutions you have no control over either.

Sure, you can require a microservices foundation in your RFPs. Good luck with that: The best you can successfully insist on is full access to functionality via a RESTful (or SOAPy, or JSON-and-the-Argonauts) API.
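
And that’s still worth insisting on. Here’s a minimal Python sketch of what such access buys you; the endpoint, record ID, and token are purely hypothetical stand-ins for whatever the vendor actually exposes.

```python
import requests  # third-party HTTP client: pip install requests

# Hypothetical vendor endpoint, illustrative only. The point of insisting
# on API access in the RFP: anything a user can do through the package's
# UI, you can also script, integrate, and automate.
BASE_URL = "https://vendor.example.com/api/v1"

response = requests.get(
    f"{BASE_URL}/employees/00417",
    headers={"Authorization": "Bearer <your-token-here>"},
    timeout=10,
)
response.raise_for_status()  # fail loudly if the vendor says no
print(response.json())       # the package's opinion of employee 00417
```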

Halfway between obvious and abstruse lies the difference in cadence between programming and configuration, and its practical consequences.

Peel away a few layers of any Agile onion and you’ll find a hidden assumption about the ratio of time and effort needed to specify functionality … to write an average-complexity user story … and the time needed to program and test it. The hidden assumption is that programming takes a lot longer than specification. It’s a valid assumption when you’re writing Angular, or PHP, or Python, or C# code.

It’s less valid when you’re using a COTS package’s built-in configuration tools, which are designed to let you tweak what the package does with maximum efficiency and minimum risk that the tweak will blow up production. The specify-to-build ratio is much closer to 1 than when a team is developing software from scratch, which means Scrum, with its user-story writing and splitting, backlog management, and sprint planning, imposes more overhead than needed.
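
To see why, run the arithmetic. A back-of-the-envelope sketch follows; the hour figures are illustrative assumptions, not measurements, but they show what happens to fixed ceremony overhead when the build side shrinks.

```python
# Back-of-the-envelope: what fraction of total effort goes to specification
# plus Scrum ceremony? All hour figures are illustrative assumptions.

def non_build_share(spec_hours: float, build_hours: float,
                    ceremony_hours: float) -> float:
    """Fraction of total effort spent on anything other than building."""
    total = spec_hours + build_hours + ceremony_hours
    return (spec_hours + ceremony_hours) / total

# From-scratch development: programming dwarfs specification.
custom = non_build_share(spec_hours=4, build_hours=40, ceremony_hours=6)

# COTS configuration: the specify-to-build ratio is close to 1.
config = non_build_share(spec_hours=4, build_hours=5, ceremony_hours=6)

print(f"Custom development: {custom:.0%} spec + ceremony")  # 20%
print(f"COTS configuration: {config:.0%} spec + ceremony")  # 67%
```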

And that ignores the question of whether each affected business area would be more effective adopting the process built into the COTS package than spending time and effort adapting the package to the processes it uses at the moment.

At the full-abstruse end of the continuum lies the challenge of systems integration, lurking in the weeds, waiting to nail your unwary implementation teams.

To understand the problem, go back to Edgar Codd, his “twelve” rules for relational databases (there are thirteen of them; his numbering starts at zero), and his framework for data normalization. That framework is still the touchstone for IT frameworks and methodologies of all kinds, and just about all of them come up short in comparison.
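
If you’ve never walked through it, here’s normalization in miniature. The tables are invented for illustration, but the discipline is Codd’s: every fact lives in exactly one place.

```python
# Denormalized: the department's phone number is repeated on every
# employee row, so updating it means hunting down every copy.
flat = [
    {"emp_id": 1, "name": "Ann Lee",  "dept": "HR", "dept_phone": "x100"},
    {"emp_id": 2, "name": "Raj Puri", "dept": "HR", "dept_phone": "x100"},
]

# Normalized: one table per entity, linked by a key.
employees = [
    {"emp_id": 1, "name": "Ann Lee",  "dept_id": "HR"},
    {"emp_id": 2, "name": "Raj Puri", "dept_id": "HR"},
]
departments = {"HR": {"phone": "x100"}}

# Now a change is one update, with no risk of inconsistent copies.
departments["HR"]["phone"] = "x200"
```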

Compare the process we go through to design a relational database with what we go through to integrate and synchronize the data fields that overlap among the multiple COTS and SaaS packages your average enterprise needs to get everything done that needs to get done.

As a veteran of the software wars explained to me a long time ago, software is just an opinion. Which means that if you have three different packages that manage employee data, you bought three conflicting opinions of what’s important to know about employees and how to represent it.

Which in turn means synchronizing employee data among these packages isn’t as simple as “create a metadata map” sounds when you write the phrase on a PowerPoint slide.
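
Here’s the problem in miniature: three invented record layouts standing in for three packages’ opinions of the same employee.

```python
# Three hypothetical packages, three opinions about one employee.
hris    = {"employee_id": "00417", "name": "Lee, Ann",
           "hire_date": "2015-03-02", "status": "A"}
payroll = {"emp_no": 417, "full_name": "Ann Lee",
           "start_dt": "03/02/2015", "active": True}
badging = {"badge_id": "B-417", "display_name": "Ann Lee",
           "employment": "current"}

# The metadata map handles the easy part: renaming fields.
FIELD_MAP = {"emp_no": "employee_id", "full_name": "name",
             "start_dt": "hire_date", "active": "status"}

normalized_payroll = {FIELD_MAP.get(k, k): v for k, v in payroll.items()}
# {'employee_id': 417, 'name': 'Ann Lee',
#  'hire_date': '03/02/2015', 'status': True}

# What it can't handle: keys in three formats ("00417" vs 417 vs "B-417"),
# two date formats, names in two orders, and a status that's a code in one
# system, a boolean in another, and free text in the third. Resolving each
# of those is a design decision, i.e., an opinion, not a lookup-table entry.
```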

To the best of my knowledge, nobody has yet created an integration architecture design methodology.

Which shouldn’t be all that surprising: Creating one would mean creating a way to reconcile differing opinions.

And that’s a shame, because a methodology for resolving disagreements would have a lot more uses than just systems integration.

Gary Fuller and I worked together early in my consulting career. He oversaw the network technicians; I ran the help desk, so we found ourselves collaborating a lot.

Gary was a friend the way many colleagues are friends — we enjoyed working together and schmoozed during breaks, but we didn’t socialize after hours. Older than me, and with more consulting experience, he also served as a mentor when I needed one, even when I didn’t recognize the need.

And now, let’s observe a moment of silence for Personal Information Managers (PIMs) – an honorable category of software that’s gone to its eternal reward.

Software designers devoted more creativity to PIMs than to all other software categories combined, I think, and in this millennial year it’s all for naught. (And yes, of course the pun was intended.)

Let’s take a trip down memory lane. For those of you who can reach back that far, Sidekick, the first PIM, popularized “terminate and stay resident” (TSR) techniques for DOS. Just as significantly, Sidekick and succeeding TSRs taught PC users the value of having access to more than one program at a time. What should have been Sidekick’s longest-lasting legacy, though, is how it helped end-users with the small stuff – the thirty-second tasks that fill the day. It was Sidekick, far more than WordStar and VisiCalc, that leveraged the PC to bust the old paradigm of IS apart. After Sidekick, end-users understood that computers could help them with everything, not just those official responsibilities, related to company “core processes”, that were written into their job descriptions.

It took IS a decade to catch up. In some companies, it never did. Sadly, in others it did once but forgot.

PIMs generated more enthusiasm … and partisanship … among end-users than any other software category. I loved a product called “Tornado Notes”, which popped up tiny windows all over the screen; you could type quick notes into them and find any note instantly. It was very cool. You can’t buy anything like it anymore.

Then there’s the late, lamented Ecco Pro. Purchased and immediately discontinued by NetManage (a brilliant business strategy), it combined outlines, tabbed pages, and drag-and-drop to let you keep track of every scrap of information you ever collected and knew you’d need to find three months from now. More than any other program I’ve ever used, Ecco Pro demonstrated the validity of the concept of “personal” computing. It kept track of my information.

With Ecco Pro (among other products), if you wanted to log an appointment with someone, you dragged their contact record to the right date/time slot on its calendar. Need to note the need to call someone? Drag their record to a fresh line on the “Calls” page. Need to take notes about a call? Hit the tab key to indent the next line and start typing.

Now I have Outlook — an integrated enterprise system, not a PIM. Its interface is ghastly in comparison, but my need to integrate contacts and calendar with corporate e-mail and my PDA has forced me to abandon the defunct Ecco Pro. Random notes? Outlook’s Notes feature provides a few of the old Tornado Notes’ capabilities, but it replaces the instantaneous response I got from Tornado Notes on my old 286 with tedious delays on my Pentium 233, and what Tornado Notes did in four keystrokes takes about 37 keystrokes and mouse clicks in Outlook.

Why are there no PIMs anymore? One reason: “Personal” computing … using computers to empower individuals in everything they do … is waning. It’s waning everywhere, that is, except among the end-users who still need a personal effectiveness tool. They, in desperation, are buying PDAs by the zillion.

PDAs, of course, have their own limitations: Any application that synchronizes with the PC is limited to what the controlling PC application stores.

What’s ironic is why personal computing has fallen out of favor. Microsoft, the biggest purveyor of personal effectiveness tools on the planet, has done most of the damage by persistently designing “DLL hell” into its product line — it’s DLL hell (and inflated estimates of its impact by the big industry research firms) that’s motivated IS to depersonalize the PC.

What’s even more ironic is this: As the World Wide Web has made the idea of cross-linked, cross-indexed information commonplace, the tools we provide end-users to cross-link and cross-index their personal information have become worse, not better.

Figure that one out.