Tech in the 603, The Granite State Hacker

Champions of Disruption

I’ve been noticing lately that truly interesting things only happen on the “edge”. Everything is energy, and everything happens at the point where energy flows are disrupted.

If you don’t believe me, just ask Mother Nature. Take solar energy. Powerful energy flows from our sun and saturates our solar system… but all the amazing things happen where that energy flow is disrupted. The Earth disrupts it, and the result, in this case, is merely life as we know it.

It’s so primal that we’ve abstracted the concept of energy flows and call it (among other things) currency. When we sell a resource (a form of energy, in a sense), we even call that conversion “liquidation”.

Sure, potential energy has value, but there are no edges in a region of potential energy. Potential energy is usually static, consistent, and only really exciting for what it could do or become, rather than what it currently is.

Likewise, it’s where disruptions occur that there’s business to be done.

According to this article on Information Week, CIOs and CTOs appear to have generally become change-averse order takers. The surveys it cites indicate that many shops are not actively engaged in strategy or business process innovation.

Perhaps they’re still feeling whipped by the whole “IT / Business Alignment” malignment. Maybe they’re afraid that business process innovation through technology innovation would come off as an attempt to drive the business. Ultimately, it seems many are going into survival mode, setting opportunity for change aside in favor of simply maintaining the business.

Maybe the real challenge for IT is to help business figure out that innovation is change, and change is where the action is.

In any case, it seems there’s a lot of potential energy building up out there.

The disruptions must come. Will you be a witness, a victim, or a champion of them?


Retail IT in the Enterprise

Lately, the projects I’ve been on have had me taking on roles outside my comfort zone. (I’m not talking about downtown Boston… with the “Boston Express” out of Nashua, I’m OK with that.)

I’ve always been most comfortable, myself, in cross-discipline engineering roles, especially in smaller teams where everyone’s got good cross-discipline experience. The communications overhead is low. The integration friction is low. Everyone knows how it needs to be done, and people are busy building rather than negotiating aggressively.

These types of tight, focused teams have always had business-focused folks who took on the role of principal consultant. In this type of situation, the principal consultant provides an insulation boundary between the technical team and the customer.

This insulation has made me comfortable in that “zone”: I’m a technologist. I eat, sleep, and dream software development. I take very seriously the ability to communicate complex technical concepts effectively and concisely with my peers.

So like I said, lately the projects I’ve been on have yanked me pretty hard out of that zone. I’ve been called on to communicate directly with my customers. I’ve been handling item-level projects, and it’s a different world. There is no insulation. I’m filling all my technical roles, plus doing light BA and even PM duty.

Somewhat recently, I emailed a solution description to a CFO. The response: “Send this again in user-level English.”

It killed me.

I’ve gotten so used to having others “protect” me from this sort of non-technical blunder. In contemporary projects, the insulating consulting roles are simply not present.

Makes me wonder about the most important lessons I learned during my school days… In high school, maybe it was retail courtesy and retail salesmanship in a technical atmosphere (“Radio Shack 101”). In college, the key lesson might have been how to courteously negotiate customers’ experience levels (“Help Desk 101”).


Semi-IT / Semi-Agile

While working on-site for a client, I noticed something interesting. On the walls of some of my client’s users’ offices, along with other more classic credentials, are certifications from Microsoft… SQL Server 2005 query language certifications.

I’ve heard a lot about the lines between IT and business blurring. We talk a fair amount about it back at HQ.

Interestingly, this case shows a clear mid-tier layer between classic IT (app development, data management, advanced reporting) and business, in the form of ad hoc SQL querying and cube analysis. In many ways, it’s simply a “power-user” layer.

The most interesting part about it is the certification, itself. The credentials that used to qualify an IT role are now being used to qualify non-IT roles.

Another trend I’m seeing is development ceremony expectations varying depending on the risk of the project. Projects that are higher risk are expected to proceed more like a waterfall ceremony. Lower risk projects proceed with more neo-“agility”.

The project I was on was apparently considered “medium” risk. The way I saw this play out was that all of the documentation of a classic waterfall methodology was expected, but the implementation was expected to develop along with the documentation.

In many ways, it was prototyping into production. Interestingly, this project required this approach: the business users simply did not have time to approach it in a full waterfall fashion. Had we been forced into a full-fledged classic waterfall methodology, we might still be waiting to begin implementation, rather than finishing UAT.


Economic Detox

While contemporary headlines bode poorly for the U.S. economy, I see them as signs of hope…

I keep hearing high-pitched alarms about the weakening U.S. dollar, inflation, energy prices, and the housing bubble burst. We all see the ugly face of these conditions.

Global trade has been a bitter (but necessary) pill for the U.S. Perhaps the Clinton-detonated U.S. economic nuclear winter (of global trade, NAFTA, etc.) is finally starting to give way to a new economic springtime in the States.

In the late-’90s U.S. market, there were a lot of excesses in the technology sector. Then the bubble burst. When the dust settled, we (the U.S. IT industry) found ourselves disenfranchised by our sponsors… corporate America beat us with our own job hopping. U.S. engineers hopped off to the coolest new startup and rode their high salaries into the dirt, while enduring companies went lean, mean, and foreign. We had become so expensive that we were sucking our own project ROIs completely out of sight. By hooking into foreign talent pools, the ROIs were visible again.

Nearly a decade later, look what’s happening around the world… Many foreign IT job markets are falling into the same salary inflation trap that the U.S. market fell into. Their prices are rising.

Combine their salary inflation with our salary stagnation and a weakening dollar, and what do you get?

A leaner, meaner domestic competitor.

In a sense, it’s like that in many sectors of the U.S. economy.

So let the U.S. dollar weaken… It means that America can go back to being product producers (rather than mindless consumers) in the global market!


Real Software Heroes

While scanning the channels looking for an interesting show to watch, I came across a show on the Science Channel… “Moon Machines”. I couldn’t have been luckier than to catch the chapter “Navigation”. (Update: Full video online here: )

I’d heard bits about the technology that went into the Apollo missions, and how some of the first “modern” IC-based computers were on board, but I’d never really thought about the implications of what they were doing. Having these computers aboard meant they had software. There were no COTS systems for navigating to the moon.

The episode focused quite a bit on the experience of the software development team, including some at the personal level. There were quotes like “Honey, I’m going to be in charge of developing something called ‘software’.”… (and the response: “Please don’t tell the neighbors.”)

I’ve felt pressure on projects before… stuff I was working on that represented millions of dollars in its success, and presumably millions lost in its failure. I’ve even worked on software projects where runtime production logic errors could send people to jail. I’ve never written software that human life directly depended on.

My hat is off to the folks who took up this monumental challenge for the first time in software history, and made it work. To me, that’s every championship sports victory… ever… combined.

All I can say is… wow.

They knew what a monumental victory it was, too… some 40 years later, the engineers they interviewed were still moved by the awe of their own accomplishment, and by the personal sacrifices they made to pull it off.

As well they should be. Fantastic!!


Null Schema

I’ve been contemplating the whole “unstructured” thing for a while now, and I’ve developed some new hypotheses about it. The discussion’s been around the fact that Web 2.0 / Enterprise 2.0 generates a lot of “unstructured” data.

I’m not sure “unstructured” is really the most technically fitting word, though. It’s the word that works if you’re a technical person talking to a non-technical person.

I think the information we’re seeing in these settings is typically better structured than what we’ve seen in the past. The structures are being defined by the provider, however, sometimes on an ad-hoc basis, and can change without notice.

If you’re in the geek domain, I think “undefined” fits better. Maybe “unknowable structure”. It’s Null Schema.

I think we’ve all seen tons of this… it’s a trend towards increasing structure with less defined schema. It seems to fit with the “agile” trend.

So the other aspect of this Web 2.0 thing is that the data doesn’t just have an unknowable format. It can also be communicated through any number of channels, at the provider’s discretion. People define conventions to ease this. Interestingly, the convened-upon channels end up providing context for the content, which in turn adds to its structure… more null schema.
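To make the idea concrete, here’s a minimal sketch of what “null schema” data handling can look like. The two feed payloads and the author-hunting heuristic are invented for illustration: each document is perfectly well structured, but the structure is defined ad hoc by the provider, so the consumer has to discover it rather than rely on a contract.

```python
import json

# Two payloads from the same hypothetical provider: each is well structured,
# but the structure is defined ad hoc by the sender and can change without
# notice -- "null schema" rather than truly "unstructured".
feed_v1 = '{"user": "jim", "post": {"title": "Null Schema", "tags": ["soa"]}}'
feed_v2 = '{"author": {"name": "jim"}, "entry": "Null Schema", "labels": "soa"}'

def find_author(doc):
    """Walk the document for anything that looks like an author,
    since no contract tells us where (or whether) it appears."""
    if isinstance(doc, dict):
        for key, value in doc.items():
            if key in ("user", "author", "name") and isinstance(value, str):
                return value
            found = find_author(value)
            if found:
                return found
    elif isinstance(doc, list):
        for item in doc:
            found = find_author(item)
            if found:
                return found
    return None

print(find_author(json.loads(feed_v1)))  # jim
print(find_author(json.loads(feed_v2)))  # jim
```

Contrast this with a versioned SOA contract, where both payloads would have been rejected as invalid rather than interpreted.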

It flies in the face of our tightly defined, versioned SOA end-point contracts. XSOA? 🙂

It’s been said that SOA lives in a different problem space, but that may only be a matter of convention, moving forward.


Enterprise Reprogramming

I found an interview with a Satyam senior VP, listed on LinkedIn, relatively interesting (link below).

This senior VP talks about how Satyam and the IT industry are responding to new challenges.

One thing that stands out to me is the statement that they are moving from services to solutions. The implication is that they are rebuilding, or reprogramming, businesses at the workflow/process level. They appear to be successfully applying technology build-out as a commodity service while implementing their solutions… It sounds like they’re treating the enterprise as a sort of programmable platform, like SOA / BPM on a grand scale.

From the article:
“A solutions provider transforms business. The difference in business will happen when we change those business processes as well. That is where we are bringing in business transformation solutions — process optimisation, process reengineering, etc. “

My intuition doesn’t quite square with Satyam’s vision.

Lots of things have been pointing towards more innovation in the top layers of applications, built on a very stable technology base. To me, it still feels like there’s an unspoken motivation for that: business leadership wants IT folks to make ruggedized app dev tools and hand them over to power users (and/or process owners). Business leaders want IT to get the C# off their business processes.

All of that is sorta where I started cooking up the hypothesis of metaware.

I’m curious to know how Satyam’s vision is really working. I guess we’ll know in a few years.

‘Moving towards IP-led revenues’

Tech in the 603, The Granite State Hacker


There’s been a fair amount of buzz in the IT world about IT-Business alignment lately. The complaint seems to be that IT produces solutions that are simply too expensive. The proposed remedies range from “Agile” methodologies to dissolving the contemporary IT group into the rest of the enterprise.

I think there’s another piece that the industry is failing to fully explore.

I think what I’ve observed is that the most expensive part of application development is actually the communications overhead. It seems to me that the number one reason for bad apps, delays, and outright project failures is firmly grounded in communications issues. Getting it “right” is always expensive. (Getting it wrong is dramatically worse.) In the current IT industry, getting it right typically means teaching analysts, technical writers, developers, QA, and help desk staff significant aspects of the problem domain, along with all the underlying technologies they need to know.

In the early days of “application development”, software based applications were most often developed by savvy business users with tools like Lotus 1-2-3. The really savvy types dug in on dBase. We all know why this didn’t work, and the ultimate response was client-server technology. Unfortunately, the client-server application development methodologies also entrenched this broad knowledge sharing requirement.

So how do you smooth out this wrinkle? I believe Business Analytics, SOA/BPM, Semantic web, portals/portlets… they’re providing hints.

There have been a few times in my career when I was asked to provide rather nebulous functionality to customers. Specifically, I can think of two early client-server projects where the users wanted to be able to query a database in uncertain terms of their problem domain. In both cases, I built application UIs that allowed the user to express query details in easy, domain-specific terms. User expressions were translated dynamically by software into SQL. All of the technical jargon was hidden away from the user. I was even able to let users save favorite queries and share them with co-workers. The apps enabled the users to look at all their information in ways that no one, not even I, had considered beforehand. They worked without giving up the advances of client-server technology, and without forcing the user into technical learning curves. Both projects were delivered on time and on budget. As importantly, they were considered great successes.
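The core of that approach can be sketched in a few lines. This is not the original projects’ code; the field map, operator map, and `orders` table are hypothetical stand-ins, just to show the shape of translating domain terms into parameterized SQL:

```python
# Hypothetical mapping from the user's domain vocabulary to physical columns.
FIELD_MAP = {
    "customer": "cust_name",
    "order total": "ord_total_amt",
    "order date": "ord_date",
}
# Hypothetical mapping from plain-English comparisons to SQL operators.
OP_MAP = {"is": "=", "is at least": ">=", "is at most": "<=",
          "is before": "<", "is after": ">"}

def to_sql(criteria):
    """Translate [(domain_field, domain_op, value), ...] into a
    parameterized SQL statement, keeping the jargon out of the UI."""
    clauses, params = [], []
    for field, op, value in criteria:
        clauses.append(f"{FIELD_MAP[field]} {OP_MAP[op]} ?")
        params.append(value)
    return "SELECT * FROM orders WHERE " + " AND ".join(clauses), params

sql, params = to_sql([("customer", "is", "Acme"),
                      ("order total", "is at least", 500)])
print(sql)     # SELECT * FROM orders WHERE cust_name = ? AND ord_total_amt >= ?
print(params)  # ['Acme', 500]
```

Saving a “favorite query” then just means persisting the criteria list, not the SQL, so the translation can evolve underneath the users.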

In more recent times, certain trends have caught my attention: the popularity of BI (especially cube analysis), and of portals/portlets. Of all the tools and technologies out there, these are the ones actively demanded by business end-users. At the same time, classic software application development seems to be in relatively reduced demand.

Pulling it all together, it seems the IT departments have tied business innovation to the rigors of client-server software application development. By doing this, all the communications overhead that goes with doing it right is implied.

It seems like we need a new abstraction on top of software… a layer that splits technology out of the problem domain, allowing business users to develop their own applications.

I’ve hijacked the word “metaware” as a way of thinking about the edge between business users as process actors (wetware) and software. Of course, it’s derived from metadata application concepts. At first, it seems hard to grasp, but the more I use it, the more it seems to make sense to me.

Here is how I approach the term…
[Figure: Application Space — the surface of IT-to-user domains across technology and business-process space, covering hardware, software, metaware, and wetware, including where these ’wares touch.]
As I’ve mentioned in the past, I think of people’s roles in business systems as “wetware“. Wikipedia has an entry for wetware that describes its use in various domains. Wetware is great at problem solving.

Why don’t we implement all solutions using wetware?

It’s not fast, reliable, or consistent enough for modern business needs. Frankly, wetware doesn’t scale well.

Hardware, of course, is easy to grasp… it’s the physical machine. It tends to be responsible for physical storage and high-speed signal transmission, as well as providing the calculation iron and general processing brains for the software. It’s lightning fast and extremely reliable. Hardware is perfect in the physical world… if you intend to produce physical products, you need hardware. Hardware applications extend all the way out to wetware, typically in the form of human interfaces. (The term HID tends to neglect output devices such as displays. I think that’s an oversight… just because monitors don’t connect to USB ports doesn’t mean they’re not human interface devices.)

Why do we not use hardware to implement all solutions?

Because hardware is very expensive to manipulate, and it takes lots of specialized tools and engineering know-how to implement relatively small details. Turnaround time on changes makes it impractical, from a risk-management perspective, for general-purpose business application development.

Software in the contemporary sense is also easy to grasp. It is most often thought to provide a layer on top of a general purpose hardware platform to integrate hardware and create applications with semantics in a particular domain. Software is also used to smooth out differences between hardware components and even other software components. It even smooths over differences in wetware by making localization, configuration, and personalization easier. Software is the concept that enabled the modern computing era.

When is software the wrong choice for an application?

Application software becomes a problem when it doesn’t respect separation of concerns between integration points. The most critical “integration point” abstraction that gets flubbed is the one between business process and the underlying technology. Typically, general-purpose application development tools are still too technical for user-domain developers, so quite a bit of communications overhead is required even for small changes. This communications overhead becomes expensive, and is complicated by generally difficult deployment issues. While significant efforts have been made to reduce that overhead, they tend to attempt to eliminate artifacts that are necessary for the continued maintenance and development of the system.

Enter metaware. Metaware is similar in nature to software. It runs entirely on a software-supported client-server platform. Most software engineers would think of it as process owners’ expressions interpreted dynamically by software. It’s the culmination of SOA/BPM… for example, BPMN (Business Process Model and Notation) rendered as a business application by an enterprise software system.
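A toy sketch may help pin the idea down. Everything here is invented for illustration (the process definition, field names, and routing rule are hypothetical), but it shows the essential split: the process owner’s expression is data, and a generic software engine interprets it at runtime, so changing the process means editing the definition, not redeploying code.

```python
# The process owner's expression: pure data plus a routing rule, no app code.
expense_process = {
    "name": "Expense Approval",
    "steps": [
        {"ask": "amount", "type": float},
        {"route": lambda fields: "manager" if fields["amount"] < 1000 else "cfo"},
    ],
}

def run_process(definition, answers):
    """Generic "metaware" engine: walks the owner-defined steps,
    collecting typed fields and evaluating routing decisions."""
    fields, route = {}, None
    for step in definition["steps"]:
        if "ask" in step:
            name = step["ask"]
            fields[name] = step["type"](answers[name])  # coerce raw input
        elif "route" in step:
            route = step["route"](fields)
    return fields, route

fields, route = run_process(expense_process, {"amount": "2500"})
print(route)  # cfo
```

The engine is the IT deliverable; the `expense_process` definition is the business user’s artifact, which is the separation of concerns the rest of this post argues for.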

While some might dismiss the idea of metaware as buzz, it suggests important changes to the way IT departments might write software. Respecting the metaware layer will affect the way I design software in the future. Further, respecting metaware concepts suggests important changes in the relationship between IT and the rest of the enterprise.

Ultimately it cuts costs in application development by restoring separation of concerns… IT focuses on building and managing technology to enable business users to express their artifacts in a technologically safe framework. Business users can then get back to innovating without IT in their hair.