Tech in the 603, The Granite State Hacker

Intro to Uno Platform

Uno’s free.  Uno is open-source.  Uno could seriously be the next significant disruption in mobile development.

Apologies that I neglected to hit record on the conference call for the introductions.  We did get the bulk of the presentation recorded.

On the call:  Jerome Laban, Architect, and Francois Tanguay, CEO of nventive of Montreal, Quebec, Canada. Participants of the Windows Platform App Devs (including myself) were in the audience, asking questions.

To make up for the intro missed in the call, let me begin with the elephant in the room…

What’s “wrong” with Xamarin?

The relatively well-known Microsoft toolset called Xamarin enables developers to write a dialect of C# and XAML to target a variety of platforms, including Windows, Windows Mobile, iOS, Android, macOS, and others.

For that reason, Xamarin’s currently a top choice for mobile developers around the world. Xamarin enables developers to target billions of devices.

The problem Xamarin presents is that it has become its own distinct dialect of .NET-based development.  Xamarin has its own distinct presentation layer, Xamarin Forms, and Xamarin Forms as a skill set is not the same as a classic Windows developer skill set, nor exactly the same as a Windows 10 developer skill set.  It's a different platform, and it requires developers who understand it.

Uno Platform reduces that skill-set burden by converging on Windows 10 development as the main skill set. Developers with an appreciation for the future of Windows development will definitely appreciate Uno Platform.

The Universal Windows Platform (UWP) targets ALL flavors of Windows 10, including some unexpected ones, like Xbox One and IoT devices running Windows 10 IoT Core.

Uno bridges UWP to iOS, Android, and WebAssembly (Wasm), on top of Windows 10. That targets a huge and rapidly growing range of devices… (currently approaching 3 BILLION, and that might be a low estimate).
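To make the idea concrete, here's a minimal sketch of my own (not from the talk): you write ordinary UWP code against Windows.UI.Xaml, and Uno supplies implementations of those APIs on the other targets, so a page like this can build for the Windows, iOS, Android, and Wasm heads.

    using Windows.UI.Xaml.Controls;

    // Plain UWP code, with no Uno-specific APIs in sight. Uno provides
    // Windows.UI.Xaml implementations on iOS, Android, and Wasm, so the
    // same class compiles for every target head.
    public sealed class MainPage : Page
    {
        public MainPage()
        {
            var button = new Button { Content = "Click me" };
            button.Click += (s, e) => button.Content = "Hello from UWP, everywhere";

            Content = new StackPanel
            {
                Children =
                {
                    new TextBlock { Text = "One codebase, many platforms" },
                    button
                }
            };
        }
    }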

I’d embed the video, but Blogger’s giving me a hard time with the iframe-based embed code… please click the link below.

Link to the video:

Intro to Uno Platform Skype conference recording.

The meetup:
Granite State Windows Platform App Devs
https://www.meetup.com/Granite-State-NH-WPDev/events/251284215/

Uno Platform’s site:
http://platform.uno


What’s Hiding Behind “Low Resolution” Metrics?

(Image: a photo reduced to 100 data points.)

I’m a software application developer, but I get this.  Metrics are the photographs of business. 

While I’m at it, here’s another classic cliché for ya…  “A picture’s worth a thousand words.” 

What if your picture has been reduced to a small number of data points? 

You get something like the image on the left…  there are actually 100 data points in that image:  the resolution has been reduced to a very small number of pixels, each expressed as a block of color.  (The image it was originally reduced from is about 40,000 data points.)

Anyway, this is what metrics are to a business… data points that, when taken collectively, become the model or picture of the state of the company.

Standard GAAP accounting is supposed to provide a meaningful definition of metrics for any company, of any size, and for some purposes this may be sufficient.

Problems generally come in with the specialization of a company… the metrics it measures its own processes and performance by. 

Too many metrics, and it can’t all be taken in… like getting a close up of the whisker I missed when I shaved.  (From the “be careful of what you wish for” department.)  Thankfully that doesn’t happen very often;  it’s hard to imagine justifying the expense of that kind of metric “resolution”. 

It’s far more likely there are too few metrics. 

Imagine what it would look like if we reduced the resolution of the picture further… say to one data point.

Imagine, for example, if you only considered the price of a share of common stock in trying to get an idea of how well a company is performing.   Indeed, that’s definitely a “single pixel” view, and it really won’t tell you anything about the stock or the company attached to it.

Now take this, again, to internal processes.  Let’s imagine a bank that measures its loan officers only by their average ROI on loans. 

Ok… so that’s a silly extreme, but let’s just run with it for a moment…

Imagine trying to provide a bonus-impacting performance review of a loan officer when the only metric you had was the ROI on their loans. The average interest rate of the loan may be a valuable metric, but only when taken with other metrics. 

It won’t be long before all the loan officers are writing a few extremely short-term loans for a penny at hundreds to thousands of percent interest.  Hey, a $99.99 return on a one-cent loan is an astronomical ROI, and that average just netted someone another $10k in bonuses, right?  Again, a goofy extreme example, but you get the point.

This is a problem that’s plagued more than just a few business units… more than a few businesses, corporations, conglomerates.  Really, it’s impacted more than just a region, and even the nation.  Poor metrics beget poor metrics. In the global economy, poor metrics, taken collectively, have hidden a great number of sins that contributed significantly to the global downturn referred to as “The Great Recession”.   (Who wants to know where they’re going when they don’t like the answer, I guess, huh?)

No one, from your boss, to world governing bodies, can point the ship in the right direction without a clear picture of where we’re at.


Hedging Against The Risk of Becoming A Monopoly

First Microsoft with their late entry into the mobile market (and flubs leading up to it)… then Apple… now Facebook…  anyone notice that they kinda suck lately?  

Apple, clearly getting bored with its iPhone, is now turning its attention to its iWatch… which doesn’t make much sense to me;  I purposely gave up all other devices, including a wristwatch, in favor of a single unified mobile device.  It will take a lot to convince me to add a wristwatch back in, and I’m sure having to pay for it will be a deterrent.   (Next thing you know, they’ll add electroshock notifications, and make it so that authorities will have the ability to lock it to the wearer’s wrist and cause it to electromagnetically bind to the nearest metal object in order to detain people… (but that’s another whole story)).

I’m always toying with social media, so when I ran across a Facebook post from an entrepreneurial acquaintance recently, wondering if his content was being suppressed, I had to check it out.   As an experiment, he posted a really cute puppy, and it picked up a fair number of responses.  His concern was that his regular posts were not getting the response he’d grown accustomed to.

To add more anecdote, there was recently a post on the New York Times’ blog about similar observations, tied to tweaks Facebook has made recently.  It seems posts that are engaging or paid for are prioritized, and posts that are not quite as popular are, at best, “deprioritized”.  It seems likely that even engaging posts tied to commercial products are suppressed unless paid for.  Anyone who dabbles in trying to build an audience through Facebook must pay or make sure their content is very engaging.

I like knowing about the books friends of mine are publishing.  I like knowing about their small mom & pop shops.  These posts are getting hidden from my newsfeed.  It’s not the most engaging stuff, but it’s part of what I use Facebook for.  Having it drop off my radar makes Facebook start to suck more.  Yes, they want to make money, but I think there may be even more to it.

I digress.

But I have to ask…  with all the Big Data that companies like Apple, Intel, Microsoft, Qualcomm, Facebook, Google, and the rest have…  and rest assured, they have it, along with the analytics…  how can they really not recognize the things that are hurting their business?

Is it intentional?

If modern history has shown us anything, it’s that free markets do not tolerate monopolies.  Any time a company has taken advantage of its own strength in the market, the market has pushed back, forcing one of a number of “bad” things upon the company.  Just about every global company has seen this.  I recall hearing about the Rockefeller oil breakup, but in our time it was the Microsoft / Internet Explorer shakedown… and there have been many others.

I long suspected the reason Linux existed and was not thoroughly stomped on by the powers that be (Microsoft) was to allow Linux to be a “competitor” in the market… something that would never have a unified corporate focus that could actually unseat Microsoft.  I know that Microsoft even supported some Linux components, which anecdotally supports my theory.  I’m sure they supported it as much as they felt necessary in order to make sure Linux was a viable competitor.

When it became clear that Linux’s strength was flagging, a more corporate competitor became necessary.  It seems Apple filled that gap very nicely in the PC market for some time.

While Apple began to dominate the mobile market, Google stepped up to become a competitor there, partially because Microsoft wasn’t committed to the market space.  (It wasn’t enough of a threat to the PC market.)  Android has the same problems as Linux… too decentralized to be a lasting threat, so while Apple had its heyday and now lets itself slip in the market, Microsoft will target Google.  Eventually, I predict Apple and Microsoft will take turns with market dominance, with Google there to provide another safety net.

So back to Facebook…  It seems like Twitter has become a haven for market bots, but not much more of real use to the average person.  Facebook’s power grew to near-monopolistic levels over 2012, but I predict that Facebook will actually allow this unhappy situation to persist for entrepreneurial folks, encouraging them to explore Google+.  This leadership transference to Google+ will bolster Google+ as a competitor, enabling Facebook to remain free of the shackles of being a monopoly.  I suspect they’ll both start taking turns with market dominance, but despite the market competition, I bet both will claim better results in their marketing campaigns, thus leading to higher advertising prices on both.

The nasty part here is that the reason for preventing and sanctioning monopolies is to keep them from strong-arming their markets.  Unfortunately, what it seems like we’re getting instead is very small oligarchies taking turns as the dominant, but not quite monopolistic, force in the market.  They take advantage of each other to develop brand loyalty, which improves their profit margins and gives them near-monopolistic power among their followers, yet they maintain their monopoly-free, unsanctioned status.


The Great Commandment

While I was writing a post the other day, I noticed that I had neglected a topic that I find very important in software development. Risk management.

There are only a few guarantees in life. One of them is risk. Companies profit by seizing the opportunities that risks afford. Of course, they suffer loss by incidents of unmitigated risks. All our government and social systems are devices of risk management. In business, risk management is (now, and ever shall be) the great commandment.

Many software engineers forget that risk management is not just for PMs. In fact, software and its development is fundamentally a tool of business and, by extension, of risk management. The practice of risk management in software really extends into every expression in every line of source code.

Don’t believe me? Think of it this way… If it wasn’t a risk, it would be implemented as hardware. I’ve often heard hardware engineers say that anything that can be done in software can be done in hardware, and it will run faster. Usually, if a solution is some of the following…
· mature,
· ubiquitous,
· standard,
· well-known,
· fundamentally integral to its working environment

…it is probably low risk, particularly for change. It can likely be cost-effectively cast in stone (or silicon). (And there are plenty of examples of that… it’s what ASICs are all about.)

Software, on the other hand, is not usually so much of any of those things. Typically, it involves solutions which are…
· proprietary,
· highly customized,
· integration points,
· inconsistently deployed,
· relatively complex / error-prone,
· immature or still evolving

These are all risk indicators for change. I don’t care what IT guys say… software is much easier to change than logic gates on silicon.

I’ve dug in to this in the past, and will dig in more on this in future posts, but when I refer to the “great commandment”, this is what I mean.


If It Looks Like Crap…

It never ceases to amaze me what a difference “presentation” makes.

Pizza Hut is airing a commercial around here about their “Tuscani” menu. In the commercial, they show people doing the old “Surprise! Your coffee is Folgers Crystals!” trick in a fancy restaurant, except they’re serving Pizza Hut food in an “Olive Garden”-style venue.

It clearly shows my point, and that the point applies to anything… books, food, appliances, vehicles, and software, just to name the first few things that pop to mind. You can have the greatest product in the world… it exceeds expectations in every functional way… but any adjective that is instantly applied to the visual presentation (including the environment it’s presented in) will be applied to the content.

If it looks like crap, that’s what people will think of it.

(Of course, there are two sides to the coin… What really kills me are the times when a really polished application really IS crap… its UI is very appealing, but not thought out. It crashes at every click. But it looks BEAUTIFUL. And so people love it, at least enough to be sucked into buying it.)

Good engineers don’t go for the adage “It’s better to look good than to be good.” We know far better than that. You can’t judge the power of a car by its steering wheel. Granite countertops look great, but they’re typically hard to keep sanitary.

When it comes to application user interfaces, engineers tend to make it function great… it gives you the ability to control every nuance of the solution without allowing invalid input… but if it looks kludgy, cheap, complex, or gives hard-to-resolve error messages, you get those adjectives applied to the whole system.

So what I’m talking about, really, is a risk… and it’s a significant risk to any project. For that reason, appearance literally becomes a business risk.

For any non-trivial application, a significant risk is end-user rejection. The application can do exactly what it’s designed to do, but if it is not presented well in the UI, users will typically reject the application summarily.

That’s one thing that I was always happy about with the ISIS project. (I’ve blogged about our use of XAML and WPF tools in it before.) The project was solid, AND it presented well. Part of it was that the users loved the interface. Using Windows Presentation Foundation, it was easy to add just enough chrome to impress the customers without adding undue complexity.


SSIS: Unit Testing

I’ve spent the past couple days putting together unit tests for SSIS packages. It’s not as easy to do as it is to write unit & integration tests for, say, typical C# projects.

SSIS data flows can be really complex. Worse, you really can’t execute portions of a single data flow separately and get meaningful results.

Further, one of the key features of SSIS is the fact that the built-in data flow toolbox items can be equated to framework functionality. There’s not so much value in unit testing the framework.

Excuses come easy, but really, unit testing in SSIS is not impossible…

So meaningful unit testing of SSIS packages really comes down to testing of Executables in a control flow, and particularly executables with a high degree of programmability. The two most significant control flow executable types are Script Task executables and Data Flow executables.

Ultimately, the solution to SSIS unit testing becomes package execution automation.

There are a certain number of things you have to do before you can start writing C# to test your scripts and data flows, though. I’ll go through my experience with it, so far.

In order to automate SSIS package execution for unit testing, you must have Visual Studio 2005 (or greater) with the language of your choice installed (I chose C#).

Interestingly, while you can develop and debug SSIS in the Business Intelligence Development System (BIDS, a subset of Visual Studio), you cannot execute SSIS packages from C# without SQL Server 2005 Developer or Enterprise edition installed (“go Microsoft!”).

Another important caveat… while you CAN have your unit test project in the same solution as your SSIS project, due to overzealous design-time validation of SSIS packages, you can’t effectively execute the SSIS packages from your unit test code while the SSIS project is loaded. I’ve found that the only way I can safely run my unit tests is to “Unload Project” on the SSIS project before attempting to execute the unit test host app. Even then, Visual Studio occasionally holds locks on files that force me to close and re-open Visual Studio in order to release them.

Anyway, I chose to use a console application as the host app. There’s some info out there on the ‘net about how to configure a .config file borrowing from dtexec.exe.config, the SSIS command line utility, but I didn’t see anything special in there that I had to include.

The only reference you need to add to your project is a ref to Microsoft.SqlServer.ManagedDTS. The core namespace you’ll need is

using Microsoft.SqlServer.Dts.Runtime;
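
For orientation, the bare-bones automation looks roughly like this (a minimal sketch; the package path and test-data path here are placeholders, not from my actual project):

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    class PackageRunner
    {
        static void Main()
        {
            // Load the package from the file system and execute it in-process.
            Application dtsApp = new Application();
            Package thePackage = dtsApp.LoadPackage(@"C:\SSIS\MyPackage.dtsx", null);

            // Point the package at test input instead of the "real" source.
            thePackage.Variables["SourceFilePath"].Value = @"C:\SSIS\TestData\Dupes.txt";

            DTSExecResult result = thePackage.Execute();
            Console.WriteLine("Package result: " + result);
        }
    }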

In my first case, most of my unit testing is variations on a single input file. The package validates the input and produces three outputs: a table that contains source records which have passed validation, a flat output file that contains source records that failed validation, and a target table that contains transformed results.

What I ended up doing was creating a very small framework that allowed me to declare a test and some metadata about it. The metadata associates a group of resources that include a test input, and the three baseline outputs by a common URN. Once I have my input and baselines established, I can circumvent downloading the “real” source file, inject my test source into the process, and compare the results with my baselines.
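
The framework itself is small. The TestInfo attribute used in the example below, for instance, is just a custom attribute carrying a friendly name and the URN that ties a test method to its input and baseline resources; a simplified sketch of what I mean:

    using System;

    // Simplified sketch of the metadata attribute used to tag test methods.
    // The URN relates the test to its input resource and baseline output resources.
    [AttributeUsage(AttributeTargets.Method)]
    public class TestInfoAttribute : Attribute
    {
        private string _name;
        private string _testURN;

        public string Name
        {
            get { return _name; }
            set { _name = value; }
        }

        public string TestURN
        {
            get { return _testURN; }
            set { _testURN = value; }
        }
    }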

Here’s an example Unit test of a Validation executable within my SSIS package:

[TestInfo(Name = "Unit: Validate Source, duplicated line in source", TestURN = "Dupes")]
public void ValidationUnitDupeLineTest()
{
    using (Package thePackage = _dtsApp.LoadPackage(packageFilePath, this))
    {
        thePackage.DelayValidation = true;

        // Run only the executable under test, against an injected test input.
        DisableAllExecutables(thePackage);
        EnableValidationExecutable(thePackage);
        InjectBaselineSource(GetBaselineResource("Stage_1_Source_" + TestURN), thePackage.Variables["SourceFilePath"]);

        thePackage.Execute(null, null, this, null, null);

        string errorFilePath = thePackage.Variables["ErrorLogFilePath"].Value as string;
        //throw new AbortTestingException();

        // Compare the package's outputs against the stored baselines.
        AssertPackageExecutionResult(thePackage, DTSExecResult.Failure);
        AssertBaselineAdjustSource(TestURN);
        AssertBaselineFile(GetBaselineResourceString("Baseline_Stage1_" + TestURN), errorFilePath);
    }
}

Here’s the code that does some of the SSIS Package manipulation referenced above:


#region Utilities

protected virtual void DisableAllExecutables(Package thePackage)
{
    Sequence aContainer = thePackage.Executables["Adjustments, Stage 1"] as Sequence;
    (aContainer.Executables["Download Source From SharePoint"] as TaskHost).Disable = true;
    (aContainer.Executables["Prep Target Tables"] as TaskHost).Disable = true;
    (aContainer.Executables["Validate Source Data"] as TaskHost).Disable = true;
    (aContainer.Executables["Process Source Data"] as TaskHost).Disable = true;
    (aContainer.Executables["Source Validation Failure Sequence"] as Sequence).Disable = true;
    (aContainer.Executables["Execute Report Subscription"] as TaskHost).Disable = true;
    (thePackage.Executables["Package Success Sequence"] as Sequence).Disable = true;
    (thePackage.Executables["Package Failure Sequence"] as Sequence).Disable = true;
}


protected virtual void DisableDownloadExecutable(Package thePackage)
{
    Sequence aContainer = thePackage.Executables["Adjustments, Stage 1"] as Sequence;
    TaskHost dLScriptTask = aContainer.Executables["Download Source From SharePoint"] as TaskHost;
    dLScriptTask.Disable = true;
}


protected virtual void EnableValidationExecutable(Package thePackage)
{
    Sequence aContainer = thePackage.Executables["Adjustments, Stage 1"] as Sequence;
    TaskHost validationFlow = aContainer.Executables["Validate Source Data"] as TaskHost;
    validationFlow.Disable = false;
}

Another really handy thing to be aware of…

IDTSEvents

I highly recommend you implement this interface and pass it into your packages. Of course, in each event handler in the interface, implement code to send reasonable information to an output stream. Notice the call to thePackage.Execute, way up in the first code snippet… the class that contains that method implements that interface, so I can manipulate (when necessary) how to handle certain events.

Interestingly, I haven’t needed to do anything fancy with that so far, but I can imagine that functionality being very important in future unit tests that I write.
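
If you’d rather not stub out the entire interface by hand, one option (a sketch, not what I actually shipped) is to derive a small event sink from DefaultEvents, which provides no-op implementations of IDTSEvents, and override just the events you care about:

    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    // Sketch: DefaultEvents implements IDTSEvents with do-nothing handlers,
    // so we only override the events we want echoed to the console.
    public class ConsoleLoggingEvents : DefaultEvents
    {
        public override bool OnError(DtsObject source, int errorCode, string subComponent,
            string description, string helpFile, int helpContext, string idofInterfaceWithError)
        {
            Console.WriteLine("SSIS error {0} in {1}: {2}", errorCode, subComponent, description);
            // The return value controls whether execution is cancelled; check the
            // IDTSEvents.OnError documentation for the exact semantics.
            return false;
        }

        public override void OnWarning(DtsObject source, int warningCode, string subComponent,
            string description, string helpFile, int helpContext, string idofInterfaceWithError)
        {
            Console.WriteLine("SSIS warning {0} in {1}: {2}", warningCode, subComponent, description);
        }
    }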

Here’s a visual on all the resources… the image shows SSMS over VS, with both database tables and project resources with common URNs to relate them.

I won’t get into the details of the framework functionality, but I found it useful to be able to do things like set a flag to rebuild baseline resources from current outputs, and such.

I modeled some of my framework functionality (very loosely) on the Visual Studio Team System Edition for Testers, which we used on the TWM ISIS project.

Another interesting lesson learned: I can see that the folks who built SSIS were not avid unit testers themselves. SSIS Executables have a “Validate()” method. I encountered lots of problems when I tried to use it. Hangs, intermittent errors, all that stuff that testing should have ironed out.


Artless Programming

So maybe I am strange… I actually have printed snips of source code and UML diagrams and hung them on my office wall because I found them inspirational.

Reminds me of a quote from The Matrix movies…
Cypher [to Neo]: “I don’t even see the code. All I see is blonde, brunette, red-head.” 🙂

It’s not quite like that, but you get the point. There’s gotta be a back-story behind the witty writing. I suspect it has something to do with a programmer appreciating particularly elegant solutions.

One of the hard parts about knowing that programming is an artful craft is being forced to write artless code. It happens all the time. Risks get in the way… a risk of going over budget, blowing the schedule, adding complexity, breaking something else.

It all builds up. The reality is, as much as we software implementers really want application development to be an art, our business sponsors really want it to be a defined process.

The good news for programmers is that every application is a custom application.

It really sucks when you’re surgically injecting a single new business rule into an existing, ancient system.

This is the case with one of my current clients. At every corner, there’s a constraint limiting me. One false move, and whole subsystems could fail… I have such limited visibility into those subsystems, I won’t know until after I deploy to their QA systems and let them discover it. If I ask for more visibility, we risk scope creep. The risks pile up, force my hand, and I end up pushed into a very tightly confined implementation. The end result is awkward, at best. It’s arguably even more unmaintainable.

These are the types of projects that remind me to appreciate those snips of inspirational code.

Don’t get me wrong. I’m happy there’s a fitting solution within scope at all. I’m very happy that the client’s happy… the project’s under budget and ahead of schedule.

The “fun” in this case, has been facing the Class 5 rapids, and finding that one navigable path to a solution.

See also:
politechnosis: Art & Science


Multiprocessing: How ’bout that Free Lunch?

I remember reading an article, a few years back…

The Free Lunch Is Over: A Fundamental Turn Toward Concurrency in Software

Its tagline: “The biggest sea change in software development since the OO revolution is knocking at the door, and its name is Concurrency.”

Mr. Sutter’s article suggests that because CPUs are now forced to improve performance through multi-core architectures, applications will need to typically employ multi-threading to gain performance improvements on newer hardware. He made a great argument. I remember getting excited enough to bring up the idea to my team at the time.

There are a number of reasons why the tagline and most of its supporting arguments appear to have failed, and why, in retrospect, that could have been predicted.

So in today’s age of multi-core processing, where application performance gains necessarily come from improved hardware throughput, why does it still feel like we’re getting a free lunch?

To some extent, Herb was right. I mean, really, a lot of applications, by themselves, are not getting as much out of their host hardware as they could.

Before and since this article, I’ve written multi-threaded application code for several purposes. Each time, the threading was in UI code. The most common reason for it: to monitor extra-process activities without blocking the UI message pump. Yes, that’s right… In my experience, the most common reason for multi-threading is, essentially, to allow the UI message pump to keep pumping while waiting for… something else.
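
For what it’s worth, that pattern (hand the wait to a worker so the UI message pump keeps pumping) is about as involved as most of that threading code ever got. Here’s a hypothetical WinForms-flavored sketch using the .NET 2.0 BackgroundWorker; the five-second sleep stands in for whatever extra-process activity is being monitored:

    using System;
    using System.ComponentModel;
    using System.Windows.Forms;

    public class StatusForm : Form
    {
        private readonly Label _statusLabel = new Label();
        private readonly BackgroundWorker _worker = new BackgroundWorker();

        public StatusForm()
        {
            _statusLabel.Dock = DockStyle.Fill;
            _statusLabel.Text = "Working...";
            Controls.Add(_statusLabel);

            // The slow, extra-process wait runs on a thread-pool thread...
            _worker.DoWork += delegate(object sender, DoWorkEventArgs e)
            {
                System.Threading.Thread.Sleep(5000); // stand-in for the real wait
                e.Result = "Done.";
            };

            // ...and the completion handler is raised back on the UI thread,
            // so it's safe to touch controls here.
            _worker.RunWorkerCompleted += delegate(object sender, RunWorkerCompletedEventArgs e)
            {
                _statusLabel.Text = (string)e.Result;
            };

            _worker.RunWorkerAsync();
        }
    }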

But many applications really have experienced significant performance improvements in multi-processor / multi-core systems, and no additional application code was written, changed, or even re-compiled to make that happen.

How?

  • Reduced inter-process contention for processor time
  • Client-server architectures (even when co-hosted, due to the above)
  • Multi-threaded software frameworks
  • Improved supporting hardware frameworks

Today’s computers are typically doing more, all the time. The OS itself has a lot of overhead, especially Windows-based systems. New Vista systems rely heavily on multi-processing to get performance for the glitzy new GUI features.

The key is multi-processing, though, rather than multi-threading. Given that CPU time is a resource that must be shared, having more CPUs means less scheduling collision, less single-CPU context switching.

Many architectures are already inherently multi-process. A client-server or n-tier system is generally already running on a minimum of two separate processes. In a typical web architecture, with an enterprise-grade DBMS, not only do you have built-in “free” multi-processing, but you also have at least some built-in, “free” multi-threading.

Something else that developers don’t seem to have noticed much is that some frameworks are inherently multi-threaded. For example, the Microsoft Windows Presentation Foundation, a general GUI framework, does a lot of its rendering on separate threads. By simply building a GUI in WPF, your client application can start to take advantage of the additional CPUs, and the program author might not even be aware of it. Learning a framework like WPF isn’t exactly free, but typically, you’re not using that framework for the multi-threading features. Multi-threading, in that case, is a nice “cheap” benefit.

When it comes down to it, though, the biggest bottlenecks in hardware are not the processor, anyway. The front-side bus is the front line to the CPU, and it typically can’t keep a single CPU’s working set fresh. Give it a team of CPUs to feed, and things get pretty hopeless pretty quick. (HyperTransport and QuickPath will change this, but only to the extent of pushing the bottlenecks a little further away from the processors.)

So to re-cap, to date, the reason we haven’t seen a sea change in application software development is because we’re already leveraging multiple processors in many ways other than multi-threading. Further, multi-threading options have been largely abstracted away from application developers via mechanisms like application hosting, database management, and frameworks.

With things like HyperTransport (AMD’s baby) and QuickPath (Intel’s), will application developers really have to start worrying about intra-process concurrency?

I throw this one back to the Great Commandment… risk management. The best way to manage the risk of intra-process concurrency (threading) is to simply avoid it as much as possible. Continuing to use the above mentioned techniques, we let the 800-lb gorillas do the heavy lifting. We avoid struggling with race conditions and deadlocks.

When concurrent processing must be done, interestingly, the best way to branch off a thread is to treat it as if it were a separate process. Even the .NET Framework 2.0 has some nice threading mechanisms that make this easy. If communication needs are low, consider actually forking a new process rather than multi-threading.
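
As a quick illustration of that last point, spinning up a separate OS process from .NET takes barely more code than spinning up a thread (a sketch; the worker executable name and its argument are made up):

    using System.Diagnostics;
    using System.Threading;

    class ConcurrencyStyles
    {
        // Option 1: an in-process worker thread (shared memory; you own the locking).
        static void RunOnThread()
        {
            Thread worker = new Thread(DoWork);
            worker.Start();
            worker.Join();
        }

        // Option 2: a separate process (isolated memory; the OS owns the boundary).
        // "WorkerApp.exe" is a hypothetical stand-alone executable.
        static void RunAsProcess()
        {
            Process worker = Process.Start("WorkerApp.exe", "/job:nightly-rollup");
            worker.WaitForExit();
        }

        static void DoWork()
        {
            // ...the actual concurrent work...
        }
    }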

In conclusion, the lunch may not always be free, but a good engineer should look for it, anyway. Concurrency is, and will always be an issue, but multi-core processors were not the event that sparked that evolution.