The Business-Technology Weave

January 11, 2018  1:15 PM


David Scott
Electric grid, EMP

Contrast Cars and Roads from 100 Years Ago with Today’s – Now compare poles and wires; what is our excuse for not progressing the grid in the face of EMP and terror threats?


Once upon a time, this country had no interstate highway system – that is, the interconnected freeways that allow 70+ mph vehicular travel from Maine to California with nary a red light.  A hundred years ago we had roads, but they were a patchwork of bricks, gravel, dirt, and mud.  Over time, a network of asphalt roads grew in the more developed places, but there was no coast-to-coast, north-to-south network of superior roadways such as today’s interstate highway system represents.  Our highway system wasn’t here by magic.  How did it come to be?

Well, the Eisenhower Administration mounted the plan in the 1950s to emplace it:  The Federal-Aid Highway Act of 1956, generally known as the National Interstate and Defense Highways Act, was signed into law on June 29, 1956 by President Dwight D. Eisenhower, and construction followed.

In comparing roads, cars, and electrical poles of a hundred years past with today’s, anyone can see a strange anomaly.  Have a look at this picture, circa 1920s:

One hundred years ago, many roads didn’t have edge and center lines.  Most were not paved.  Safety measures such as railroad-crossing lights and gates, traffic lights, pedestrian walkways, and signage for dangerous curves were nearly non-existent.  Cars were noisy, drafty, and bumpy.  They had virtually no safety features, such as seat belts and airbags.  They didn’t even have padded dashes – the interior of a car was harsh, unyielding metal.  Even minor traffic accidents yielded broken bones.

Look at the enormous chasm we’ve crossed in terms of cars and roads; the utility truck shows the advance in our vehicles; handily, this picture also exposes the extreme vulnerability of above-ground infrastructure:

When we examine today’s roads and cars as compared to 100 years ago, we can readily see the enormous progressions and improvements.  However, when we examine telephone poles and lines from approximately 100 years ago, and compare them with today’s, we notice something very uncomfortable:  they look strikingly the same.  With fair examination, this situation is quite astonishing… and to the informed reader, quite alarming.

The above ground infrastructure of the national electric grid, to include substations, towers, poles and wires, is quite ugly and inconvenient, when you think about it.  Weather events frequently interrupt power.  I lose power in my neighborhood when someone sneezes too hard.

Further, this above-ground infrastructure is extremely vulnerable to terror events and sabotage.  Why no grid-equivalent Federal Act for our electric power infrastructure such as what was done for the infrastructure of our highways?  In other words, why no National Interstate Grid and Defense Infrastructure Act for our power?  It is interesting that our water and sewage needs are largely underground.  How is it that your home has underground conduits for the delivery and removal of water, but above ground poles and wires servicing the delivery of power?

Beyond mere weather vulnerabilities, another one is looming – and it is an ‘elephant in the room’:  the matter of Electro-Magnetic Pulse (EMP).  The potential for catastrophic harm now extends to huge geographic areas – even a nation itself – and can arise from deliberate attack or from natural forces, such as solar flares.

The easiest means of defeating a modern country – a country that relies on a Business-Technology Weave at the highest, lowest, and broadest levels – is through an EMP attack.  An EMP attack could be something as simple as a Scud missile carrying a single nuclear warhead.  This missile need not be accurate for any specific target.  It need only be detonated at a suitable altitude:  the weapon would produce an electro-magnetic pulse that would knock out power in a region – all power – and more.

Not only would some measure of a nation’s power grid be out, but also generators and batteries would not work.  There would be no evacuation of affected areas:  Cars would not work, and all public transportation would be inoperable.  Even if trains, planes, and other mass transit were operable, the computers that enable their safe use would not be.  This would be due to the loss of all electronic data, rendering all computers useless.  There would be no banking, no stock market, no fiscal activity of any kind, and there would be no economy.

Hospitals would fail without power.  There would be no electronic communications: no mobile phones, no land phones, no e-mail, no television transmission, nor even radio.  There would be no refrigeration of food, which would quickly rot to become inconsumable.  Potable drinking water would quickly be expended, and the means to create more would not exist.  Fires would rage, since the ability to deliver and pump water would be virtually nonexistent.

No Federal Government would be able to govern – nor would any State or local government command any control over events or actions.  No police department would be able to know where events requiring response were happening.  Priorities would be non-existent.  The only actionable situations would be those in a direct line of sight.  The Military would not be able to communicate.  Hence, there would be no chain-of-command; no control.  Scattered commands and units would soon begin operating autonomously in the vacuum.

The affected society, on all levels, would be sliced and diced into small groups and factions hell-bent on survival – the situation would be almost immediate chaos.  As we saw in New Orleans and other disasters, breakdown of the social order is rapid and deadly.  In this circumstance, it would also be prolonged, and possibly permanent – until the arrival of an enemy control.  Imagine, if you will, a peak, sustained, Katrina/New Orleans disaster, coast-to-coast.

An American Perspective

A Grim Knowledge:  In America’s case, a “burnout” of large scale, created by an extensive EMP attack, would create damage to equipment that takes years to replace.  Today, there are massive transformers in our power grid that are no longer manufactured in America.  This represents a very wide divide:  the conduct of business rests on a crucial support structure that has no ready replacement in the event of failure.  These transformers can take a year to build – and they then have to be transported, delivered, and installed.

At a Senate subcommittee hearing on the threat of EMP, scientific testimony yielded this statement: “The longer the basic outage, the more problematic and uncertain the recovery of any [infrastructure system] will be.  It is possible – indeed, seemingly likely – for sufficiently severe functional outages to become mutually reinforcing [emphasis added], until a point at which the degradation… could have irreversible effects on the country’s ability to support any large fraction of its present human population.”  This should sound familiar.  This is Runaway, resulting in Unrecoverability.

Here in America we also have to recognize that a nuclear-generated EMP attack can quite easily be mounted so as to affect the entire continental United States, parts of Canada, and parts of Mexico.  An EMP attack would not kill many people outright.  However, the comprehensive wallop of systems disablement would ripple and self-reinforce, having been characterized as throwing any receiving nation back to the mid-1800s.  Not quite true:  People in the mid-1800s relied on paper for records; horses and buggies for personal transportation; operated and maintained sewage and water systems without computers; fed themselves largely without refrigeration through local production of food; had not yet built reliance and vulnerabilities on comprehensive, instantaneous communication; and were in the middle of a reasonably ordered, stable, and progressing society.

Throwing today’s America, or any industrialized country, instantly back to the mid-1800s will result in a catastrophic loss of all social order.  It will also make that country a “walk-in” for assumption of control by others.

Forget the plodding, nebulous, contested “threat” of climate change.  The threat of an EMP attack is a real risk – now – a part of where we are… NOW.

National Security:  ‘Where We Really Are’

In knowing where security stands, and where security is “stuck,” it is helpful to consider some statements from leading representatives; statements made more than ten years ago.  Congressman Roscoe Bartlett (R-MD-6), then Chairman of the House Projection Forces Subcommittee, stated on his website:

…America is vulnerable and virtually unprotected against a devastating EMP attack [emphasis added].  That’s the bad news.  The good news is that we can significantly reduce both the threat and impact of an EMP attack with relatively inexpensive steps that can be taken in the next few years [emphasis added].

The Congressman’s website did not detail a solution to the threat of EMP; rather, it noted that we must develop “insurance” against the threat, and “reduce” its impact once an attack is already occurring.  There is no suggestion of a mission or a project here.  Prevention is absent.

There are also those in government who propose guarding sensitive equipment from EMP attack by building some equipment to new “EMP proof” standards.  For example, ten years ago Senator Jon Kyl (R-AZ), then Chairman of the Senate Judiciary Subcommittee on Terrorism, Technology and Homeland Security, stated:

Fortunately, hardening key infrastructure systems and procuring vital backup equipment such as transformers is both feasible and – compared with the threat – relatively inexpensive, according to a comprehensive report on the EMP threat by a commission of prominent experts.  But it will take leadership by the Department of Homeland Security, the Defense Department, and other federal agencies, along with support from Congress, all of which have yet to materialize [emphasis added]. 

Here we may sense a “false solution,” as explained shortly.

The Best We Can Do?  These statements seem representative of the Federal government’s lagging (apathetic?) posture.  Recognize that these are the government representatives who were proactive, and leading, voices on EMP (comparatively speaking).  What can we glean from those statements?

  • The limit of hardening “key” infrastructure: Some infrastructure is left out of EMP protection.  Just as during the New Orleans disaster, there will be the perception that certain areas were left off the protection grid according to some devaluation of human life, or through a prioritization of certain regions’ protection over others.  Indeed, some areas will be left out, partly based on prioritizing others, in order to protect food stores, water, larger populations vs. smaller ones, and so on.  In any event, the difficulty will be how we set the standards for who and what get protection, and who and what do not.  Recognize that following an EMP attack, “key” priority infrastructure assets will be like unstrung beads:  some areas will have power, many won’t – and all that goes with that.
  • The threat has been characterized by people in government, as well as in science, as being now.  The “solution” could be ready in “the next few years” (assuming an immediate start, and a perfect project).  Government’s estimation of the threat versus its ‘solution’ yields a divide that is difficult to exaggerate.

Because the divides are so large, and the consequence so dire, let’s direct our focus to this: there is a tremendous inadequacy here on government’s part in the face of this threat, and our current response to it.  Given the stakes, we are already too far into a schema of Inadequacy, Disaster, Runaway, and Unrecoverability – IDRU.

Terror Attack:  Today, possibilities of comprehensive national catastrophe (to any nation) are no longer in the realm of science fiction, or held in abeyance through MAD (Mutually Assured Destruction, as during the Cold War with the former Soviet Union).  Just consider the well-known pronouncements coming from North Korea.  But we also face extremely large harm from asymmetrical sources:  sources that are weaker than their opponents in conventional terms.  They can’t compete through strength in numbers – not in membership, conventional arms, nuclear arms, or even in the numbers of their sympathizers.  Their goals can be anathema to the vast majority.

But these asymmetric forces’ business and objectives (that which they’ll do, in support of their desired outcomes, respectively) are as strong as they can possibly be.  In fact, their business trumps any concern for survival of any specific individual of their own.  And, their objectives include the stated destruction of whole societies.  We must realize too, that with these groups, an effective internal check-and-balance on unreasonable actions diminishes rapidly as the size of the considered group diminishes.

However, tremendous will – even infinite will – means nothing without some form of power.  Today, power is moving closer – closing a divide – with this tremendous will of the relative few.  Soon, if not now, weapons representing delivery of catastrophic harm will be available to the few – no matter how vile their agenda, no matter how onerous their task in procurement.  Our argument here is not the specific “who” – that is not necessary in setting the awareness.  For the present, we can emphasize a keen awareness that asymmetric attack forces are closing a divide: Until recently, the achievement of their objectives was denied because of the simple divide between their will to dispense widespread destruction, and their means to do it.

It is reasonable to assume that once closing a divide between will and means, a complete dedication to “business” will be paired with extraordinarily damaging “technology.”  One group or another will pull a trigger or push a button once closing this divide.

A Start to Part of an Overall Solution: We must recognize that one component of a strategic national plan to secure our country in the face of modern risks is to mount a Federal Infrastructure Modernization Act, which will include plans to put ALL infrastructure into underground weather/EMP-proof conduits and spaces.  This will include power stations, substations, the comprehensive grid, and even power distribution to individual buildings and homes – any structure whereby power is delivered and utilized.

It is quite possible that the basic shell components of the Project Management Framework (PMF) that was used for the Highway Act of 1956 could be updated and repurposed:  for example, cost apportionments and duties across Federal, State, and Local agencies.  The model for private and government resources has been struck, and can be repurposed.  Many of the raw construction models and components can at least provide serious clues for how to build a plan, and a subsequent project, for this.

Now Playing (NP): Traneing In: John Coltrane with the Red Garland Trio, Prestige 7123, 1958, original vinyl.

September 21, 2017  8:36 AM

The Most Powerful Suite of Applications You Can Hope to Own…

David Scott
API, Applications, apps, Data Analytics, Data integration, Real time, Web services

May already be under your roof


Organizations that find themselves struggling with data integration challenges – whether that struggle involves integrity of data, efficiency of exchange (real-time vs. latencies), budget, or anything else – may want to consider an extremely powerful product such as ShuffleLabs’ ShuffleExchange.

Most so-called “solutions” to effect the integration of apps, systems, allied process and data involve very expensive coding (development) endeavors.  These ‘back-side-of-the-screen’ efforts do work to a fair degree.  However, many disadvantages and problems are inherent:

  • These solutions are very expensive to mount and deliver.
  • They are expensive and difficult to maintain as business rules and requirements change over time.
  • New apps and their attendant requirements are difficult to integrate into the “bundle” of existing, integrated, apps.
  • Changes and their deployment to these rigid solutions take an inordinate amount of testing, time, and resources.

Consequently, a dismaying number of organizations simply cannot afford projects for integration.  Budget constrictions, meager resources for developers and time, and competing projects leave integrations parked on a wish-list for some magic fiscal window.  Often, re-keying of data or cumbersome overnight batch processes serve to “integrate” process and data.  The goal of real-time exchange – and the efficiency, accuracy, and power that come with it – is held in abeyance, often to the detriment of the organization and its members, customers, constituents, and any allied partners.  The lack of real-time standards often leads to reporting errors, misjudgments, and bad business.

There is a pressing and increasing need to integrate enterprise, mobile, custom, legacy, shelf and other apps – in any number and combination – for best business outcomes.  And, it must be performed and managed efficiently and affordably.

Let’s consider an alternative

Today, apps have ready “handles” for integration with any other(s).  These handles are their APIs, OData, and Web Services.  Specialized integration apps, by virtue of their proprietary, pre-built Connectors, can make use of these handles on the ‘back-side-of-the-screen,’ and present to any user a graphical palette (a GUI) whereby integrations are created via mouse and a few keystrokes.  Integrations and their maintenance are now a point-and-click, drag-and-drop, draw-lines type of affair that is performed on the ‘front-side.’  Now, integrations and associated maintenance costs plummet – to an average of 30% of the former figure.
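For the technically curious, here is a minimal sketch, in Python, of the kind of plumbing a pre-built Connector automates behind that GUI.  The endpoint URLs, field names, and tokens are invented for illustration – a real platform generates and manages this for you:

    # A minimal sketch of what a pre-built Connector does on the
    # 'back-side-of-the-screen.'  All URLs, fields, and tokens are
    # hypothetical; a real integration platform generates this plumbing.
    import requests

    SOURCE_API = "https://crm.example.com/api/v1/contacts"   # hypothetical source app
    TARGET_API = "https://erp.example.com/odata/Customers"   # hypothetical OData target

    def sync_new_contacts(source_token, target_token):
        """Copy newly created CRM contacts into the ERP, field by field."""
        resp = requests.get(
            SOURCE_API,
            params={"createdSince": "2018-01-01"},
            headers={"Authorization": "Bearer " + source_token},
            timeout=30,
        )
        resp.raise_for_status()
        synced = 0
        for contact in resp.json()["items"]:
            # The data-mapping step: translate source fields to target fields.
            payload = {
                "Name": contact["fullName"],
                "Email": contact["email"],
                "Region": contact.get("territory", "UNASSIGNED"),
            }
            requests.post(
                TARGET_API,
                json=payload,
                headers={"Authorization": "Bearer " + target_token},
                timeout=30,
            ).raise_for_status()
            synced += 1
        return synced

The point of a visual integration platform is precisely that no one has to write, test, and maintain this code by hand for every pair of systems.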

Further, any user with the proper training and authority can build and maintain integrations.  This negates the need for in-house or outside (vendor) developers’ expensive time.  It also frees those same developers for other projects and requirements.  Simple training, qualification, and authority put integrations into a simple schema of Design; Connect; Manage:

Design:  Utilize an intuitive visual interface to configure application integrations using easy drag-and-drop and data mapping tools.  Apply conditional and data filters to manage the data flow between applications.

Connect:  Leverage pre-built Connectors to establish integrations instantly, or build a Connector using the technology connector stack (such as for specialty or legacy apps).  Many vendors, such as the aforementioned ShuffleLabs, build connectors for free if you don’t find your preferred application(s) in their library.

Manage:  These platforms centrally manage all integration points.  The analytics module provides clear insights on data transfers.  And, notification services accurately inform about any interruptions on the endpoints.
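As a concrete rendering of the Design step, here is a hypothetical sketch of what a visual designer might store behind the scenes – a declarative field mapping plus conditional data filters.  The system names, fields, and filter syntax are invented:

    # A hypothetical rendering of the Design step's output: a declarative
    # mapping plus conditional filters.  All names are invented.
    integration = {
        "name": "CRM-to-ERP customer sync",
        "source": {"system": "CRM", "entity": "Contact"},
        "target": {"system": "ERP", "entity": "Customer"},
        "mode": "real-time",              # event-driven, not nightly batch
        "field_map": {                    # source field -> target field
            "fullName": "Name",
            "email": "Email",
            "territory": "Region",
        },
        "filters": [                      # only matching records flow across
            {"field": "status", "op": "eq", "value": "active"},
            {"field": "optIn", "op": "eq", "value": True},
        ],
    }

    def passes_filters(record, filters):
        """Apply the Design-step conditional filters to a candidate record."""
        return all(record.get(f["field"]) == f["value"]
                   for f in filters if f["op"] == "eq")

Because the mapping is data, not code, changing a business rule means editing a definition on the ‘front-side’ rather than redeploying a development effort.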

Simplify your world

We all must keep in mind that modern software applications have large data structures with complex relationships; nothing stays the same, and managing these structures and relationships will become ever more cumbersome – absent proactive progressions such as the institution of data integration apps and all associated efficiencies and cost-savings.  There will only be an increasing need for these systems to be integrated smartly and efficiently.  Fortunately, most of these systems mitigate complexities by virtue of available APIs, OData, and Web Services; these are ready for utilization by the right vendor, and can handily be utilized for extremely powerful integration purposes, “turning on” real-time data exchange capabilities and associated leverage between all relevant systems.

Traditional data transfer tools, often targeting tables directly inside complicated databases, require considerable end-user involvement and a thorough understanding of intricacies.  Beyond burdens of expertise, the use of these tools is exceptionally demanding of end-users’ time.  Products such as ShuffleExchange connect systems through APIs, OData, and Web Services, removing complicated aspects from end-users’ consideration and making configuration and implementation a comparative breeze.

Remember this adage:  In the realm of risk, unmanaged possibilities become probabilities.  Organizations face increasing data complexities and requirements; absent proactive planning, time, expense, and cost will increase exponentially.  Happily, today, you can bring affordable simplicity and ease-of-use to these endeavors, while serving the nature of these integrations, no matter how complex they may have been on the “back side of the screen.”


March 11, 2015  7:46 AM

The Business Case for Managed Infrastructure and Application Transformation

David Scott
Business transformation

~ From Present State to Future State – and ongoing Best State ~

Background: Organizations today, whether Fortune 500, non-profit, or government agency, are facing unprecedented challenges to their safety and surety by virtue of a fundamental pairing:

That of their total reliance on technical enablements…

…and the associated, escalating, vulnerabilities.


According to a recent global IT study by EMC Corporation (NYSE: EMC), data loss is up 400% from 2012. And yet, organizations state that they remain unprepared in an environment of increasing technical sophistication, escalating threats, and lagging human-readiness:

  • 71% of IT professionals are not confident that they can recover following an incident of data loss
  • 51% of organizations lack up-to-date Disaster Recovery (DR) plans
  • Only 6% of organizations have comprehensive DR plans for today’s challenges of the evolving mobile, hybrid cloud, and big data environment
  • A mere 2% of organizations feel they are data protection “leaders”; 11% are “adopters” (hopefully at a prudent juncture); and 87% “lag”
  • Organizations with 3 or more vendors lost three times as much data as those with a “single-vendor strategy”

In tandem with data loss is simple system downtime. For the user, customer, stakeholder – the person relying on data – inaccessibility of any system has the same appearance as data loss. Data’s value is just as lost whether the data has disappeared or whether data is ostracized from practical value and use by virtue of systems’ inhibition. The business of the day suffers and can even stop. Truly, Current State for many organizations today means inefficiency, pain, and even danger to the sound conduct of your business and fulfillment of your mission.

Furthering an Understanding of Need: The argument for business and infrastructure/application transformation, and arrival at Best State, is bolstered by even the so-called “successful” lurch forward. Consider the most crucial element for moving forward – that of projects. Consider these sobering stats:

  • 25% of projects are cancelled before completion
  • 49% of projects suffer budget overruns
  • 41% fail to deliver the expected business value and ROI
  • 47% have higher than expected maintenance costs
  • 62% of projects fail to meet their schedules

A Case for Business: Studies have shown that tens, hundreds, and even thousands of dollars are later expended for every proactive dollar deferred in the realm of systems and data surety. Complete business loss has been known to happen in cases of inadequate addressal of these issues. An IT adage helps to focus a necessary awareness for business today: An ounce of prevention is worth 10,000 pounds of cure.

And, understand that deferral of dollars includes not only squelched projects (we can’t afford that; we’ll do that next year, etc.); deferral also includes the squander of dollars through the prism of those dismal stats immediately above. If your “bang for buck” is a dud, your dollar is worse than deferred: it’s valueless.

Prevention of bad outcomes, and arrival at best ones, requires new scales of proactivity and agility. In order to reach the secure “destination” of total data reliability and systems surety, any organization has to know its point of “origin” – where it is today, with all gaps, lags, vulnerabilities and threats exposed. That requires a survey. Subsequently, the organization needs to 1) close divides (between dependency and security), 2) direct purpose (for improvement and closure), and 3) achieve results (for the “destination” of a totally secure environment, with applications and capacities that serve better and best business).

A Survey of Business: Improving and ensuring the wellness of data and allied systems (applications, infrastructure, broadband, monitoring, oversight, protection, prevention, recovery, etc.) requires a survey for deficiencies. This includes, by the way, the human element: human error must be kept to a minimum, and training, certification, focus, discipline and other “human” things all factor into data and systems reliability. Ongoing and regularized surveys must occur in order to achieve continuing prevention of bad outcomes (data loss, corrupted data, systems downtime, loss of organizational wellness, loss of reputation), and sustained data/systems optimization for best use. They must be undertaken on a leading, proactive basis, in order to stay ahead of threatening conditions – to expose any harming event or condition before it manifests. Surveys must also tease out weak points regarding the ability to recover in the event of lost data and/or systems.

Actionable Priorities: Surveys yield actionable priorities, and should include exposure and status of:

  • IT infrastructure
    • Locations
    • Capacities
    • Age and condition
  • Applications and systems
    • Shelf
    • Custom
    • Third-party
    • In-house
  • Data
    • Repositories
    • Availability
    • Integrity
    • Monitoring
    • Contingencies of loss; prevention
    • Backup
    • Recovery
  • Staff
    • In-house support and development
    • Vendor(s)
    • Training
    • Certifications
    • Number and apportionment
    • Duties
    • Contingencies (on-call statuses defined and understood; cross-training for absences or departures, etc.)

For the organization, today’s business poses a set of questions:

  • How do we determine our present standing regarding security of data, applications, and systems? This can be defined as the Current State.
  • Where do we need to be in the face of our understanding? This can be defined as a Future State.
  • How do we get there, and who best to help us to our destination? This translates as a measure of projects and an allied vendor, or vendors; a measure of solutions partners.

Answering these questions is the first step to an actualization of subsequent steps – in the form of a valid project – to the secure Future State.

An IT Goal – A Business Goal – Future State/Best State: A secure Future State, and in actuality a perpetually managed Best State, is achieved by mounting a comprehensive project with defined project milestones and project deliveries. The overall project for securing any business-technology environment is really a set of interdependent projects driving toward the main goal of overall business surety through solid business systems management and progressions. The overarching project will require a solid solutions partner or set of partners. However, if it is possible to engage a solutions partner that fulfills the “single vendor strategy” – that is, a partner with the robust staffing and knowledge to handle a comprehensive move to your Future and ongoing Best State, that is the ideal.

Projects: The project(s) will help you to achieve the following best-practice and Best State conditions:

  • A centralized majority of IT infrastructure in match to the majority of main business application systems
  • A migration of applications from fat-client to web-based
  • Exploration of Cloud services for new functionalities and availabilities
  • A discretionized data tier behind a firewall to guard against intrusion and breach
  • A comprehensive suite of monitoring tools to maintain uptimes, to highlight trouble areas, and to provide notifications and warnings on a proactive basis for timely resolutions
  • An update of remote locations for hardening of network connections for highest standards of connectivity
  • Identified and sanctioned recovery time objectives (RTO) and recovery point objectives (RPO) for data recoveries (see the sketch after this list)
  • An established and understood plan for regularized IT education and training to maintain knowledge and skills currency.
  • A plan-set for maintenance of organizational Best State: The One Year Plan; The Five Year Plan; The Individual Action Plan(s) – for each department, with a managed-content (CMS) feed to the overall umbrella of the master Organizational Plan (set).
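To make RTO and RPO concrete, here is a minimal sketch, in Python, of a compliance check against those two objectives. The four-hour and eight-hour limits are invented for illustration; real values are sanctioned from business requirements:

    # A minimal sketch of an RPO/RTO compliance check.
    # The objective values below are invented for illustration.
    from datetime import datetime, timedelta, timezone

    RPO = timedelta(hours=4)   # max tolerable data loss: age of last good backup
    RTO = timedelta(hours=8)   # max tolerable downtime: time to restore service

    def check_objectives(last_backup, estimated_restore):
        """Return a list of violations against the sanctioned RPO and RTO."""
        violations = []
        backup_age = datetime.now(timezone.utc) - last_backup
        if backup_age > RPO:
            violations.append("RPO breach: last backup is %s old (limit %s)" % (backup_age, RPO))
        if estimated_restore > RTO:
            violations.append("RTO breach: restore takes %s (limit %s)" % (estimated_restore, RTO))
        return violations

    # Example: a 6-hour-old backup breaches a 4-hour RPO; a 3-hour restore is fine.
    six_hours_ago = datetime.now(timezone.utc) - timedelta(hours=6)
    print(check_objectives(six_hours_ago, timedelta(hours=3)))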

Organizations that are in the “messy desk” state of discretionized plans and projects; competing plans and projects – and thus plans and projects that don’t answer to realities – need to take immediate stock for a new footing.

That begins with identification of the appropriate solutions partner – a vendor – who has a comprehensive understanding for all of this; one that can help you to mount the plans and projects to bring you from Current State… to Future State, and ongoing Best State.


April 28, 2014  10:47 AM

Internet Explorer “Bug” puts One-Fourth of Web Users at Risk

David Scott

Microsoft made a stunning announcement over the weekend:  that they are working to fix a bug – a bug present in Internet Explorer (IE) versions 6 through 11, and one that hackers are currently exploiting.

Versions 6 through 11 presently make up approximately 55% of the browser market, but the exploitation (at least for now) appears to be concentrating on versions 9 through 11 – about 26.25% of the market, according to FireEye, a cyber security software company that “caught” the bug, according to The Daily Mail.  (One has to wonder – did they “catch” it through survey, test, and identification, or did they “catch” it, like catching a cold, by being hacked and exploited?  The report we saw was ambiguous.)

For now, the hackers appear to be targeting U.S. defense and financial sector service firms – but don’t let that ameliorate any concerns:  You could be next – and it could be in the next moment.

For any readers still using XP (and an estimated 15 to 25% of the world’s PCs are still running XP), recognize that you will receive no update(s) for this bug, being that Microsoft has stopped supporting XP with upgrades and fixes.

According to FireEye spokesman Vitor De Souza, “It’s unclear what the motives of this attack group are, at this point.  It appears to be broad-spectrum intel gathering.”

In an advisory, Microsoft said that the vulnerability engendered by this bug could allow hackers to take complete control of a system.  Recognize that in these instances, data theft, data destruction, accounts creation, installation of further malicious programs, and likely anything else you can imagine are possible.

Chief Technology Officer of the cyber security firm Seculert, Aviv Raff, has said that other hacking groups are racing to learn more about the bug so that they can launch their own attacks:  “This will snowball,” Raff says.

Keep your eye on this, and as always, apply security patches and upgrades as soon as possible.  (Although our counsel regarding aesthetic and minor operational fixes still holds:  Let those changes “cook” in the market a bit, to make sure they don’t break more things than they fix/enhance).

As for those XP users?  Microsoft released a statement to Reuters, advising XP users to upgrade to Windows 7 or Windows 8.

October 22, 2013  1:01 PM

“ObamaCare”, Project Management, and Empiricism

David Scott

Much has been written about the Affordable Care Act’s rollout and status – the program colloquially known as “ObamaCare.”

Politics aside (let me emphasize that), for both IT and business professionals who are empirically grounded, I think it’s fairly evident that something is not only horribly awry at present, but also that cascading issues will yield an exponential number of problems as the current system is exercised in full – if ever, that is.

It’s not a matter of fixing discrete troubles – a couple isolated in this module here, a few in that procedure over there.  There is an undetermined, interwoven, set of malfunctioning elements.  Having professionally managed many projects directly, and having directed project teams in both Fortune 500 and Federal environments, it is easy to see the “back side of the screen” trainwreck of this endeavor from the evidence readily available on the “front side of the screen.”  And, recognize that this system tethers to quite a few others – it is not only an imbroglio unto itself:  it has the power to corrupt several other critical sites, both in the private-sector and public (government) realms.

A Couple Erroneous Terms:

Website:  First, let’s understand this – ObamaCare is not a “website,” as it is referred to in many quarters.  It is a highly complex computer program application (really a set of programs), represented by millions upon millions of lines of code – and that is supposed to be available through a simple portal:  The associated website.  In total, it is not an efficient coding endeavor, and to boot, the overage of code does not match real-world ‘business’ requirements:  that is, the business of registering people and allowing them to shop for affordable healthcare.  Characterizing this as a “website” helps to mask the gross inefficiencies and problems we’re all facing…

Glitches:  The system is not suffering “glitches.”  It has deep, entrenched, and very fundamental technical and program flaws.  Bringing in the “best and the brightest” for fixes to so-called “glitches” may actually compound the problem.  To use a hackneyed, but very useful, phrase:  9 women cannot make a baby in 1 month.  Innumerable “best and brightest” types, with their fingers simultaneously in the pie, may actually make things worse.  One has to ask:  Where were a measured number of “best and brightest” these past 3 years of effort (toward the Go-Live of this thing)?

Speaking of Go-Live – I never missed that date with any of my myriad projects these past decades.  Milestones were reality-based, paired with interim reality-based testing, which in-turn delivered results that yielded efficient fixes during the project’s course, and Go-Lives yielded actual business programs that could be used to good purpose on Day 1.  That is, fully functioning programs, software, and applications that allowed 100% support to the business at-hand.  Any “glitches” were truly minor in scope – easily fixed, and generally same-day – and we were always able to offer users work-arounds in getting business done in the meantime.  Day 1.  Period.

Consider something I spoke of in my book, I.T. Wars – IDRU:  Inadequacy, Disaster, Runaway, and Unrecoverability.  Read that chapter, and you’ll see why I believe this system will be scrapped, and started anew – much as the FBI’s VCF (Virtual Case File) System was.  In that circumstance, a post-9/11 effort to transition paper records to an electronic system of terror-tracking and management was necessary so that allied agencies could more effectively share and collaborate through utilization of necessary data:  The FBI, NSA, CIA, etc.  However, in that case, there was comparatively little political consequence in starting over.

IDRU applies here because:

Inadequacy:  Inadequate attention was given to the project’s scope, its requirements, its timeline, true expectations for delivery, and its true course and progress – along with inadequate awareness of the folly of going live with something wholly dysfunctional.  (By the way:  the first day’s screen splash of “The System is Down” was erroneous.  A system had to have been Up to be Down.  The actual status was:  “The System is Not Yet Ready”).

Disaster:  Certainly any system that delivers a 100% failure on a promised (and ballyhooed) Go-Live date is a disaster – in an IT-context certainly.  However, even the Act’s supporters are beginning to call this rollout “the greatest IT disaster in history.”

Runaway:  We may well be in a zone whereby more and more resources are poured into this thing, with ever diminishing results.  As the size of the team increases, errors and challenges in simple communication become ever-larger, more and more reports are required – with associated efforts and oversights.  Ever more programmers are stepping on other programmers’ changes, and as related, ever more meetings are required for preclusions, negotiations, and fixes to “fixes.”

Unrecoverability:  This specific project (not the Act) may indeed be unrecoverable – it may yet be trashed, and started anew.  However, it will not be positioned that way for public consumption.  The Affordable Care Act, and the related website/system, will be reported as undergoing major revisions, with the requirement for registration likely seeing a major delay… to sometime in 2014, for example.  A great analogy serves here:  If you have a pyramid of cheerleaders, and several on the bottom are in the wrong uniform, you must have everyone clamber down and stand around as a measure of cheerleaders don the correct uniform.  You cannot “fix” the existing pyramid.  You have to disassemble, and start over:  Once everyone is in the correct uniform, you can re-mount the pyramid.  In the case of the Affordable Care Act, “pulling” and fixing modules, lines of code, tables, tethers (to outside systems), databases, etc., is going to create an ever-widening circle of problems.  As problems accrue, overlap, and self-reinforce, their growing aggregate will become like a snowball rolling downhill; accruing mass, accelerating the system toward doom – a condition of true runaway, leading to unrecoverability.

Any person in the business-IT realm worth their salt knows this:  In IT, an ounce of prevention is worth 10,000 pounds of cure.

With ObamaCare, political considerations preclude the admission that the system is dysfunctional, and not likely to get better any time soon – and that it must be remounted from a fresh start.  Either way, an enormous effort is necessary:  I believe it may take a year to get a fully-functional system (regardless of anyone’s opinion as to what a functioning system may be enabling and delivering in terms of real-world, affordable, readily-available, healthcare policies).

The Affordable Care Act and its associated online enablements have a number of rollout issues, and whatever the present citizen/user experience is at the moment, there is very obvious evidence of what is wrong with the system that speaks in a special way to this readership:

This readership is comprised of people who operate on empiricals:  actual measures of things in match to real-world requirements.  We are comprised of programmers, system architects, engineers, Agile-adherents, project managers, IT managers/directors, CIOs, CTOs… that list can go on.  Readership also includes a sizeable number of non-IT, tech-savvy personnel who inhabit “business-stakeholder” expertise and standing in the enterprise-business-IT realm:  CFOs, CEOs, COOs, business owners, business directors, finance and accounting staff, and all manner of managers and allied staff.

Most of us here in this readership would call the ObamaCare rollout, and its associated web-enablement, a disaster.  It’s almost a reflection of the death of empiricism.

After all, how many of us here have delivered a business system on the Go-Live date that didn’t work?… that didn’t work at all?

August 4, 2013  12:34 PM

The Government, Metadata, and You

David Scott

A recent article, FBI Pressures Internet Providers to Install Surveillance Software, had me revisiting my thoughts regarding recent government assurances.

Various politicians, pundits, and agencies have made assurances that the government is only collecting metadata – for phone calls, internet activity, and other personal pursuits.  Let’s take phone calls as an example for an important point I’ll be making here.

Supposedly, the government only records (in a database) the following regarding your phone calls and mine:  Time of initiation for the call; duration; who initiated it; who was called – and maybe a few other collateral things.  In other words, the government isn’t listening in, or recording, or transcribing what you’re saying… discussing, etc., because they are only collecting metadata.

Ah – the golden word here:  Metadata.  What is metadata?  I like this definition:  Metadata is data about other data.  Hence those surrounding details of the calls…

Hmmm.  But I sense a real problem.  Metadata can include ANYTHING you deem to be…  metadata.

How about high value concepts?  Those are in the body of calls and records – but you darn sure can collect high value concepts and stuff those under the umbrella of metadata – and plenty of people and organizations do.  And… “high value concepts” is a fungible term.  A couple of words, or a phrase, not enough to satisfy the government’s concept – or need – for certain high value concepts?  No problem – just expand to a couple of lines… grab the whole paragraph in which certain terms appear.  Next thing you know, the whole body of the record is “high value,” and a part of the “metadata.”
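To illustrate the scope creep in code – a hypothetical sketch in Python, with invented numbers and content – note that both records below could be waved through as “just metadata”:

    # A hypothetical illustration of metadata scope creep.
    # All numbers and content are invented.
    narrow_metadata = {
        "caller": "+1-202-555-0100",
        "callee": "+1-202-555-0199",
        "start": "2013-08-04T12:01:00Z",
        "duration_sec": 312,
    }

    creeping_metadata = dict(
        narrow_metadata,
        # 'High value concepts' quietly stuffed under the metadata umbrella:
        keywords=["wire transfer", "meeting"],
        high_value_excerpt="...move the funds Friday, before the audit...",
    )

The schema is the whole game:  nothing in the word “metadata” stops the second record from existing.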

And… then… any politician can stand in front of a microphone, and state with all seeming sincerity, “Americans have no reason to fear the FBI’s (NSA’s, etc.) collection of data… we’re merely collecting metadata…”.

Pay attention.    :^ )   This will get hot.

NP:  Bad Company; Extended Versions.  Ok, I bit – thinking this was expanded versions of studio stuff.  It’s live from 2010, UK.  Took me 2 months to finally give this a listen.  It’s fine.  Boz is missing (having died), but we’ve got an ex-Heart guitarist, and one from Paul’s solo band on stage, and the other three originals (Paul, Mick and Simon).  Surprisingly fine, now that I’m listening.  However, beware some of the other entries in the Extended Versions series (I’m told).

July 6, 2013  12:16 PM

Personnel and Diminishing Returns: Watch for this condition

David Scott

Word comes to me of an organization that has completely overshot on the growth of its IT department – both in terms of numbers and expertise.

The org has no outside solutions partners to speak of.  Oh, they have service providers – you can’t get away with no broadband provider, for example.  But instead of engaging a reasonable cadre of vendors/contractors/solutions partners, they’ve hired inside expertise, adding to the permanent staff, until now they have an unwieldy department that is difficult to tune and manage.

The IT budget is in a deplorable state due to the salaries of all these people.  It’s difficult to pry dollars for training from governance.  So, former “experts” fall out of their expertise over time.

Think of it this way:  You wouldn’t hire a specialist, say a plumber, to become a permanent full-time member of your home’s monthly budget, would you?  No – you engage a plumber when you need him or her – project by project, or problem by problem, if you prefer.  The plumber provides a service – a solution – solving whatever problem you have, and then goes on to service other clients.

Solutions partners in the business-IT world are engaged on much the same basis.  It’s a much more efficient use of resources ($$$) to bring someone in on the occasioned basis, rather than riding some measure of expertise on the team as a permanent “resource.”

In tandem with escaping the burden of keeping in-house personnel trained, there is an advantage in employing qualified solutions partners in that they have no challenges in staying current (quality vendors, that is).  It is a part of their business to stay current, and forward-edge besides, so that they remain competitive and successful in serving you.  Your success is their success, and that is strong motivation.

Look around at your IT shop – large enterprises are especially vulnerable to the creep of accruing people, and keeping them, past the point of good budget and service sense.  But… I’m not trying to sweep people out of their jobs.  Rather, this warning is especially crucial to small-to-medium business (SMB).  SMBs are dynamic, frequently growing (in some cases rapidly), and you’ve got to establish the balance between a permanent in-house cadre and the prudent use of outside solutions partners:  Do that efficiently, and you’ll find it economical.

Manage this carefully – the two most important qualifiers for doing this are awareness… and vigilance.

NP:  The Lovin’ Spoonful, Daydream, original LP that I just picked up at a yard sale, near-mint.

June 30, 2013  11:53 AM

Lessons of the 8-Track Cartridge – Inefficiency: Can you recognize it?

David Scott

In past days, we’ve talked about multi-tasking and its potential to drive efficiency down, as opposed to manifesting the goal of getting more done in a fixed period of time.  Diminished attention to any particular thing while trying to serve too many things can lead to errors, requiring time-consuming do-overs.  It can also waste time through the re-acquisition of attention after interruptions.

So… something that looks good on the surface may actually be detrimental.  There’s a great example from the past:  The 8-track tape.

Today, 8-track tape cartridges are held in pretty low esteem.  Older readers will recognize the format, developed in the early ‘60s – anyone else who is unfamiliar can Google and read up on them.  8-tracks had, literally, eight discrete tracks (streams) of information on them.  The tracks were paired into Left and Right stereo channels, comprising four “programs” of music; Program 1, Program 2, etc.  Two stereo channels x four programs of music = eight tracks.

The tape inside the cartridge was an endless loop, pulling from the center of a single spool, passing over the playback head, and winding back onto the outside of the spool.  A sensing foil was at the splice – when it passed over a pair of contacts just downstream of the playback head, a circuit was completed momentarily that caused the playback head to shift down, to play Program 2 – this subsequently happened again, and again, until Program 4 played.  Most players had circuitry to recognize that Program 4 had completed, and shut off so as to leave the cartridge at the ready for the next play, from the beginning (although you could bypass this with a button for endless play).  And “beginning” could be the beginning of any of the four Programs, by virtue of a button for manual advance.
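For fun, the program-advance logic just described can be modeled as a tiny state machine.  This is a playful sketch in Python, simplified from the real electro-mechanical behavior:

    # A playful sketch of the 8-track program-advance logic, simplified
    # from the real electro-mechanical behavior.
    class EightTrackPlayer:
        PROGRAMS = 4

        def __init__(self, auto_shutoff=True):
            self.program = 1          # head position: Program 1 through 4
            self.auto_shutoff = auto_shutoff
            self.playing = True

        def splice_foil_detected(self):
            """Foil bridges the contacts: the head shifts down one program."""
            if self.program < self.PROGRAMS:
                self.program += 1
            elif self.auto_shutoff:
                self.playing = False  # stop, cued at Program 1 for next play
                self.program = 1
            else:
                self.program = 1      # shutoff bypassed: endless play

        def program_button(self):
            """Manual advance: the 8-track's crude 'random access.'"""
            self.program = self.program % self.PROGRAMS + 1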

The 8-track had the appearance of several advantages and efficiencies, only a few of which were actual:

– Unlike cassettes, there was no need to flip the tape over (this advantage was negated later by auto-reverse cassette decks – but in the early and mid-60s, this was big).

– During its reign, it was also considered superior to the cassette format:  8-tracks were mastered at 3 3/4 ips (inches per second), vs. the cassette’s 1 7/8 ips – a better content-to-tape-fidelity ratio.  Again, this advantage was temporary upon the cassette’s graduation to a high-fidelity medium toward the late ‘60s, into the ‘70s and beyond…

– There was a measure of “random access” – with open-reels and cassettes, you had to do a bit of rewinding and/or forwarding to get to music in the middle of the tape.  With 8-tracks, you could get close enough by advancing the Programs manually with a button push.

– The single spool theoretically halved the mechanical contribution to wow and flutter (the other contributors in any tape format being the motor, capstan, pinch roller…).

– Speaking of pinch rollers – the 8-track format had them inside each individual cartridge.  Therefore, no single-point-of-failure in that regard, or wear-point, by virtue of a single roller in the tape deck.  Each cartridge’s pinch roller engaged the capstan in the 8-track deck.

However, whatever “advantages” there may have seemed on the surface, the 8-track was grossly inefficient in the most important, and extreme, ways.  Consider:

– To play an entire album, the tape passed over the head four times.  Program 1 passed over the head (again) as Program 2 played; indeed Programs 3 and 4 did too.  Therefore, the tapes/cartridges had a wearout factor that was at least 2x that of cassettes and open reels (those tapes passed the head twice as each side was played).

– Maintaining proper playback head alignment was difficult, being that the head was not “fixed” – it moved to orient and play the different streams of programs on the tape.

– Early cartridges had foam pressure pads that eventually broke down and crumbled.

– Early cartridges also had pinch rollers that degenerated into sticky goo.

– With the tape pulling from the center of the spool, there was enormous wear – a special lubricant was required for the tape’s surface, which eventually wore off.  Tape wear reduced fidelity, but too, once the lube wore off, it caused jams as players “ate” the tape.

Not a great format.  Not efficient.  And in terms of investment for progressing, the format came and went fairly quickly – unlike records, which enjoyed a long run with associated amazing improvements (and which remain in the market today), or cassettes, which began as a lo-fi medium primarily for dictation and voice capture, and which matured to rival the best open-reel hi-fi realities.

So – what in your organization looks good on the surface – possibly for purpose of convenience (like the 8-track at one time), but is actually inefficient, and in danger of having a very limited shelf life?  “Solutions” that are not positioned to be supported by the future marketplace are very poor supports indeed, and you must begin to survey your environment by looking at things in a very fresh way.

Just as you can break open an 8-track cartridge, to examine how inefficient it is, you must “break open” your present organization’s environment, and start to examine the liabilities.

NP:  The Pretenders, Learning to Crawl, on 8-track.

June 30, 2013  10:22 AM

Thoughts on the Human “Machine,” Multi-tasking, and Resources

David Scott

In the discussion of multi-tasking, there was a natural discussion of resources:  Time being a very important resource; People being another.

However, someone made some potent observations, essentially saying that there is no such thing as ‘multi-tasking,’ being that people are at best capable of “serial fast-switching.”  I like that.

But that makes humans seem like a machine, in that person’s mind, and the thought was that we must stop equating humans to machines; we even have to stop treating people as “resources.”  The stated reasons include:

– Resources are something we use.

– Resources can be interchangeable with like-resources.

– Resources are generally available on-demand.

– Resources are often consumed by the process.

The question was posed:  “Are you a human resource?”  My answer is, “Yes.”

– We use people.  If you prefer, we utilize people and their associated knowledge, skills, and time (availability).

– We generally like people in IT to be, if not perfectly interchangeable, able to provide backup services if a primary person is unavailable.  Coverage and continuity are everything in IT/business.

– People are certainly available on-demand; HelpDesk, anyone?  How about a phone call from the boss:  “Sally, can you come in here for a moment?  Thanks…”.  We’re polite and respectful of people’s prior obligations and schedules, but we’re essentially available on-demand.

– People aren’t “consumed” literally (well…).  But our time is consumed, and any person’s fulfillment as a resource is based on time/availability:  That is a consumption.

So, people are a resource:  People, and their associated knowledge, manpower (person-power?), and contributions, are most definitely a collective resource.  After all, if you don’t have enough of them, in the right proportions, with the right skills and knowledge, you’re in for a hurtin’.

And, technically speaking, they make a pretty good appearance as a “machine” to the other parts of the overall IT/business machine.

June 29, 2013  6:29 PM

Multi-Tasking: Possible? Part II

David Scott

When ‘multi-tasking,’ we’re essentially giving the appearance of handling several things in any given allotment of time.  You can only really do this three ways:

– Do things sequentially (say, in the course of the hour, day, week, project, etc.)

– Do things by jumping back-and-forth (often necessary when waiting on subordinate or tangential deliveries, or answers, that feed into any specific item).

– Delegate and collect (the finished task, or its state of progress for your next level of involvement).

So – learn how to delegate and prioritize tasks, and give them the respect of focus, to avoid wasting time.

Become adept at prioritizing “on the fly” (and re-prioritizing) with accuracy – as stuff streams toward you, particularly unanticipated things, resolve or assign it quickly and accurately.  Also, is any particular thing merely “routine,” “emergent,” or an “emergency”?  That will factor into priorities, task focus, and assignments.

When interrupted with things, become adept at plugging back in to what you were doing before the interruption.  Some folks take a while to regain their center, to find the place where they left off, etc.  Others can execute about-faces with military precision and focus, almost like a drill.  Get tips and tricks from these folks.  My tip:  a pot of coffee.  Seriously – if I’m in the middle of something, making good progress, with a creative flow in hammering out some really good service/solution, and someone knocks on my doorframe – I state bluntly, “Can this wait?”

Usually, the answer is “Yes” – or there’s some grace of space in which to address it; in which case I say “Come back in an hour” (in the afternoon; tomorrow, etc).  Of course, with all due civility and respect.   :^ )

Multi-tasking?  It’s all in how you define it.

NP:  Stanley Turrentine; Stan “The Man”  Original 1960 LP.
