…May already be under your roof
Organizations struggling with data integration challenges – whether the struggle involves data integrity, efficiency of exchange (real-time vs. latency), budget, or anything else – may want to consider a powerful product such as ShuffleLabs’ ShuffleExchange.
Most so-called “solutions” for integrating apps, systems, and allied processes and data involve very expensive coding (development) endeavors. These ‘back-side-of-the-screen’ efforts do work to a fair degree. However, they carry inherent disadvantages and problems:
- These solutions are very expensive to mount and deliver.
- They are expensive and difficult to maintain as business rules and requirements change over time.
- New apps and their attendant requirements are difficult to integrate to the “bundle” of existing, integrated, apps.
- Changes and their deployment to these rigid solutions take an inordinate amount of testing, time, and resources.
Consequently, a dismaying number of organizations simply cannot afford integration projects. Budget constraints, meager developer resources and time, and competing projects leave integrations parked on a wish-list for some magic fiscal window. Often, re-keying of data or cumbersome overnight batch processes serve to “integrate” process and data. The goal of real-time exchange – and its efficiency, accuracy, and power – is held in abeyance, often to the detriment of the organization and its members, customers, constituents, and allied partners. The lack of real-time standards often leads to reporting errors, misjudgments, and bad business.
There is a pressing and increasing need to integrate enterprise, mobile, custom, legacy, shelf and other apps – in any number and combination – for best business outcomes. And, it must be performed and managed efficiently and affordably.
Let’s consider an alternative
Today, apps have ready “handles” for integration with any others. These handles are their APIs, OData feeds, and Web Services. Specialized integration apps, by virtue of their proprietary, pre-built Connectors, can make use of these handles on the ‘back-side-of-the-screen,’ and present to any user a graphical palette (a GUI) whereby integrations are created with a mouse and a few keystrokes. Integrations and their maintenance become a point-and-click, drag-and-drop, draw-lines affair performed on the ‘front-side.’ Integration and maintenance costs plummet – to roughly 30% of their former levels, on average.
Further, any user with the proper training and authority can build and maintain integrations. This eliminates the need for expensive in-house or outside (vendor) developer time, and frees those same developers for other projects and requirements. Simple training, qualification, and authority put integrations into a simple schema of Design; Connect; Manage:
Design: Utilize an intuitive visual interface to configure application integrations using easy drag-and-drop and data mapping tools. Apply conditional and data filters to manage the data flow between applications.
Connect: Leverage pre-built Connectors to establish integrations instantly, or build a Connector using the technology connector stack (such as for specialty or legacy apps). Many vendors, such as the aforementioned ShuffleLabs, build connectors for free if you don’t find your preferred application(s) in their library.
Manage: These platforms centrally manage all integration points. The analytics module provides clear insights on data transfers. And, notification services accurately inform about any interruptions on the endpoints.
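To make the Design/Connect/Manage schema concrete, here is a minimal sketch of what a point-to-point integration amounts to. Note this is illustrative only: platforms like ShuffleExchange are configured through a GUI, not code, and the names below (`Integration`, `field_map`, `condition`) are hypothetical, invented to show the shape of a typical mapped-and-filtered data flow.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Integration:
    source: str                          # pre-built Connector name (Connect)
    target: str
    field_map: dict                      # drag-and-drop data mapping (Design)
    condition: Callable[[dict], bool]    # conditional/data filter (Design)

    def run(self, records):
        """Push mapped records that pass the filter (a Manage layer would log this)."""
        moved = []
        for rec in records:
            if self.condition(rec):
                moved.append({dst: rec[src] for src, dst in self.field_map.items()})
        return moved

# Hypothetical example: sync only active CRM contacts into an ERP system.
sync = Integration(
    source="CRM",
    target="ERP",
    field_map={"full_name": "contact_name", "email": "contact_email"},
    condition=lambda r: r.get("status") == "active",
)

records = [
    {"full_name": "Ada Lovelace", "email": "ada@example.com", "status": "active"},
    {"full_name": "Old Lead", "email": "old@example.com", "status": "inactive"},
]
print(sync.run(records))  # only the active contact is mapped and moved
```

The point of the sketch is how little of it is integration "plumbing": the mapping and the filter are the whole configuration, which is exactly what a GUI palette can expose to a trained non-developer.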
Simplify your world
We all must keep in mind that modern software applications have large data structures with complex relationships. Nothing stays the same, and managing these structures and relationships will become ever more cumbersome absent proactive steps such as adopting data integration apps and their associated efficiencies and cost-savings. The need to integrate these systems smartly and efficiently will only grow. Fortunately, most of these systems mitigate complexity through their available APIs, OData feeds, and Web Services; these are ready for use by the right vendor for extremely powerful integration purposes, “turning on” real-time data exchange and the associated leverage between all relevant systems.
Traditional data transfer tools, which often target tables directly inside complicated databases, require considerable end-user involvement and a thorough understanding of intricacies. Beyond the burden of expertise, these tools are exceptionally demanding of end-users’ time. Products such as ShuffleExchange connect systems through APIs, OData, and Web Services, removing the complicated aspects from end-users’ consideration and making configuration and implementation a comparative breeze.
Remember this adage: In the realm of risk, unmanaged possibilities become probabilities. Organizations face increasing data complexities and requirements; absent pro-active planning – time, expense and cost will increase exponentially. Happily, today, you can bring affordable simplicity and ease-of-use to these endeavors, while serving the nature of these integrations, no matter how complex they may have been on the “back side of the screen.”
~ From Present State to Future State – and ongoing Best State ~
Background: Organizations today, whether Fortune 500, non-profit, or government agency, are facing unprecedented challenges to their safety and surety by virtue of a fundamental pairing:
That of their total reliance on technical enablements…
…and the associated, escalating, vulnerabilities.
According to a recent global IT study by EMC Corporation (NYSE: EMC), data loss is up 400% from 2012. And yet, organizations state that they remain unprepared in an environment of increasing technical sophistication, escalating threats, and lagging human-readiness:
- 71% of IT professionals are not confident that they can recover following an incident of data loss
- 51% of organizations lack up-to-date Disaster Recovery (DR) plans
- Only 6% of organizations have comprehensive DR plans for today’s challenges of the evolving mobile, hybrid cloud, and big data environment
- A mere 2% of organizations feel they are data protection “leaders”; 11% are “adopters” (hopefully at a prudent juncture); and 87% “lag”
- Organizations with 3 or more vendors lost three times as much data as those with a “single-vendor strategy”
In tandem with data loss is simple system downtime. For the user, customer, or stakeholder – the person relying on data – inaccessibility of any system has the same appearance as data loss. Data’s value is just as lost whether the data has disappeared or whether it is cut off from practical use by systems’ inhibition. The business of the day suffers and can even stop. Truly, Current State for many organizations today means inefficiency, pain, and even danger to the sound conduct of business and fulfillment of mission.
Furthering an Understanding of Need: The argument for business and infrastructure/application transformation, and arrival at Best State, is bolstered by even the so-called “successful” lurch forward. Consider the most crucial element for moving forward – that of projects. Consider these sobering stats:
- 25% of projects are cancelled before completion
- 49% of projects suffer budget overruns
- 41% fail to deliver the expected business value and ROI
- 47% have higher than expected maintenance costs
- 62% of projects fail to meet their schedules
A Case for Business: Studies have shown that tens, hundreds, and even thousands of dollars are expended in remediation for every proactive dollar deferred in the realm of systems and data surety. Complete business loss has been known to happen where these issues go inadequately addressed. An IT adage helps to focus a necessary awareness for business today: An ounce of prevention is worth 10,000 pounds of cure.
And, understand that deferral of dollars includes not only squelched projects (“we can’t afford that”; “we’ll do that next year,” etc.); it also includes the squander of dollars wasted through the prism of those dismal stats immediately above. If your “bang for buck” is a dud, your dollar is worse than deferred: it’s valueless.
Prevention of bad outcomes, and arrival at best ones, requires new scales of proactivity and agility. In order to reach the secure “destination” of total data reliability and systems surety, any organization has to know its point of “origin” – where it is today, with all gaps, lags, vulnerabilities and threats exposed. That requires a survey. Subsequently, the organization needs to 1) close divides (between dependency and security), 2) direct purpose (for improvement and closure), and 3) achieve results (for the “destination” of a totally secure environment, with applications and capacities that serve better and best business).
A Survey of Business: Improving and ensuring the wellness of data and allied systems (applications, infrastructure, broadband, monitoring, oversight, protection, prevention, recovery, etc.) requires a survey for deficiencies. This includes, by the way, the human element: Human error must be kept to a minimum, and training, certification, focus, discipline and other “human” things all factor into data and systems reliability. Ongoing, regularized surveys must occur in order to achieve continuing prevention of bad outcomes (data loss, corrupted data, systems downtime, loss of organizational wellness, loss of reputation) and sustained data/systems optimization for best use. These must be undertaken on a leading, proactive basis, in order to stay ahead of threatening conditions and to expose any harming event or condition before it manifests. Surveys must also tease out weak points regarding the ability to recover in the event of lost data and/or systems.
Actionable Priorities: Surveys yield actionable priorities, and should include exposure and status of:
- IT infrastructure
- Age and condition
- Applications and systems
- Contingencies of loss; prevention
- In-house support and development
- Number and apportionment
- Contingencies (on-call statuses defined and understood; cross-training for absences or departures, etc.)
For the organization, today’s business poses a set of questions:
- How do we determine our present standing regarding security of data, applications, and systems? This can be defined as the Current State.
- Where do we need to be in the face of our understanding? This can be defined as a Future State.
- How do we get there, and who best to help us to our destination? This translates as a measure of projects and an allied vendor, or vendors; a measure of solutions partners.
Answering these questions is the first step to an actualization of subsequent steps – in the form of a valid project – to the secure Future State.
An IT Goal – A Business Goal – Future State/Best State: A secure Future State, and in actuality a perpetually managed Best State, is achieved by mounting a comprehensive project with defined project milestones and project deliveries. The overall project for securing any business-technology environment is really a set of interdependent projects driving toward the main goal of overall business surety through solid business systems management and progressions. The overarching project will require a solid solutions partner or set of partners. However, if it is possible to engage a solutions partner that fulfills the “single vendor strategy” – that is, a partner with the robust staffing and knowledge to handle a comprehensive move to your Future and ongoing Best State, that is the ideal.
Projects: The project(s) will help you to achieve the following best-practice and Best State conditions:
- A centralized majority of IT infrastructure aligned with the majority of main business application systems
- A migration of applications from fat-client to web-based
- Exploration of Cloud services for new functionalities and availabilities
- A segregated data tier behind a firewall to guard against intrusion and breach
- A comprehensive suite of monitoring tools to maintain uptimes, to highlight trouble areas, and to provide notifications and warnings on a proactive basis for timely resolutions
- An update of remote locations for hardening of network connections for highest standards of connectivity
- Identified and sanctioned recovery time objectives (RTO) and recovery point objectives (RPO) for data recoveries
- An established and understood plan for regularized IT education and training to maintain knowledge and skills currency.
- A plan-set for maintenance of organizational Best State: The One Year Plan; The Five Year Plan; The Individual Action Plan(s) – for each department, with a managed-content (CMS) feed to the overall umbrella of the master Organizational Plan (set).
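RTO and RPO targets in the list above are only useful if they are actually monitored. As a minimal sketch, assuming a simple in-house check (the thresholds and timestamps below are hypothetical, not from any vendor tool):

```python
from datetime import datetime, timedelta

# Hypothetical targets an organization might sanction:
RPO = timedelta(hours=4)   # max tolerable data loss: age of newest good backup
RTO = timedelta(hours=2)   # max tolerable time to restore service after an outage

def rpo_breached(last_backup: datetime, now: datetime) -> bool:
    """True if the newest backup is older than the recovery point objective."""
    return (now - last_backup) > RPO

now = datetime(2014, 1, 1, 12, 0)
print(rpo_breached(datetime(2014, 1, 1, 9, 30), now))  # 2.5-hour-old backup: within RPO
print(rpo_breached(datetime(2014, 1, 1, 6, 0), now))   # 6-hour-old backup: breach
```

A monitoring suite of the kind described above would run a check like this continuously and raise a proactive notification on the breach, rather than discovering the stale backup during a recovery.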
Organizations that are in the “messy desk” state of scattered, competing plans and projects – and thus plans and projects that don’t answer to realities – need to take immediate stock for a new footing.
That begins with identification of the appropriate solutions partner – a vendor – who has a comprehensive understanding for all of this; one that can help you to mount the plans and projects to bring you from Current State… to Future State, and ongoing Best State.
Microsoft made a stunning announcement over the weekend: That they are working to fix a bug – a bug that is present in Internet Explorer (IE) versions 6 through 11, and one that hackers are currently exploiting.
Versions 6 through 11 presently make up approximately 55% of the browser market, but the exploitation (at least for now) appears to concentrate on versions 9 through 11 – about 26.25% of the market, according to FireEye, the cyber security software company that “caught” the bug, as reported by The Daily Mail. (One has to wonder – did they “catch” it through survey, test, and identification, or did they “catch” it, like catching a cold, by being hacked and exploited? The report we saw was ambiguous.)
For now, the hackers appear to be targeting U.S. defense and financial sector service firms – but don’t let that ameliorate any concerns: You could be next – and it could be in the next moment.
For any readers still using XP (and an estimated 15 to 25% of the world’s PCs are still running XP), recognize that you will receive no update(s) for this bug, being that Microsoft has stopped supporting XP with upgrades and fixes.
According to FireEye spokesman Vitor De Souza, “It’s unclear what the motives of this attack group are, at this point. It appears to be broad-spectrum intel gathering.”
In an advisory, Microsoft said that the vulnerability engendered by this bug could allow hackers to take complete control of a system. Recognize that in these instances, data theft, data destruction, accounts creation, installation of further malicious programs, and likely anything else you can imagine are possible.
Chief Technology Officer of the cyber security firm Seculert, Aviv Raff, has said that other hacking groups are racing to learn more about the bug so that they can launch their own attacks: “This will snowball,” Raff says.
Keep your eye on this, and as always, apply security patches and upgrades as soon as possible. (Although our counsel regarding aesthetic and minor operational fixes still holds: Let those changes “cook” in the market a bit, to make sure they don’t break more things than they fix/enhance).
As for those XP users? Microsoft released a statement to Reuters, advising XP users to upgrade to Windows 7 or Windows 8.
Politics aside (let me emphasize that), for both IT and business professionals who are empirically grounded, I think it’s fairly evident not only that something is horribly awry at present, but also that cascading issues will yield an exponential number of problems as the current system is exercised in full – if ever, that is.
It’s not a matter of fixing discrete troubles – a couple isolated in this module here, a few in that procedure over there. There is an undetermined, interwoven set of malfunctioning elements. Having professionally managed many projects directly, and having directed project teams in both Fortune 500 and Federal environments, I find it easy to see the “back side of the screen” trainwreck of this endeavor from the evidence readily available on the “front side of the screen.” And, recognize that this system tethers to quite a few others – Healthcare.gov is not only an imbroglio to itself: It has the power to corrupt several other critical sites, in both the private-sector and public (government) realms.
A Couple Erroneous Terms:
Website: First, let’s understand this – ObamaCare is not a “website,” as it is referred to in many quarters. It is a highly complex computer program application (really a set of programs), represented by millions upon millions of lines of code, that is supposed to be available through a simple portal: the associated website. In total, it is not an efficient coding endeavor, and to boot, the sheer volume of code does not match real-world ‘business’ requirements: that is, the business of registering people and allowing them to shop for affordable healthcare. Characterizing this as a “website” helps to mask the gross inefficiencies and problems we’re all facing…
Glitches: The system is not suffering “glitches.” It has deep, entrenched, and very fundamental technical and program flaws. Bringing in the “best and the brightest” for fixes to so-called “glitches” may actually compound the problem. To use a hackneyed, but very useful, phrase: 9 women cannot make a baby in 1 month. Innumerable “best and brightest” types, with their fingers simultaneously in the pie, may actually make things worse. One has to ask: Where were a measured number of “best and brightest” these past 3 years of effort (toward the Go-Live of this thing)?
Speaking of Go-Live – I never missed that date with any of my myriad projects these past decades. Milestones were reality-based, paired with interim reality-based testing, which in-turn delivered results that yielded efficient fixes during the project’s course, and Go-Lives yielded actual business programs that could be used to good purpose on Day 1. That is, fully functioning programs, software, and applications that allowed 100% support to the business at-hand. Any “glitches” were truly minor in scope – easily fixed, and generally same-day – and we were always able to offer users work-arounds in getting business done in the meantime. Day 1. Period.
Consider something I spoke of in my book, I.T. Wars – IDRU: Inadequacy, Disaster, Runaway, and Unrecoverability. Read that chapter, and you’ll see why I believe this system will be scrapped, and started anew – much as the FBI’s VCF (Virtual Case File) System was. In that circumstance, a post-9/11 effort to transition paper records to an electronic system of terror-tracking and management was necessary so that allied agencies could more effectively share and collaborate through utilization of necessary data: The FBI, NSA, CIA, etc. However, in that case, there was comparatively little political consequence in starting over.
IDRU applies here because:
Inadequacy: Inadequate attention was given to the project’s scope, its requirements, its timeline, true expectations for delivery, and its true course and progress – along with inadequate awareness of the folly of going live with something wholly dysfunctional. (By the way: The first day’s screen splash of “The System is Down” was erroneous. A system had to have been Up to be Down. The actual status was: “The System is Not Yet Ready.”)
Disaster: Certainly any system that delivers a 100% failure on a promised (and ballyhooed) Go-Live date is a disaster – in an IT-context certainly. However, even the Act’s supporters are beginning to call this rollout “the greatest IT disaster in history.”
Runaway: We may well be in a zone whereby more and more resources are poured into this thing, with ever diminishing results. As the size of the team increases, errors and challenges in simple communication become ever-larger, more and more reports are required – with associated efforts and oversights. Ever more programmers are stepping on other programmers’ changes, and as related, ever more meetings are required for preclusions, negotiations, and fixes to “fixes.”
Unrecoverability: This specific project (not the Act) may indeed be unrecoverable – it may yet be trashed and started anew. However, it will not be positioned that way for public consumption. The Affordable Care Act, and the related website/system, will be reported as undergoing major revisions, with the requirement for registration likely seeing a major delay… to sometime in 2014, for example. A great analogy serves here: If you have a pyramid of cheerleaders, and several on the bottom are in the wrong uniform, you must have everyone clamber down and stand around while those cheerleaders don the correct uniform. You cannot “fix” the existing pyramid. You have to disassemble and start over: Once everyone is in the correct uniform, you can re-mount the pyramid. In the case of the Affordable Care Act, “pulling” and fixing modules, lines of code, tables, tethers (to outside systems), databases, etc., is going to create an ever-widening circle of problems. As problems accrue, overlap, and self-reinforce, their growing aggregate will become like a snowball rolling downhill, accruing mass and accelerating the system toward doom – a condition of true runaway, leading to unrecoverability.
Any person in the business-IT realm worth their salt knows this: In IT, an ounce of prevention is worth 10,000 pounds of cure.
With ObamaCare, political considerations preclude the admission that the system is dysfunctional, and not likely to get better any time soon – and that it must be remounted from a fresh start. Either way, an enormous effort is necessary: I believe it may take a year to get a fully-functional system (regardless of anyone’s opinion as to what a functioning system may be enabling and delivering in terms of real-world, affordable, readily-available, healthcare policies).
The Affordable Care Act and its associated online enablements have a number of rollout issues, and whatever the present citizen/user experience is at the moment, there is very obvious evidence of what is wrong with the system that speaks in a special way to this readership:
This readership is comprised of people who operate on empiricals: actual measures of things in match to real-world requirements. We are programmers, system architects, engineers, Agile-adherents, project managers, IT managers/directors, CIOs, CTOs… that list can go on. Readership also includes a sizeable number of non-IT, tech-savvy personnel who hold “business-stakeholder” expertise and standing in the enterprise-business-IT realm: CFOs, CEOs, COOs, business owners, business directors, finance and accounting staff, and all manner of managers and allied staff.
Most of us here in this readership would call the ObamaCare rollout, and its associated web-enablement HealthCare.gov, a disaster. It’s almost a reflection of the death of empiricism.
After all, how many of us here have delivered a business system on the Go-Live date that didn’t work?… that didn’t work at all?
A recent article, FBI Pressures Internet Providers to Install Surveillance Software, had me revisiting my thoughts regarding recent government assurances.
Various politicians, pundits, and agencies have made assurances that the government is only collecting metadata – for phone calls, internet activity, and other personal pursuits. Let’s take phone calls as an example for an important point I’ll be making here.
Supposedly, the government only records (in a database) the following regarding your phone calls and mine: Time of initiation for the call; duration; who initiated it; who was called – and maybe a few other collateral things. In other words, the government isn’t listening in, or recording, or transcribing what you’re saying… discussing, etc., because they are only collecting metadata.
Ah – the golden word here: Metadata. What is metadata? I like this definition: Metadata is data about other data. Hence those surrounding details of the calls…
Hmmm. But I sense a real problem. Metadata can include ANYTHING you deem to be… metadata.
How about high value concepts? Those are in the body of calls and records – but you darn sure can collect high value concepts and stuff those under the umbrella of metadata – and plenty of people and organizations do. And… “high value concepts” is a fungible term. A couple words, or a phrase, not enough to satisfy the government’s concept – or need – for certain high value concepts? No problem – just expand to a couple lines… grab the whole paragraph that certain terms appear in. Next thing you know, the whole body of the record is “high value,” and a part of the “metadata.”
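The scope-creep described above is easy to show concretely. A minimal sketch, with entirely hypothetical field names and values, of how a "metadata-only" record can quietly swell once "high value concepts" are folded under the metadata umbrella:

```python
# Hypothetical call record: the narrow reading of "metadata" - only the
# details surrounding the call, nothing from its content.
narrow_metadata = {
    "initiated_at": "2013-08-02T14:03:11Z",
    "duration_sec": 412,
    "caller": "+1-555-0100",
    "callee": "+1-555-0199",
}

# Same record after "high value concepts" are classified as metadata:
# first a phrase, then the lines around it, then whole passages of content.
expanded_metadata = dict(narrow_metadata)
expanded_metadata["high_value_concepts"] = [
    "a flagged phrase from the call body",
    "the sentences surrounding that phrase",
    "eventually, the full transcribed passage",
]

# The schema still says "metadata" - but the delta is actual content.
print(set(expanded_metadata) - set(narrow_metadata))
```

The record's label never changes; only its contents do, which is precisely why "we only collect metadata" is an assurance about a word, not about data.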
And… then… any politician can stand in front of a microphone, and state with all seeming sincerity, “Americans have no reason to fear the FBI’s (NSA’s, etc.) collection of data… we’re merely collecting metadata…”.
Pay attention. :^ ) This will get hot.
NP: Bad Company; Extended Versions. Ok, I bit – thinking this was expanded versions of studio stuff. It’s live from 2010, UK. Took me 2 months to finally give this a listen. It’s fine. Boz is missing (having died), but we’ve got an ex-Heart guitarist, and one from Paul’s solo band on stage, and the other three originals (Paul, Mick and Simon). Surprisingly fine, now that I’m listening. However, beware some of the other entries in the Extended Versions series (I’m told).
Word comes to me of an organization that has completely overshot on the growth of its IT department – both in terms of numbers and expertise.
The org has no outside solutions partners to speak of. Oh, they have service providers – you can’t get away with no broadband provider, for example. But instead of engaging a reasonable cadre of vendors/contractors/solutions partners, they’ve hired inside expertise, adding to the permanent staff, until now they have an unwieldy department that is difficult to tune and manage.
The IT budget is in a deplorable state due to the salaries of all these people. It’s difficult to pry dollars for training from governance. So, former “experts” fall out of their expertise over time.
Think of it this way: You wouldn’t hire a specialist, say a plumber, to become a permanent full-time member of your home’s monthly budget, would you? No – you engage a plumber when you need him or her – project by project, or problem by problem, if you prefer. The plumber provides a service – a solution –solving whatever problem you have, and then goes on to service other clients.
Solutions partners in the business-IT world are engaged on much the same basis. It’s a much more efficient use of resources ($$$) to bring someone in on an occasioned basis, rather than carrying some measure of expertise on the team as a permanent “resource.”
Beyond relieving the burden of keeping in-house personnel trained, qualified solutions partners bring the advantage of having no challenge in staying current (quality vendors, that is). It is part of their business to stay current – forward-edge, besides – so that they remain competitive and successful in serving you. Your success is their success, and that is strong motivation.
Look around at your IT shop – large enterprises are especially vulnerable to the creep of accruing people, and keeping them, past the point of good budget and service sense. But… I’m not trying to sweep people out of their jobs. Rather, this warning is especially crucial to small-to-medium businesses (SMBs). SMBs are dynamic, frequently growing (in some cases rapidly), and you’ve got to establish the balance between a permanent in-house cadre and the prudent use of outside solutions partners: Do that efficiently, and you’ll find it economical.
Manage this carefully – the two most important qualifiers for doing this are awareness… and vigilance.
NP: The Lovin’ Spoonful, Daydream, original LP that I just picked up at a yardsale, near-mint.
In past days, we’ve talked about multi-tasking and its potential to drive efficiency down, as opposed to achieving the goal of getting more done in a fixed period of time. Diminished attention to any particular thing, while trying to serve too many things, can lead to errors requiring time-consuming do-overs. It can also waste time through the re-acquisition of attention that interruptions demand.
So… something that looks good on the surface may actually be detrimental. There’s a great example from the past: The 8-track tape.
Today, 8-track tape cartridges are held in pretty low esteem. Older readers will recognize the format, developed in the early ‘60s – anyone else who is unfamiliar can Google and read up on them. But 8-tracks essentially had, literally, eight discrete tracks (streams) of information on them. The tracks were paired into Left and Right stereo channels, comprising four “programs” of music; Program 1, Program 2, etc. Two stereo channels x four programs of music = eight tracks.
The tape inside the cartridge was an endless loop, pulling from the center of a single spool, passing over the playback head, and winding back onto the outside of the spool. A sensing foil was at the splice – when it passed over a pair of contacts just downstream of the playback head, a circuit was completed momentarily that caused the playback head to shift down, to play Program 2. This happened again and again, until Program 4 played. Most players had circuitry to recognize that Program 4 had finished, and shut the player off afterward so as to leave the cartridge ready for the next play from the beginning (although most players had a button to bypass this, for endless play). And “beginning” could be the beginning of any of the four Programs, by virtue of a button for manual advance.
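The switching sequence just described is, at heart, a tiny state machine. A sketch of it in code (my own model, not anything from an actual deck's control logic, which was implemented with a sensing foil, contacts, and a solenoid):

```python
def play_album():
    """Return the order of Programs heard in one full pass of a cartridge."""
    programs = []
    program = 1
    while program <= 4:
        programs.append(program)
        program += 1   # splice foil completes the circuit; head shifts down one step
    return programs    # after Program 4, auto-stop circuitry shuts the deck off

def advance(program):
    """Manual program-advance button: step 1 -> 2 -> 3 -> 4 -> 1."""
    return program % 4 + 1

print(play_album())  # one full pass visits Programs 1 through 4 in order
print(advance(4))    # pressing advance during Program 4 wraps back to Program 1
```

The wrap-around in `advance` is the format's "random access": you could never seek within a Program, only jump between the four fixed entry points.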
The 8-track had the appearance of several advantages and efficiencies, a few of which are actual:
– Unlike cassettes, there was no need to flip the tape over (this advantage was negated later by auto-reverse cassette decks – but in the early and mid-60s, this was big).
– During its reign, it was also considered superior to the cassette format: 8-tracks were mastered at 3 3/4 speed, vs. the cassette’s 1 7/8 speed (a better content to tape-fidelity ratio). Again, this advantage was temporary upon the cassette’s graduation to a high-fidelity medium toward the late ‘60s, into the ‘70s and beyond…
– There was a measure of “random access” – with open-reels and cassettes, you had to do a bit of rewinding and/or forwarding to get to music in the middle of the tape. With 8-tracks, you could get close enough by advancing the Programs manually with a button push.
– The single spool theoretically halved the mechanical contribution to wow and flutter (the other contributors in any tape format being the motor, capstan, pinch roller…).
– Speaking of pinch rollers – the 8-track format had them inside each individual cartridge. Therefore, no single-point-of-failure in that regard, or wear-point, by virtue of a single roller in the tape deck. Each cartridge’s pinch roller engaged the capstan in the 8-track deck.
However, whatever “advantages” there may have seemed on the surface, the 8-track was grossly inefficient in the most important, and extreme, ways. Consider:
– To play an entire album, the tape passed over the head four times. Program 1 passed over the head (again) as Program 2 played; indeed Programs 3 and 4 did too. Therefore, the tapes/cartridges had a wearout factor that was at least 2x that of cassettes and open reels (those tapes passed the head twice as each side was played).
– Maintaining proper playback head alignment was difficult, being that the head was not “fixed” – it moved to orient and play the different streams of programs on the tape.
– Early cartridges had foam pressure pads that eventually broke down and crumbled.
– Early cartridges also had pinch rollers that degenerated into sticky goo.
– With the tape pulling from the center of the spool, there was enormous wear – a special lubricant was required for the tape’s surface, which eventually wore off. Tape wear reduced fidelity, but too, once the lube wore off, it caused jams as players “ate” the tape.
Not a great format. Not efficient. And in terms of investment for progressing, the format came and went fairly quickly, unlike records, which enjoyed a long run with amazing associated improvements (and which remain in the market today), or cassettes, which began as a lo-fi medium primarily for dictation and voice capture, and which matured to rival the best open-reel hi-fi.
So – what in your organization looks good on the surface – possibly for purposes of convenience (like the 8-track, at one time) – but is actually inefficient, and in danger of having a very limited shelf life? “Solutions” that will not be supported by the future marketplace are very poor supports indeed, and you must begin to survey your environment by looking at things in a very fresh way.
Just as you can break open an 8-track cartridge, to examine how inefficient it is, you must “break open” your present organization’s environment, and start to examine the liabilities.
NP: The Pretenders, Learning to Crawl, on 8-track.
In the discussion of multi-tasking, there was a natural discussion of resources: Time being a very important resource; People being another.
However, someone made some potent observations, essentially saying that there is no such thing as ‘multi-tasking,’ being that people are at best capable of “serial fast-switching.” I like that.
But that makes humans seem like a machine, in that person’s mind, and the thought was that we must stop equating humans to machines; we even have to stop treating people as “resources.” The stated reasons include:
– Resources are something we use.
– Resources can be interchangeable with like-resources.
– Resources are generally available on-demand.
– Resources are often consumed by the process.
The question was posed: “Are you a human resource?” My answer is, “Yes.”
– We use people. If you prefer, we utilize people and their associated knowledge, skills, and time (availability).
– We generally like people in IT to be, if not perfectly interchangeable, able to provide backup services if a primary person is unavailable. Coverage and continuity are everything in IT/business.
– People are certainly available on-demand; HelpDesk, anyone? How about a phone call from the boss: “Sally, can you come in here for a moment? Thanks…”. We’re polite and respectful of people’s prior obligations and schedules, but we’re essentially available on-demand.
– People aren’t “consumed” literally (well…). But our time is consumed, and any person’s fulfillment as a resource is based on time/availability: That is a consumption.
So, people are a resource: People, and their associated knowledge, manpower (person-power?), and contributions, are most definitely a collective resource. After all, if you don’t have enough of them, in the right proportions, with the right skills and knowledge, you’re in for a hurtin’.
And, technically speaking, they make a pretty good appearance as a “machine” to the other parts of the overall IT/business machine.
When ‘multi-tasking,’ we’re essentially giving the appearance of handling several things in any given allotment of time. You can only really do this three ways:
– Do things sequentially (say, in the course of the hour, day, week, project, etc.)
– Do things by jumping back-and-forth (often necessary when waiting on subordinate or tangential deliveries that feed into any specific item, or answers, etc., on any given thing).
– Delegate and collect (the finished task, or its state of progress for your next level of involvement).
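For the software-minded, the “serial fast-switching” observation maps neatly onto cooperative scheduling: the worker only ever does one thing at a time, and “multi-tasking” is just the ordering of the work. A toy sketch (all names are mine, and the model is deliberately simplistic):

```python
from collections import deque

def serial_fast_switching(tasks, slice_size=1):
    """Interleave tasks round-robin in work-unit slices. Each task is a
    (name, units_of_work) pair. Note the worker still performs exactly one
    unit of focused work at a time -- there is no true parallelism here."""
    queue = deque(tasks)
    log = []
    while queue:
        name, remaining = queue.popleft()
        worked = min(slice_size, remaining)
        log.append((name, worked))               # one slice of focused work
        if remaining - worked:
            queue.append((name, remaining - worked))  # back of the line
    return log

# Sequential (slice covers the whole task) vs. back-and-forth (slice = 1):
print(serial_fast_switching([("report", 2), ("email", 1)], slice_size=2))
# [('report', 2), ('email', 1)]
print(serial_fast_switching([("report", 2), ("email", 1)], slice_size=1))
# [('report', 1), ('email', 1), ('report', 1)]
```

Either way, the total focused time is identical; switching only changes the order. (In humans, unlike this model, each switch also carries a refocusing cost – which is rather the point.)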
So – learn how to delegate and prioritize tasks, and give them the respect of focus, to avoid wasting time.
Become adept at prioritizing “on the fly” (and re-prioritizing) with accuracy – as stuff streams toward you, particularly unanticipated things, resolve or assign them quickly and accurately. Also, is any particular thing merely “routine,” “emergent,” or an “emergency”? That will factor into priorities, task focus, and assignments.
When interrupted with things, become adept at plugging back in to what you were doing before the interruption. Some folks take a while to regain their center, to find the place where they left off, etc. Others can execute about-faces with military precision and focus, almost like a drill. Get tips and tricks from these folks. My tip: A pot of coffee. Seriously – if I’m in the middle of something, making good progress, and I have a creative flow in hammering out some really good service/solution, and someone knocks on my doorframe – I state bluntly, “Can this wait?”
Usually, the answer is “Yes” – or there’s some grace of space in which to address it; in which case I say “Come back in an hour” (in the afternoon; tomorrow, etc). Of course, with all due civility and respect. :^ )
Multi-tasking? It’s all in how you define it
NP: Stanley Turrentine; Stan “The Man” Original 1960 LP.
A conversation recently had people asking:
– Is it really possible to ‘multi-task’? (Is there truly such a thing?)
– If possible, what does one do to most effectively ‘multi-task’?
If one is to be technically correct here, there is no such thing as multi-tasking. (There. I said it.) I’m writing this article, and I’m not doing anything else. I can suspend my writing and take a phone call; maybe I can even nod my head, say “yes” and “no” and continue to type, but my focus is compromised and my efficiency declines on one or both endeavors. I might even have to go back and re-do something due to this compromise. In this case, my split attention doesn’t yield the execution of two tasks at once (‘multi-tasking’): It really yields a hybrid, composite task – one that may deliver quality to two component parts, or, as I said, one that may yield poor results and a do-over.
‘Multi-tasking’ in my mind really means handling several things on a schedule – whether formal or informal. Hence, you can be prioritizing something first thing in the morning – perhaps you’re focusing on a specific project’s milestones (and again, you have to look at them in-turn, or as a composite), when something hits your desk, or you get a “hot” phone call regarding something needing attention. What do you do?
You either suspend a lower-priority item (in relation to the “hot” thing), or you can delegate the work. Delegation is always going on in the management realm, and even if you’re someone who can’t delegate (perhaps a HelpDesk person, with a priority task you’re working on), you can still negotiate with a co-worker to help you. Therefore, you are in essence “juggling” multiple tasks.
The trick is to delegate and negotiate help without incurring a “back-and-forth” focus that bleeds quality attention to anything you’re working on. Learn how to offload and to then relax a bit – trust your personnel, and trust that the delegated work will get handled. (If you don’t believe you can do that, there are liabilities on the team, obviously).
The alternative is to think you’re doing two things at once while you compromise your attention to details (ever had to ask someone to repeat something three times on the phone, because you’re administering e-mail at the same time – reading, answering, deleting, etc.?).
Remember that the goal of so-called ‘multi-tasking’ is to gain time, by stuffing more tasks into an allotment of time. But frequently, a blur of focus causes errors, “re-do’s,” and the loss of time.
So – how do we give the appearance of multi-tasking; that is, of being efficient while handling lots of items? We’ll look at that next…
NP: Heavy Cream (a best-of compilation; Jack Bruce, Ginger Baker, Eric Clapton): 8-track on a nice high-end Pioneer deck.