Novel approaches to mobile audio get attention from geeks as they choose their most-wanted technology gifts in 2017.
Our geek advisory panel said the most coveted gift this year is close to unanimous — Apple’s AirPods wireless headphones. AirPods, introduced at the end of 2016, pair easily with Apple devices, such as the iPhone, and offer good sound quality.
“Going wireless on my headsets has been life changing,” said one of our geek friends. “No more tangled cords, super easy to use and just way cooler!”
For the unique geek in your life, consider giving them the ColorWare treatment so they’ll stand out from the crowd. It’s already too late for this option to arrive in time for the holidays, but for the stylish geek it’ll be worth the wait.
“AirPods themselves are amazing, but you tend to look like every other [expletive] wearing AirPods,” said one geek in the know. “Using ColorWare to turn them nonwhite and to make the case a different color seems like a great gift to me.”
Geeks also mention wireless headphones from Beats by Dre, specifically the BeatsX model, and Bose, specifically the QC35 noise-cancelling model.
“They’re great for taking calls, listening to music, and are an absolute must anytime you’re on a plane,” said one geek advisor of the QC35s.
Geeks also admire Google’s Pixel Buds, which offer a nifty translation feature between different languages, but advise geek gifters to hold off for further product development for now. The Pixel Buds require users on both sides of a translated conversation to have the Google Pixel phone, and the translation hasn’t yet reached conversational speed, according to early reviews.
“I desperately want the promise of what they were advertising, but it sounds like it’s another release or two away,” said an AirPods fan.
Other staple technology gifts for 2017 include new supplies for home automation systems, such as Amazon Echo Dots and even USB outlets. Get to know your geek’s brand preferences! Or, let them make their own cool gadgets with the popular Glowforge 3D laser printer geeks rave about.
For those without the cash on hand for expensive headphones, some home gadgetry can be had for entry-level prices, geeks said.
“There are lots of entry level smart devices that are handy like a grill thermometer, indoor thermostat, etc — they’re fun to play with, if nothing else,” one said.
Do-gooder geeks will also be tickled with donations to their favorite causes, such as Net Neutrality.
Technology gifts in 2017 for the weary traveler
Technology conferences no longer have an off-season, and technology gifts for geeks on the go are also in demand this year.
“With the ‘digital nomad’ revolution under way, and more people looking to work remotely or while traveling, there are a few devices that I’ve found to be invaluable,” said one geek road warrior.
If your geek doesn’t already own a smartwatch, one can come in handy for geek travelers. Among the latest and greatest is the Apple Watch Series 3 with cellular support.
Geeks are hesitant to say they “like” USB-C dongles for MacBooks, as they are pricey and always require a charger on hand, but they’re a must-have for geek travelers to connect with and transfer data between devices. And don’t forget USB-C battery packs and fast-charging cables.
“All the new phones and new Macs run off USB-C,” said one of our geek advisors. “They are a necessary evil.”
Geek travelers also rave about the Rocketbook, a reusable smart notebook. “You can write in it, easily capture everything you’ve written with your smartphone camera really quickly, and then erase all the text by putting it in the microwave,” said one expert Rocketbook user.
As geeks settle in at night, they may also like a gooseneck iPad mount for hands-free Netflix watching or e-book reading in bed.
And for any geek who might like some professional development in their stocking, offer them a book on how to manage unplanned work. For fans of The Phoenix Project, try Patrick Debois’ follow-up, The DevOps Handbook. Or, if your geek works with PowerShell scripts, this advanced scripting book from Dana French might be right up their alley.
DevOps has escaped the rarefied realm of unicorns and startups, as workhorse enterprises take up application delivery and support methodology. Every experience is unique, and yet everyone can learn from the successes and messes encountered during DevOps adoption at other companies.
How does enterprise DevOps work, and how have pros — including you — struggled? Join other DevOps engineers, IT managers and developers with SearchITOperations in an interactive Challenge Your Peers session at Delivery of Things World on October 26 in San Diego.
We’ll brainstorm how to support the business through better application architectures, platforms and technologies in the IT department. What should you invest in, and how do you prove the benefit of potentially substantial changes? We encourage different viewpoints, derived from your own experience and research. Come share your knowledge, debate constructively and learn from others going through the same reimagining into enterprise DevOps shops.
Check out the Delivery of Things World agenda for the complete list of Challenge Your Peers sessions as well as other opportunities to learn about DevOps cultural change and continuous integration and delivery.
Can’t make it to San Diego? Share your questions, frustrations, bright ideas and experiences here in the comments, or reach out at email@example.com.
Kubernetes 1.7 is here just in time for the Fourth of July weekend, adding some fireworks of its own with new security features and broader support for stateful apps that are sure to appeal to the coveted enterprise market.
On the security front, a network policy API promoted from beta to stable allows users to set rules to restrict communication between individual Kubernetes pods, and isolate network traffic for individual apps as well as individual users in a multi-tenant architecture. In previous releases, each app could be given its own Kubernetes namespace, but now specific services within those apps can be controlled within the namespace.
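As a sketch of the idea (the namespace, labels and port here are hypothetical, not taken from any deployment in this article), a NetworkPolicy under the now-stable API might restrict a backend service’s pods to traffic from one approved client app in the same namespace:

```yaml
# Hypothetical sketch: only pods labeled app=frontend may reach
# pods labeled app=backend on TCP port 8080 within namespace team-a.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: team-a
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
    ports:
    - protocol: TCP
      port: 8080
```

Applied with `kubectl apply -f`, a policy like this drops all other ingress to the selected pods, which is the per-service isolation described above.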
New node authorizer and admissions control plugins allow more fine-grained control of communication between the kubelet (the main software agent that runs Kubernetes on each host in a cluster) and secrets, pods and other objects on the node level. Kubernetes secrets management also makes gains on Docker Secrets with an alpha feature that encrypts secrets in the etcd data store.
Many enterprises are after Kubernetes stateful application support in production, and this Kubernetes release refines StatefulSets to include support for new update methods such as rolling updates. Kubernetes persistent volumes also take a step forward in Kubernetes 1.7 with alpha support for local storage volumes, which are popular for many big data and HPC use cases.
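A minimal sketch of what opting a StatefulSet into rolling updates might look like under this release’s apps/v1beta1 API; the names and image are purely illustrative:

```yaml
# Hypothetical sketch: the updateStrategy field refined in this
# release lets a StatefulSet roll pods to a new spec one at a time.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 3
  updateStrategy:
    type: RollingUpdate   # instead of the manual OnDelete behavior
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: example/db:1.0
```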
Databases are still a new area of development for Kubernetes and there is plenty still on the roadmap for StatefulSets. Rolling upgrades, for example, are supported now but rollback with StatefulSets is still being developed.
Kubernetes 1.7 broadens container runtime support, extensibility
While Kubernetes 1.7 technical features are sure to make waves, another intriguing aspect of the announcement has to do with the potential implications for the industry as the container runtime becomes standardized – and commoditized. Enterprises could see greater stability in container runtime support as Kubernetes begins its integration with Docker containerd in this release, for example. Docker open-sourced containerd, its core container runtime, and donated it to the Cloud Native Computing Foundation earlier this year.
“There were definitely some concerns with the stability and modularity of the platform [before containerd],” said Sam Ghods, co-founder and solutions architect for online document sharing and collaboration firm Box. “The container runtime should be very swappable.”
In future Kubernetes releases, it will be. For now, Kubernetes 1.7 lays the groundwork for better support of alternative container runtimes with enhancements to the Container Runtime Interface, a container runtime plugin API. With version 1.7, developers can more closely monitor various container runtimes through the interface, and use newly published validation tests for container runtime integration with the interface as well.
In subsequent releases, there will be full production-ready support for runtimes that include CRI-O and rkt in addition to Docker containerd.
Docker Inc. has been an active participant in developing Kubernetes 1.7, according to Google project overseers, and if anything, containerd has drawn Docker the company and the Kubernetes community closer together, they say. However, some industry watchers might wonder about the future direction of Docker’s business now that vendors can standardize around core containerd features without Docker’s value-add offerings, and as the prospect of CRI-O integration resurfaces with this release.
New extensibility features in this Kubernetes release, such as API aggregation, will benefit container orchestration offerings based on Kubernetes, such as Red Hat OpenShift. This new feature enables power users to tinker with third-party tools for management as part of the Kubernetes cluster.
Commercial results of this extensibility update will include the Red Hat / AWS service catalog previewed at this year’s Red Hat Summit. Advanced Kubernetes users such as Box look forward to getting their hands on these features as well.
“We can now reuse the API server and Kubernetes etcd to build in third-party resources instead of doing our own hacking to create a data store and API server for every microservice,” Ghods said. “It cuts down on the time and complexity of developing services.”
Ghods added that he hopes the new extensibility features will give rise to a Kubernetes CI/CD tool similar to Netflix Spinnaker. There aren’t any concrete plans for such a tool right now, but Kubernetes has now built the foundational technology to allow it, Ghods said.
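The pattern Ghods describes, registering third-party types so the Kubernetes API server and etcd handle them like built-in objects, can be sketched with the CustomResourceDefinition object that debuts in this release; the Widget type below is purely hypothetical:

```yaml
# Hypothetical sketch: registering a custom Widget type. Once created,
# the API server stores Widget objects in etcd and serves them over
# the standard REST API, with no bespoke data store or API server.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com   # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
```

After this is applied, `kubectl get widgets` works like any built-in resource, which is the time savings Ghods points to.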
Big data is no longer new to anyone or any industry, yet its impact continues to challenge everyone. The internet of things didn’t exist when the term information explosion was first used in the 1940s to try to quantify the growth of data generation and consumption. It wasn’t until October 1997, however, that an IEEE publication by two NASA research scientists introduced the term big data. Their article begins: “Visualization provides an interesting challenge for computer systems: data sets are generally quite large, taxing the capacities of main memory, local disk, and even remote disk. We call this the problem of big data.”
The volumes of data being generated are outpacing the abilities of our traditional systems. The past decade has seen massive growth in data generation, and that was before the internet of things (IoT) entered the equation. Just as in our everyday lives, everything we use and touch seems to already be, or will soon become, a networked device. Healthcare big data is no different, with thousands of monitors, diagnostic machines and other vital pieces of medical equipment. In healthcare, the implications are potentially deadly if this is not handled properly.
This means that organizations involved in healthcare, perhaps more so than in other sectors, must look at several different areas with regard to the implications of IoT devices and big data processing. Just imagine if a pacemaker gets hacked! We must be more vigilant about the security, compliance and processing of ever-growing volumes of healthcare big data.
Security and healthcare data
Are all the newly networked medical devices secure? Will big data and IoT be a new entry point for hackers to get into hospital networks? Are there inherent vulnerabilities in their design? How can we ensure that IoT devices aren’t being intentionally designed/manufactured with weaknesses by third parties who seek to do harm?
Regulatory compliance for devices
Traditional computing platforms, servers, laptops, desktops and so on are fairly well documented in regard to their regulatory compliance procedures and audits. Are the millions of medical devices that could suddenly come online also being scrutinized sufficiently? From a medical operation perspective I suspect they are, but how about from a networked IT device perspective?
Data aggregation & processing
Aggregating and processing data are already challenging with devices that IT is familiar with. How will IT teams handle a whole new set of device types they have never encountered? What are the patterns and behaviors of these devices, and how do they compare to traditional technology items? Can these new healthcare devices’ data be processed in the same way, and are there unique scenarios that might otherwise be missed?
This aspect alone could jeopardize the previous three in the realm of healthcare big data. Whether human or technological, weaknesses will be exposed if the system is overworked. Can we handle the terabytes of data being generated fast enough, or will an otherwise detectable breach slip through? Can we handle the growth in volume from onboarding tens of thousands of new devices? Will the new volumes skew the known patterns and trends we rely on?
As you can see, there are lots of questions and few answers. That is exactly what those seeking to do harm are counting on. Our industry must find ways to address these challenges at the speed necessary to mitigate the current risks. The NASA scientists called it big data in 1997; let’s make sure we address these issues and don’t let it become a big danger to our society.
Kubernetes on OpenStack deployments — how popular is it really?
OpenStack Foundation leaders cited the organization’s annual user survey as evidence that a combination of OpenStack and Kubernetes — specifically, using an automatically provisioned OpenStack infrastructure to deliver server, networking and storage resources to Kubernetes clusters — is a popular use of the technology.
— Beth Pariseau (@PariseauTT) May 8, 2017
The group asked what platform as a service (PaaS) tools and what container tools run in today’s OpenStack environments. Jonathan Bryce, executive director of the OpenStack Foundation, said 45% of people responding to the question had answered with Kubernetes.
That question sought to find out what users are doing today, not what they’re interested in or what’s out there in the future, Bryce said. “People are combining these tools in ways that, if you go back a couple years, we certainly weren’t seeing,” he said.
Meanwhile, the biannual OpenStack User Survey released recently painted a more nuanced picture of that question and its responses. In the April 2017 survey, 192 respondents answered the question, “Which container and PaaS tools are used to manage applications on this OpenStack deployment?” Of those 192 respondents, 47% answered with Kubernetes, and 28% of them indicated they run Kubernetes in production.
The 45% number cited by Bryce corresponded to a further breakdown of survey responses from October 2016 and April 2017 that combined and then deduplicated responses to both surveys, for a cohort of 282 respondents. Of those respondents, 45% had answered with Kubernetes, and 29% were in production.
The slide referenced by Bryce did not contain information about the number of survey respondents, and could easily be interpreted to suggest that 45% of all those surveyed use Kubernetes. But of 1,400 completed surveys in April 2017, only those who registered a deployment (583) were given the containers question, according to the OpenStack Foundation. Out of 583 deployments, 192 answered the question about containers.
The bottom line? One third of OpenStack deployments use containers, based on survey answers collected since 2015. And within this group, 45% use Kubernetes, based on the last two survey periods (April 2017 and October 2016).
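The arithmetic behind those figures is worth spelling out, because a percentage of question respondents is easy to misread as a percentage of all surveys. A quick check using the April 2017 numbers quoted above:

```python
# Reproduce the OpenStack user survey breakdown cited above.
completed_surveys = 1400      # completed surveys, April 2017
deployments = 583             # respondents who registered a deployment
container_respondents = 192   # deployments that answered the container question

# Roughly one third of registered deployments answered the containers question.
container_share = container_respondents / deployments
print(f"{container_share:.0%} of deployments answered the containers question")

# 47% of those 192 respondents named Kubernetes: about 90 people, a very
# different claim from "47% of all 1,400 surveys use Kubernetes."
kubernetes_users = round(0.47 * container_respondents)
print(f"{kubernetes_users} respondents, "
      f"{kubernetes_users / completed_surveys:.0%} of all completed surveys")
```

In other words, the headline "45% use Kubernetes" describes a cohort of a few hundred container users, not the full survey population.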
BOSTON — Security rules are inescapable for IT service providers within a financial enterprise, and several such companies filled in their peers on how they’ve approached DevOps and security in presentations here at Red Hat Summit 2017.
In several cases, it involved shifting security and data governance responsibilities to developers, a scary prospect for some IT pros. But for companies like Deutsche Bank, it has worked so far.
“We code to the highest common denominator among regulations,” said William Dettelback, VP of engineering for the German financial services company. Right now, that’s the Monetary Authority of Singapore’s security regulations. “For us, the most stringent regulation is our baseline.”
Barclays has a “bring your own image” system for developers on test and development infrastructures, and those developers are accountable for the security of their images.
“We’ve changed our rules to say, we’ll report on it, we’ll give you every tool, including our own base images you can build from,” said Simon Cashmore, lead engineer and solutions architect for the UK-based bank, “But we’ll tell you, and keep telling you, you’re accountable when audits come.”
— Beth Pariseau (@PariseauTT) May 3, 2017
That doesn’t mean ops is off the hook when it comes to DevOps and security. Behind the scenes, Deutsche Bank ops uses Red Hat CloudForms, which ships with OpenShift, to scan container images for security vulnerabilities published in Red Hat’s Common Vulnerabilities and Exposures (CVE) database, and send the results to OpenShift. New vulnerabilities trigger OpenShift to build new container images. This has helped the bank react to new security threats quickly without manual patching: apps built from container images pick up new security fixes as OpenShift adds updated images to the Docker registry.
At Barclays, the new rules don’t apply yet to pushing container images in production — that’s still handled manually after image introspection by the ops team.
Automated disaster recovery is also part of new DevOps processes at both companies. Barclays’ ops team enforces app resilience by periodically “draining” containers from the infrastructure — devs ship apps without the required resilience at their own peril. Deutsche Bank, meanwhile, has established active-active disaster recovery rather than use an active-passive mode, and is working toward full automation of this process.
“We want failover done once, correctly,” said Dettelback. “If someone has to log in to deploy or fix something, we’ve failed.”
BOSTON — It was called Red Hat Summit, but it could just as easily have been OpenShift Summit.
Red Hat’s platform as a service product was the hottest topic at the show here this week. Enterprise IT pros at the show were either already running it in production, or trying to get there.
Forty-eight percent of Red Hat’s customers say management, automation and orchestration are top-of-mind concerns for 2017, said Paul Cormier, president of products and technologies at Red Hat, citing a recent company-run survey in the keynote presentation that kicked off the conference. Some 70% of customers said cloud was the top 2017 IT spending priority, and 59% of Red Hat’s customers have implemented or are planning a multicloud environment.
Red Hat then made a splash with new multicloud features for OpenShift – specifically, an expansion to an existing partnership with Amazon Web Services that will see AWS services managed by an on-premises tool* for the first time and some contributions to Kubernetes development by AWS engineers. It still won’t involve Kubernetes integration with the EC2 Container Service, however, which is what many IT pros still want.
— Beth Pariseau (@PariseauTT) May 3, 2017
As developers clamor for more speed in application delivery, IT operations professionals at large enterprises with hybrid infrastructures are now tasked to deploy the massive, intricate OpenShift platform. Once that’s accomplished, they also must offer developers on-demand services in private data centers and public clouds with equal flexibility and speed.
“It’s a big change – we’re no longer building infrastructure to support a specific application’s requirements,” said an infrastructure architect with a financial services company that has recently bought into OpenShift, speaking on condition of anonymity over breakfast Wednesday. “Now we have to build infrastructure with the flexibility to support any application, anywhere.”
OpenShift roadmap to focus on services provisioning
A packed OpenShift Roadmap session Tuesday afternoon highlighted another mindset shift for IT pros: thinking in services, rather than servers.
OpenShift will integrate the Open Service Broker API to make on-premises enterprise IT departments more like cloud service providers that offer a catalog of services to developers. Like hybrid cloud, this is not a new idea – but with services composed of containers, it’s finally a practical goal.
The roadmap for 2017 also includes a tech preview of improvements to a multi-tenant plugin for project isolation to protect traffic within a project pod down to the port level, rather than simply enforcing network policies project-by-project. A tech preview of cluster federation will come in the second half of the year with OpenShift 3.6.
Beyond version 3.6, OpenShift will support low-latency apps with persistent storage volumes built on Red Hat’s version of Ceph open-source software-defined storage, as well as interfaces for Amazon’s Elastic File System and S3. This support is expected to include tenant-controlled snapshots for data backup. More logs and metrics, such as Jenkins logs, will be exposed through the OpenShift user interface.
— Beth Pariseau (@PariseauTT) May 2, 2017
*Statement changed following initial publication
An IT troubleshooting rule I’ve held for a long time is to never trust what the user tells you. In my opinion, it’s one of the fundamental rules of IT that will come back to bite you when not followed. When you assume the user knows what they’re talking about, you’ll end up going down the wrong rabbit hole.
Hours can be wasted troubleshooting a problem that doesn’t really exist. Alternatively, asking the right questions at the start can turn a complicated sounding problem into a simple one.
There are also times when the user has actually told you the correct information, but it’s too hard to believe.
This is one such story of a problem that could have been resolved quickly if a single assumption wasn’t made. It is based on real IT troubleshooting experience.
Missing a crucial step
A normal, unexciting day in IT: the help desk phone rings, breaking the quiet tapping of keys. A flustered user is at the other end of the line, desperate to have their issue solved. The problem? They urgently need a file off a USB key, but it’s “not working.”
The help desk staff member’s brain starts ticking over the best troubleshooting steps. “Not working” isn’t useful at all and could mean any of too many problems; time to do some awesome troubleshooting.
The first step they take is to ask if the USB has worked before. The user doesn’t know. “I’m following a set of instructions, and I’ve done everything it says.”
A fair question is then asked of the user: “Could you read out the instructions to me?”
The exasperated end user agrees, but highlights that they need to leave soon to catch a plane. “There’s a bunch of steps. Step one is to turn the computer on. Step two is to log in with username/password.” The help desk person sighs internally at the use of a generic account, its password recorded right there in the instructions. That’s a fight for another day, however, as there’s a small fire to put out.
The user continues: “Step three then says to open Windows Explorer. Step four is to grab the USB drive, and step five is to click on E with some dots and a slash”.
At this stage, the support person at the help desk thinks that’s all reasonable. They can’t remote onto the computer because it’s an off-network PC at a somewhat secure location, so they’ll have to rely on the user.
“Can you see Windows Explorer?” asks the hopeful help desker.
The user quickly responds “Yes, I think so. I can see a computer and a letter under it, C dots”.
The help desk person makes a fair assumption that this is the C drive. “But you don’t see any other drives, like the E drive?”
Getting annoyed, the user responds “No, nothing else. Why isn’t this working?” A question often asked during IT troubleshooting.
“OK, let’s try rebooting the computer. Sometimes things go a bit funny and that can help” the support person offers, unsure of what to try next.
“I really don’t have time for this, but fine.” The now disgruntled user goes about finding the power button, too quickly for the IT pro to intervene and have them shut down the correct way, via the operating system.
A minute later, after some slow key presses and sighs, the user gets back on the phone. “There’s STILL no E, this is ridiculous!”
Running out of troubleshooting options, the IT staffer comes to the conclusion that it’s not something that can be fixed remotely. “I think we’re out of options here, it could be a faulty USB stick, or it could be the USB port on the front of the computer.”
After a few seconds, the user responds “That’s strange that the front of the computer would have anything to do with this, is that where the wireless card is?”
Confused by this response, the help desker asks: “Did you try unplugging and plugging in the USB drive?”
“What do you mean? The instructions don’t say that,” the user responds.
It dawns on the help desk staffer. “When it said to grab the USB, did you plug it in or just hold it?”
The user responds matter-of-factly that the USB key was in their hand the entire time. “Isn’t it wireless?”
Head in hands, and after a moment’s silence, the help desk staffer concludes with “I don’t think so. Let’s try plugging it in.”
As you can see, it’s easy in IT troubleshooting to follow reasonable assumptions down a path that makes sense given the information you’ve been handed.
Verify those reasonable assumptions throughout IT troubleshooting steps. Start from the absolute basics and work your way through to the more technical troubleshooting. Various problems — a USB stick that’s been forced in the wrong way, a faulty USB stick, a faulty USB port, a driver issue, Group Policy restrictions and a myriad of other root causes — show the exact same symptoms as someone simply not plugging in a USB memory stick at all.
I wanted to be a creator; someone who put things into the world for the greater good.
I started with engineering research, and I hope that I did some good there: I worked on anti-cancer drugs, car catalysts to minimize NOx and SOx in car fumes, and fuel cells as a source of clean energy.
However, the world threw me a curve-ball, and against my better instincts, I found myself in the world of IT. I worked on diverse projects, such as implementing office automation for a large electricity generation company in the U.K. I can’t say that any of this was aimed at making the world a generally better place for anyone, apart from the employees of the company I was working for, but it paid the bills.
Due to another curve-ball, I became an industry analyst. I had found a job where I could at least try to do things for larger groups of people, working with end-user organizations to help them understand how technology could best support them in their aims. However, to my mind, it was still all pretty constraining: Organizations had a business imperative, which boils down to “How do we make as much money as possible?”
Technology for better lives
With my own small industry analyst house, Quocirca, I gained more freedom to do what I wanted, which allowed me to get back to where I wanted to be. In the early days of Quocirca, I had a meeting with Microsoft that led to me working alongside it on its “Unlimited Potential” campaign. Originally to be called “The Next 2 Billion,” Unlimited Potential let Microsoft look at how it could get its technology into the hands of 2 billion more people on the planet. The work had a couple of problems: It looked purely at Microsoft, and I didn’t feel it was actually tapping into what was really needed out there in the real world. The campaign moved to being more inclusive of other technology vendors, and also looked at how local groups of people could use technology simply to become better at what they were already doing, rather than dragging them into the ‘new’ intelligent cities, and so replicating all the problems that happened as the industrial revolution depopulated the agrarian areas of the West and led to major poverty and problems.
Likewise, with Cisco, I provided input to try to ensure that ‘intelligent communities’ were included in intelligent city work. As seen with the activity the Sri Lankan government has undertaken to make the whole of the island ‘intelligent,’ leaving people within their existing community adds far more value to the overall country economy and well-being than dragging them into the cities.
I also wrote on how technology was being used in different communities. For example, Maasai warriors started carrying mobile phones to alert their shepherds when they saw a lion, and make sure the flock was moved to a different area. This cut down on the number of sheep killed by lions, and so the number of lions the Maasai felt that they had to kill to protect their flocks. A side effect meant that the lion population was maintained, to the good of the tourist industry.
Local entrepreneurs in South America were buying mobile phones and airtime and then allowing others who could not afford a full contract themselves to use the phones for single calls. A small amount of profit could be made on each call.
In India, early-stage internet of things architectures were becoming apparent. Dot-matrix displays could be set up in small towns stating when the travelling doctor would visit next. Details of how to book an appointment could be included — a simple text message. The patient could then be sent reminders as the date became closer — cutting down on missed appointments and wasted time.
All this seemed to get me noticed, and I was invited to become a Fellow of the Royal Society for the encouragement of Arts, Manufactures and Commerce (RSA). This 260-year-old institution works to enrich society through ideas and action.
The RSA stands for everything I believe in. It is looking at how the world can be made into a better, fairer place; in how those who are the ‘haves’ can better help those who are the ‘have less.’ It is not aimed at being a condescending group of elites just helping out those less fortunate: It is a two-way approach accepting that we all have as much, if not more, to learn from those who may not have much in what many would see as possessions and wealth as they have to learn from us.
It also moves with the times. Projects and discussions going on within the Society are looking at areas such as what can be learned from Millennials and their approach to work, ideas and life.
For all of us, the aim of our working and personal lives should surely be to try and leave the world in a better situation than we found it. At times, I wonder whether this vision gets lost in the hustle and bustle of our daily lives and in the immediacy of what is put in front of us that we see as problems.
There is nothing better than to put yourself in someone else’s shoes for a while — look at the world through their eyes and ask yourself what would make the world better for that person.
Just once in a while, say once a day, do this: Empathize with someone else. The homeless person you see when you leave the office; the person struggling with a child and an old stroller; the refugee having to flee their home and try to find shelter somewhere else; the farmer in Nepal trying to find a way to optimize their yields of rice. Then, if you can think of a way to make their life easier, try to do something to make it so.
If only 10% of the world did this on a daily basis, that would be 263 billion good actions per year.
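The arithmetic behind that figure checks out, assuming a world population of roughly 7.2 billion (a rough 2017 estimate, not stated in the text):

```python
# 10% of the world doing one good action per day, for a year.
world_population = 7.2e9          # rough 2017 estimate (assumption)
participants = world_population * 0.10
good_actions_per_year = participants * 365
print(f"{good_actions_per_year / 1e9:.0f} billion")  # 263 billion
```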
I’ve been consulting for organizations for many years now, to the point that I’ve probably worked, in some capacity or another, with well over 100 different companies and IT leadership teams. They’ve ranged from regional banks to global manufacturers to massive federal government agencies; some have been privately held, others publicly traded, and they’ve covered a variety of industries. I’ve also been fortunate to be engaged in various international events and efforts. In short, I’ve worked with a fairly broad spectrum of companies.
Each of these entities certainly had its unique traits. There is, however, a common thread that is rarely stated or acknowledged in public. This common but often unspoken sentiment is: "We really need to get our act together." It also comes in the form of: "If we could only get out of our own way." Most people within the organizations I’ve worked with know the problems their companies are facing, and in many cases they also know what’s needed to fix them. What does it mean, though, if the people on the ground know the problems and, to some degree, how to fix them?
A company that has active projects to improve operational processes like change or incident management would seem to be aware of what it needs to fix or improve, right? In some cases that’s true, but in most others these efforts target the symptoms, not the sickness. For example, poor compliance with associating the correct asset to an incident ticket generally falls under incident management metrics. But what if the real problem is the asset management group, which regularly corrupts the inventory lists and thereby undermines all downstream processes? And worse, what if that group’s manager has been unable to lock in the budget needed to improve their discovery tools and governance processes? Who is to blame: the incident management team, the asset management team, or the IT leadership who haven’t grasped the interdependencies and refuse to fund the foundational components?
The problem is at the top
For many companies, the problem lies in a lack of IT leadership. Leaders don’t set, and then fund, a vision that will enable foundational corrections in their organizations. Sometimes that’s because such change isn’t the new shiny object; other times, they’re simply not in tune with the reality of their organization’s needs. Lower-level management can typically fund and direct process-level projects, but they cannot set the direction or vision for the broader IT operations organization. That is IT leadership’s responsibility. This leaves department managers and supervisors to do the best they can with spot solutions that fix immediate needs but rarely solidify the footings on which everything else rests.
It’s important to understand the difference between leaders and managers. The way I like to describe it is that leaders are like compasses: they guide you in the right direction and adjust that direction as needed.
Managers are the taskmasters who hold the stopwatch and make sure things get done, and done on time. We need both roles actively engaged in our environments. Together is how organizations prosper and enable positive business outcomes.
The solution is actually pretty straightforward in theory. In execution, of course, it turns out to be more difficult than people expect, which is why I work with my clients to set reasonable expectations. As leaders, even if we don’t do it well, we know that we need to communicate and enable the strategic vision. Where we often fail is in our follow-through. As IT leaders, we need to make sure we support those who are trying to deliver the tactical steps. True leaders don’t show up to a kickoff meeting to say how important the effort is and then never engage again. As IT leaders, we can’t promote conflicting projects that unknowingly undermine the vision we just set. We must do better.
We need to be smarter in our actions, not just our words. The disarray is visible to employees, and they have every right to be frustrated. The good ones also have the option to leave, which hurts the organization. When you live in the trenches, you understand what needs to get done to achieve the corporate goals; you also recognize when you’re wasting your time and need to move on. Let’s get our act together and help them, so that everyone benefits. It’s the only way to help our organizations prosper while our employees grow.