From Silos to Services: Cloud Computing for the Enterprise


January 16, 2017  10:07 PM

Evolving Monoliths vs. Microservices

Brian Gracely
containers, DevOps, Java, Kubernetes, Microservices, Software

For the last couple of years, the idea of “software eating the world” has gathered quite a bit of traction. While software-led companies (Uber, AirBnB, Netflix, Facebook, etc.) have created considerable disruption in many vertical industries, the vast majority of companies are still struggling to manage the transition from existing monolithic applications to more agile microservices implementations.

Over the last couple weeks, I’ve had the opportunity to dig into what it means to balance monoliths and microservices with several industry thought-leaders (here, here). From those discussions, a few important considerations come through:

  1. Not every application needs to be built using microservices. Plenty of existing applications can be improved by rethinking the sprint process (length, frequency).
  2. Instead of debating monoliths vs. microservices, the focus should be on what is needed to build and ship software more rapidly: making the overall business more agile and able to react to digital feedback loops about how users interact with those applications.
  3. The testing and deployment processes are just as important as the application-building process. Many companies should initially focus on how well they are organized and prepared (e.g. automated testing systems) to test and integrate software. This is often centered on CI/CD tools like Jenkins (see the sketch after this list).
  4. The cultural aspects of moving to more modular, microservices-based platforms should not be underestimated. They require a different understanding of what it means to build an “independent service”, both from a technology perspective and an internal communications perspective.
  5. It’s critical to have a platform in place that provides developers with self-service access to agile infrastructure resources, and the platform should abstract many of the complexities (service discovery, high availability, networking, storage, auto-scaling and load-balancing) that developers face.
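As a concrete illustration of point 3, here is a minimal sketch of the kind of automated smoke test a CI server like Jenkins could run on every build. The service URL and the /healthz endpoint are hypothetical examples, not taken from any specific product.

```python
# Minimal smoke test a CI stage could run after each build/deploy.
# The base URL and /healthz endpoint are hypothetical examples.
import sys
import urllib.request


def check_health(base_url: str) -> bool:
    """Return True if the service answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/healthz", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        return False


if __name__ == "__main__":
    url = sys.argv[1] if len(sys.argv) > 1 else "http://localhost:8080"
    if not check_health(url):
        print(f"Smoke test failed for {url}")
        sys.exit(1)  # a non-zero exit code fails the CI stage
    print(f"Smoke test passed for {url}")
```

A test this small is obviously not a full suite, but wiring even one of these into the pipeline forces a team to keep its deployments automated and verifiable.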

Managing a portfolio of applications can be complicated, especially as it goes through an evolution that involves more than just technology updates. Cultural shifts like DevOps, and organizational shifts like “2-pizza teams”, can seem extremely complicated and uncertain in the early stages. Sometimes they require breaking out new organizations to prototype the new habits for a large organization. It’s often the willingness to adapt the culture and process to a more iterative model that lays the foundation for faster monoliths and more agile microservices applications.

January 8, 2017  8:39 PM

Tech Leadership Doesn’t Last Forever

Brian Gracely
Amazon, AOL, Apple, AWS, China, IBM, Jeff Bezos, Microsoft, Netscape

There tend to be two types of thinking about technology markets:

  1. New technologies will expand markets – The “growing the pie” philosophy.
  2. New technologies will kill old technologies – The “winner takes all” philosophy.

Since it’s more complicated to understand the dynamics of a highly competitive market, people tend to gravitate toward the possibilities of winner-take-all market outlooks. These are usually the leading platforms of the time – historically IBM Mainframes, DEC minis, Microsoft Windows, AOL, Google Search, the Apple iPhone – and now people are talking about Amazon Web Services (AWS) in the same category.

While IBM, DEC and Microsoft all saw their dominance get interrupted by a shift in computing paradigms, not all leaders get disrupted by major technology changes. Sometimes the reasons go beyond technology.

For example, notorious 1920s mobster Al Capone was only taken down by law enforcement for tax evasion – not murder, racketeering or corruption. Capone had things covered to avoid being imprisoned for those more serious crimes, but he wasn’t prepared for the new government strategy. It’s not a technology analogy, but it aligns with the idea that sometimes the rules of the game change and the dominant personality in the game gets tripped up.

Some other examples:

AOL – Merger with Time Warner – At the time, the thought of marrying the “Internet” with entertainment content was considered a match made in heaven. But sometimes cultures, egos and economics don’t work out the way the spreadsheets planned.

Microsoft – Anti-Trust (Windows OS) – Microsoft added browser functionality to Windows, embedding Internet Explorer and disrupting Netscape’s browser business. But the world was moving from stand-alone computers to one that would soon be connected to the Internet for all information.

Google – EU Anti-Trust – Google has held a dominant position in Internet search ever since the browser became the dominant computing UI. But mobile computing is a different paradigm, and regulators were concerned about Google leveraging that dominance for ads, apps, maps, etc. on mobile screens.

Apple – China Manufacturing – With a new administration coming into power in the United States, no company has more at stake than Apple if the administration decides to significantly change foreign policy towards China. While design was once Apple’s competitive advantage, their advantage is now distinctly about supply-chain management. Will they be able to continue to dominate mobile computing revenues if the US government changes the game on non-US manufacturing?

AWS/Amazon – Donald Trump’s feud with Jeff Bezos – I wrote in my 2017 predictions that it wouldn’t surprise me if President Trump went after either Jeff Bezos or Amazon. He ran on a platform of preserving US jobs, and Amazon is pushing automation in many areas (distribution centers, shipping trucks, drone delivery, grocery stores, etc.). He has also shown a desire to discredit the media / free press, and Jeff Bezos owns the Washington Post. Trump may also look to step up efforts to collect taxes on Internet sales (e.g. the “Amazon tax“). While these possibilities may not directly impact the AWS business, which is highly profitable, they could have second-order impacts if Amazon gets tied up in government litigation the way Microsoft was for years in its anti-trust cases.

There is a lot of uncertainty in the world as we head into 2017, both in the US and around the world. It will be very interesting to watch and see if the dominant platforms of today will be disrupted by something outside of the competitiveness of the market.


December 31, 2016  6:04 PM

What if the Cloud moves to the Edge?

Brian Gracely
AWS, Azure, Bots, CDN, Cloud Computing, DDOS, DNS, Edge computing, HPE, iot, Security, Sensors, TCP

We know three things about the history of computing:

  1. Computing devices continue to get smaller and less expensive.
  2. As the form-factor of computing changes, the core architecture has frequently evolved from being centralized to decentralized, and then back again.
  3. Sometimes it’s useful to see where the “money people” (e.g. Venture Capitalists) are putting their bets on the future of computing trends.

If you follow the tech media, you know that things like Internet of Things (IoT), Drones, Robots and Autonomous Vehicles are gathering quite a bit of investment, business partnerships, and overall market interest. Industry analyst David Floyer of Wikibon calls Edge Computing “a critical element of IoT in 2017“. Of course, this isn’t the first time that people have called for architectures that prioritize intelligence at the edge of the network.

As functionality does move away from centralized computing architectures, it brings four key elements into consideration:

  1. How much computing is appropriate at the edge?
  2. How much storage is appropriate at the edge? (and how is it maintained)
  3. How much bandwidth is needed at the edge?
  4. How are devices secured at the edge?

How Much Computing is Needed?

It all depends on the application. Does it require heavy computing resources, such as HPE’s edge systems? Does it require lighter-weight computing, like devices running AWS Greengrass? Can it use very small, low-cost computing devices like an Arduino?
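For a sense of what that lighter-weight tier looks like, here is a rough sketch of a Lambda-style handler of the kind Greengrass runs locally on a device. The event shape and the temperature threshold are made-up examples; only the handler(event, context) signature follows the standard Lambda convention.

```python
# Lambda-style handler that a Greengrass-equipped edge device could run locally.
# The event shape and the 40.0 degree threshold are invented for illustration.
def handler(event, context):
    """Filter a batch of sensor readings locally; only anomalies leave the device."""
    readings = event.get("readings", [])
    anomalies = [r for r in readings if r.get("temperature_c", 0.0) > 40.0]
    # In a real deployment the anomalies would be published upstream
    # (e.g. to an IoT topic) instead of just being returned.
    return {"received": len(readings), "anomalies": anomalies}
```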

How Much Storage is Needed? 

An on-going discussion that I’ve had with Wikibon’s Floyer is whether or not anyone really wants to manage disks (or other storage media) on remote devices. It would require backup systems to get data off the device (for capacity, archiving or analysis), and truck-rolls to repair failed disks. While the overall costs of storage have significantly dropped year over year, the cost of managing data has not dropped at nearly the same rate.

It’s possible that the data doesn’t need to remain on the device (or at the location), in which case a “disposable” device could simply be replaced with another when its storage capacity is full.

How Much Bandwidth is Needed? 

This is a double-edged question. How much does bandwidth cost, and is bandwidth even available at the remote location? For many parts of the world, cellular data is still extremely expensive and not always available, especially in remote applications (wind farms, etc.)

How much data does the application/device generate? Does the application need to send large amounts of data back to a centralized location, or can it keep the majority of data local for localized actions? Can the application use cached data at the edge of the network? IoT standards bodies and manufacturers are already working on TCP/IP protocols to better manage bandwidth usage and chatty protocols.
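As a rough sketch of that trade-off, an edge process might buffer raw readings locally and only ship a compact summary upstream. The sensor read and upload functions below are placeholders, and the batch size is arbitrary.

```python
# Sketch of an edge process that keeps raw samples local and only uploads a
# small summary, trading local storage/compute for bandwidth.
import statistics
import time


def collect_reading() -> float:
    """Placeholder for reading a local sensor (e.g. temperature in Celsius)."""
    return 21.5


def send_summary(summary: dict) -> None:
    """Placeholder for the upstream call (HTTP, MQTT, etc.)."""
    print("uploading", summary)


def run(batch_size: int = 60, interval_s: float = 1.0) -> None:
    buffer = []
    while True:
        buffer.append(collect_reading())
        if len(buffer) >= batch_size:
            send_summary({
                "count": len(buffer),
                "min": min(buffer),
                "max": max(buffer),
                "mean": statistics.mean(buffer),
            })
            buffer.clear()  # the raw samples never leave the edge
        time.sleep(interval_s)
```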

How to Secure Edge Devices? 

This is going to be an on-going question for many, many years. How do you update tens of millions of devices when a Linux kernel bug is found? How do you make sure that a virus isn’t shipped with a piece of firmware before the device even boots? How do you make sure that devices aren’t compromised and turned into bots used for DDoS attacks on major Internet services?
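One small piece of that puzzle, sketched below under simplifying assumptions, is verifying a firmware image against a known-good digest before it is ever applied. Production devices would verify a cryptographic signature anchored to a hardware root of trust rather than a bare hash, but the shape of the check is similar.

```python
# Simplified check that a firmware image matches a known-good digest before
# it is flashed, so an obviously tampered image never boots. Real devices
# would verify a signature chained to a hardware root of trust, not just a hash.
import hashlib
import hmac


def firmware_is_trusted(image_path: str, expected_sha256_hex: str) -> bool:
    sha = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            sha.update(chunk)
    # constant-time comparison to avoid leaking information via timing
    return hmac.compare_digest(sha.hexdigest(), expected_sha256_hex.lower())
```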

There is a good chance that the next evolution of the Internet will move more functionality to the edge. It will unlock new business opportunities and potential value creation for end-users. But what the new architectures will look like is still very much an open question.


December 31, 2016  1:18 PM

4 Approaches to a Hybrid Cloud

Brian Gracely
AWS, Azure, containers, EMC, Google, Hybrid cloud, Kubernetes, Multi-cloud, Red Hat, VMs, VMware

The concept of “Hybrid Cloud” has gone through many definitions and iterations over the last few years. In 2009, EMC introduced the concept as matching hardware (e.g. Vblocks) + software (e.g. VMware) in both a private cloud environment and a managed service provider. Many other hardware vendors quickly adopted a similar definition, hoping to sell their gear into private clouds and managed service providers. In 2014/15 that model eventually evolved to include VMware’s “cross cloud architecture“, using VMware NSX to interconnect networks between private clouds and a new set of public clouds (e.g. AWS). These models depended on uniformity of technology, typically from a single vendor. They were primarily based on proprietary technology and were not driven by the cloud providers.

Over the last couple years, a few new approaches have emerged.

Kubernetes Federation based on Open Source technologies

When Google open sourced Kubernetes in 2014, this was the first time that one of the major cloud providers made core “cloud” technology available to open communities. Support for Kubernetes has grown at an incredible pace in 2015 and 2016, far surpassing any other open source cloud platform project. And with the v1.5 release of Kubernetes, “Federation” has now been enabled to link clusters of Kubernetes resources across clouds. While still very new technology, it has the ability to connect any cloud using open source technologies, built on proven cloud provider technology and enhanced by 1000+ developers. Beyond Google’s contributions and GKE service, the Kubernetes model has been adopted by enterprise companies such as Red Hat (OpenShift), Microsoft, Intel, IBM, SAP, Huawei, Deis, Apprenda, Apcera, Canonical, VMware and many others.

“AzureStack” – Pushing Azure into the Private Data Center

In early 2015, Microsoft announced the preview of the AzureStack concept. The idea was to bring the breadth of the Azure public cloud services into a customer’s data center, running on servers owned by the customer. This would allow customers to consistently run Azure services in a private data center or in the public Azure cloud. At the time, the Azure team still had to evolve many concepts, including which sets of features (all, partial) would be included. AzureStack has determined which hardware platforms it will support, and the ship date has been moved to “mid-CY2017“. Given the breadth of open source usage in the Azure public cloud, it will be interesting to see what open source technologies are supported at GA in 2017. It is also an interesting strategic approach to attempt to ship a large set of Azure features into a single-bundled release. This seems more like the legacy Windows approach than the more modern “modular, independent services” approach used in the public cloud.

“Greengrass” – Pushing Lambda into the Private Data Center

For many years, AWS avoided the term “hybrid cloud” like the plague, even in partnerships. They still don’t embrace the concept (in terminology), but they do seem to be coming around to the idea that not every use-case or workload will run in the centralized AWS cloud. I say “centralized AWS cloud” because their 2016 re:Invent announcements introduced a number of services (Snowmobile, Lambda@Edge, Greengrass) that extended the reach of the AWS cloud beyond their data centers. One of those announcements was “AWS Greengrass“. This new service extends the AWS Snowball form factor into a service that can live at a customer’s location for a prolonged period of time, managed by AWS. It includes both storage services and AWS Lambda “serverless” compute services. In stark contrast to Azure’s approach, the AWS approach is much more of a lightweight, MVP (Minimum Viable Product) offering. While serverless computing is still in its infancy, it is beginning to show promise for specific use-cases.

Multiple Approaches, Multiple Customer Choices

These four approaches offer customers a wide variety of choices if they wish to use multiple cloud resources or build “hybrid” services across clouds. Some are based on hardware + software, while others are based solely on software. Some are specific to a vendor or cloud, while others embrace open source software and communities. And some offer different choices about who is responsible for acquiring, managing and operating the cloud services on an on-going basis. Being able to leverage multiple cloud resources (cost, geography, breadth of services) is still a top priority for many CIOs, so it will be interesting to see if these new approaches to hybrid cloud services gain greater traction than the previous incarnations.


December 18, 2016  7:49 PM

Predictions for 2017

Brian Gracely
Amazon, AWS, Cisco, Drones, Hardware, HPE, Jeff Bezos, taxes, VC

Yep, it’s time to write the “predictions” articles. Feel free to go back to my previous years’ predictions (2015, 2014) to see if I’m a complete idiot or only partially an idiot. We all know that in 2017 we’ll be using VDI or Linux on our desktops, right after we refuel our flying cars and watch the Super Bowl on our 3D TVs. But maybe some other things will happen too…


[1] The Trump Administration will pick a fight with Jeff Bezos (and maybe Amazon).

I typically don’t like to discuss politics on this blog, but given the current environment in the US, it’s difficult not to envision the intersection of tech and politics. Looking at the incoming administration’s past actions, it’s not difficult to see President-elect Trump going after Amazon CEO Jeff Bezos for various reasons. The two exchanged words directly during the election over various issues (e.g. Internet taxes), and the Bezos-owned Washington Post has frequently been critical of Trump. If Trump decides to pick fights about the US losing jobs, he could point to things like Amazon Robotic Warehouses, Amazon Go Grocery Stores, Amazon Prime Air Drone Delivery, their evolving Autonomous Delivery Truck fleets or the decline of IT jobs from AWS. It will be interesting to see how Wall Street reacts to tweets about specific companies once Trump officially becomes president.

[2] The hardware-centric companies will go through significant reorganizations and consolidation.

Regardless of which forecast model you subscribe to, there is mostly consensus around the expectation that selling hardware into corporate data centers will be a more difficult business in 2017 (and beyond). The overall business is essentially flat, depending on the segment, and margins have been dropping for many years. The Dell/EMC acquisition has been at the forefront of this trend, and we’re already seeing the largest companies making moves (Dell/EMC, Cisco, HPE) to be less focused on software, less focused on cloud computing and less focused on business models (e.g. open source) that differ significantly from their core business of selling hardware. Some pundits believe that we’re in for even more consolidation or extinction in the hardware-centric portion of the IT vendor landscape.

[3] AWS will quietly launch a pseudo-VC firm to attract developers instead of letting them go to start-ups.


AWS is well-known for finding inefficiencies (or areas of profitability) in the IT industry and creating a new business offering to capture a portion of that space. With their insight into the usage models of many startups, it wouldn’t be unexpected to see them create a direct incubator or VC-like program for new feature-building organizations. This was partially signaled by Adrian Cockcroft (@adrianco), VP of Cloud Architecture Strategy at AWS and former VC at Battery Ventures.

[4] “Flyover state technologies” become a serious conversation, driving companies to establish themselves in red states.

“Are you willing to relocate to San Francisco?” The question gets asked all the time for high-tech jobs, especially in software-centric industries, which means there is software and engineering talent outside of Silicon Valley today. But the framework for innovation (VC capital, meetup events, many local companies for job-hopping) is established in Silicon Valley. Over the next few years, it’s highly likely that we’ll see programs and incentives put in place to encourage more innovation to be created and grown in areas outside of Silicon Valley, Seattle, Boston, Boulder, Austin, Raleigh, etc. Will we see a rejuvenation of automotive technologies in the Great Lakes region? Will we see the next great wind-energy company in Kansas or Nebraska? Getting manufacturing jobs to return to the United States will be a complicated economic endeavor (tariffs, tax breaks, deals), but the opportunity to create the next set of technologies may be more realistic. Plenty of areas in the flyover states are looking to boost their local economies and have excellent university systems to draw ideas/research from. But will the other pieces of the needed ecosystem evolve as well?

[5] Creative options will be proposed to repatriate revenue for US tech companies.

American tech companies are holding hundreds of billions of dollars in overseas accounts. They have lobbied for years to try and get a “tax holiday” to repatriate those funds back to the United States. Without guarantees on how the funds would be used to improve the American economy, instead of being used for stock buybacks or executive bonuses, the US government has rejected these demands. But that impasse might be coming to an end as the US has massive budget deficits and big plans for job-creation programs under the new administration. I expect that we’ll see a program put in place that will allow a reasonable tax rate for repatriated funds (~10%) in exchange for the tax funds to go directly towards those job-creation programs. Whether or not the program would be successful towards that end goal is TBD, but it would be perceived as a win-win for both the government and tech executives.


December 6, 2016  10:46 PM

Key Areas to Watch from AWS re:Invent

Brian Gracely
"silicon valley", ai, AWS, Lambda, Machine learning, Managed Services, SaaS, Startups

30,000+ people descended on Las Vegas last week for the annual AWS re:Invent conference. As I’ve written before, it has become one of the top conferences for the IT industry. With so many people in attendance, we had to know that many new services would be announced; and AWS did not lack for quantity or creativity.

Between CEO Andy Jassy’s keynote on Day 2, and CTO Werner Vogels’ keynote on Day 3, the list of new capabilities that are now available or in early preview was impressive.

So with all these new things to digest and understand, what are the most important to really dig into? IMHO, these are the key areas. BTW – if you really want to dig into the new services, go check out the AWS Blog and the dozens of new posts from Jeff Barr. Or go check out the keynote and session videos on AWS’ YouTube channel.

AWS Everywhere

We already know about AWS in their cloud data centers, but now AWS wants to be everywhere. They added enhancements to their Snowball boxes to include more storage and some amount of computing capacity. This is the beginning of a branch-office type of strategy. They drove semi-trucks on stage for their Snowmobile offering, which is the beginning of AWS in your data center. They added Greengrass to IoT devices, which gives them an expanded edge-device story. And Echo/Alexa continues to gain traction for home automation.

Lambda (Everywhere) 

The Serverless movement, primarily driven by AWS Lambda but also new entrants like OpenWhisk, Iron.io, Azure Functions, the Serverless Framework, etc., has been growing rapidly. ServerlessConf occurs every 4 months and continues to grow. And AWS announced Lambda would be everywhere – on the edge of their CDN network, in the small Snowballs and the big Snowmobiles, and in Greengrass IoT devices. Not to mention tons of expanded management and monitoring capabilities like X-Ray and Step Functions. AWS has some container functionality available via ECS and Blox (open source), but it’s clear that they’d like to see their customers jump from using EC2-centric VMs to serverless functions in the future.
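To illustrate what that jump looks like from application code, here is a hedged sketch of calling a Lambda function directly with boto3 rather than running the logic on an EC2 instance. The function name and payload are hypothetical; the invoke call itself is boto3’s standard Lambda API.

```python
# Call a "serverless" function synchronously instead of running the code on a VM.
# The function name "resize-image" and the payload are hypothetical examples.
import json

import boto3

lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="resize-image",       # hypothetical function
    InvocationType="RequestResponse",  # wait for the result
    Payload=json.dumps({"bucket": "photos", "key": "cat.jpg"}),
)
print(json.loads(response["Payload"].read()))
```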

The Future of the Ecosystem?

On Day 1, Andy Jassy told partners that he expected them to be much more committed to AWS than to other cloud offerings.

On Day 2, Werner Vogels made an interesting statement during his keynote (paraphrased): “In the future, everyone will have access to the exact same software, so data is the only true differentiator.”

It was an interesting statement, because it came around the time when AWS was picking off ecosystem partners that do things like Application Performance Monitoring. And then new AWS VP Adrian Cockcroft (@adrianco) sent out this tweet:

[Screenshot of a tweet from Adrian Cockcroft]

When a former venture capitalist moves over to the leading cloud provider and tells people to stop building startups the old-fashioned way (via VC funding) and start becoming a service on AWS, it signals the scope of AWS’ intentions for their current and future ecosystem. I’m not sure Silicon Valley is ready to concede the future of startups to becoming AWS teams, but it will definitely be a hot topic of discussion in 2017.

The Advancement of Machine Learning

If you want to read a great analysis of AWS vs. Google vs. Apple as product vs. services companies for the 21st century, go become a daily reader of Ben Thompson’s Stratechery blog, or listen to his weekly Exponent podcast. Within those discussions is a deep focus on why Alexa is a perfect offering from AWS for multiple reasons:

  • It’s centered around cloud-based services (AWS, Lambda)
  • It’s centered around a “new” digital interface (voice) instead of an existing digital device (phones, laptops, etc.).
  • It’s priced low enough to capture mass markets.
  • It’s cross-functional, so it can interact with both AWS and Amazon, and both companies can learn from the user interactions.

Those learnings about usability and technology are now being delivered as services via AWS – Lex, Polly and Rekognition. While some people will look at these and say that they are late to market compared to similar Google Cloud AI/ML services, the way that AWS markets these to existing enterprise customers will help them in those markets. For example, Google demonstrates their offerings as a “fun consumer service” (e.g. a complement to Google Search). AWS demonstrates them as a next-generation call-center or personal business assistant. Their understanding of the target audience is much different.
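As a rough sketch of how an enterprise application might consume two of these services, here are the standard boto3 calls to Rekognition and Polly; the bucket, object and output file names are invented for illustration.

```python
# Label an image already stored in S3, then synthesize a short spoken reply.
# Bucket, object and file names are hypothetical; the API calls are boto3's.
import boto3

rekognition = boto3.client("rekognition")
polly = boto3.client("polly")

# Image recognition: e.g. categorize an attachment in a support workflow.
labels = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-support-uploads", "Name": "receipt.jpg"}},
    MaxLabels=5,
)
for label in labels["Labels"]:
    print(label["Name"], round(label["Confidence"], 1))

# Text-to-speech: e.g. generate an audio prompt for a call-center application.
speech = polly.synthesize_speech(
    Text="Your order has shipped.", OutputFormat="mp3", VoiceId="Joanna"
)
with open("reply.mp3", "wb") as f:
    f.write(speech["AudioStream"].read())
```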

2016 was a Tipping Point

While it’s taken many people a long time to grasp the scope of AWS and its growth trajectory, they have mostly begun to understand its role in the new world of IT. I think we’re going to see people taking many, many months to come to terms with the breadth of the foundation that AWS laid out at re:Invent 2016. There are some very foundational elements that will take many years to mature and evolve, but the pieces are there to have a significantly bigger impact on business and the IT industry than their first 10 years in service did.


November 27, 2016  6:51 PM

Multiple Ways to View AWS re:Invent 2016

Brian Gracely
AWS, Developers, IaaS, Infrastructure, Lambda, Open source, PaaS, SaaS, VMware

For the last 20 years, the world of IT has been defined by fairly well-known swim lanes of technology, technology vendors, and technology supply chains. Developers built applications. Operations managed the applications and the underlying infrastructure. Companies bought technology from established vendors, or from companies within a vendor’s partner ecosystem (VARs, SIs, ISVs, etc.). Networking companies sold networking gear. Storage companies sold storage gear. Database companies sold databases and applications. And so on. But the last few years have made those swim lanes very blurry, with the IT landscape dramatically changing via public cloud services (such as AWS), open source software, and a number of traditional technology companies evolving to become more solution providers than technology innovators.

So with AWS re:Invent this week, it’s always good to stop and look at the IT landscape from several different perspectives. The show is expecting 30k+ attendees, which would put it among the Top 5 major events for the industry, along with Salesforce Dreamforce, Oracle OpenWorld, Microsoft Ignite and VMware VMworld.

Who is the AWS re:Invent audience?

To a certain extent, the IT industry is only so large, so it’s not unusual to have overlap (or migration) between large events. For example, the infrastructure audience migrated to the VMware audience, then to the OpenStack audience, and now in part to the AWS audience. We can see that in the companies that sponsor events like VMworld. But the sponsors for re:Invent are different – for the most part, they focus on things around the infrastructure: companies that manage cloud resources, manage cloud costs, manage cloud projects and integrations (SIs), monitor cloud applications, and secure cloud applications. They also include advanced application frameworks (Salesforce, OpenShift, etc.) that can interact with native AWS services, as well as a whole range of companies that sell SaaS applications, either directly or through the AWS Marketplace. AWS grew up with start-ups that evolved into web-scale companies. But over time, they legitimized “shadow IT” and have been attracting more and more enterprise line-of-business groups that have immediate needs, or needs that exceed the day-to-day capacity (and organizational model) of their in-house IT organization. The evolution of “digital transformations” at many companies will ultimately decide which groups are making the key IT decisions going forward, and where those decisions will be executed.

Where does AWS fit into the IT Landscape?

Trying to identify where AWS fits in the IT landscape is complicated if viewed through traditional lenses. It’s a co-lo cloud provider, a managed services provider, a technology leasing company (from which you lease in increments from milliseconds to years), a SaaS provider (for some applications), and some combination of an IaaS and PaaS platform. But it’s also your new server vendor, your new storage vendor, and your new security vendor. It’s been suggested that they are taking significant business from traditional vendors.

While no company seems to be immune from competition with AWS (yet), the more successful companies seem to be software vendors that are able to add value on top of the core AWS offerings. The less successful companies seem to be the ones that want to compete directly with them, especially cloud providers. As we’ve seen with many managed-service providers (Rackspace, VMware, etc.), they are moving towards a model of managing services on top of AWS instead of continuing to invest in their own infrastructure and data centers.

Is Your Company Prepared to Transform to an all-in AWS Model? 

We’ve all heard the stories about Amazon’s two-pizza team culture, but is your IT organization (or shadow IT teams) ready to fully adopt all of the changes needed to go all-in on an AWS model (or even a hybrid AWS model)? While some well-documented examples exist, there are still many examples of companies that are making transitions in-house. And given the intense demand for talent, it’s not unusual to hear people now say that they don’t want to share details of their cloud successes (in-house or public cloud) because they fear the talent poachers.

What are the “New Rules” of the IT landscape?

This is the real question that people should be asking themselves. It’s fairly clear that open source and public cloud are going to re-write the rules for the next 10-20 years (with changes coming every 3-5 years), but what will those new rules be? I’d be curious what your thoughts are. I’ll be digging into this in future posts.



November 16, 2016  1:52 PM

Digital Communications – Using Technology to Eliminate Your Resume

Brian Gracely
digital, newspaper, resume, Technology

When I was a senior in high school, I was a fairly good baseball player with ambitions to one-day play in the Major Leagues. I wrote a senior essay about getting injured while playing for the Chicago Cubs and missing an opportunity to play in the World Series.

This past week, I watched with millions of others as the Cubs broke their 108-year drought and finally won the World Series. It was a thrilling series, with the final game going into extra innings and late into the night. I stayed up to watch the end of the game on TV, while also following along with the emotions of fans on Twitter.

And the next day I saw this image on my Twitter timeline. The picture shows the dramatic 8th-inning home run by Cleveland’s Rajai Davis to tie the game at 6-6. It was an important moment in the game, but it wasn’t “the moment”. The readers of The New York Times were denied that moment because the paper had to go to press before the game ended in the 10th inning, about an hour later. The printed NYT missed the biggest sports news story because it is an irrelevant medium.

[Image: New York Times front page featuring the Rajai Davis home run]

This picture caught my attention not because of the Cubs, but because it closed the loop for me. My very first job was delivering newspapers door-to-door. This was in the early 1980s, when major cities often had multiple thriving newspapers.

By the time I got into the technology industry in the mid-1990s, printed newspapers were still a viable business, but the Internet was beginning to allow people to self-publish content for a fraction of the cost. This publishing model knocked down the local barriers to information, as it became simple to find information from around the world (thanks AOL and Yahoo and Google). But Internet information was still not nearly real-time, as we didn’t have it in our pockets the way we do today.

But 20 years later, the cycle is complete for me. Newspapers have consolidated, gone digital and transformed into something very different from what I delivered door-to-door as a child. One pillar of communications in our country was disrupted, or destroyed, by the new pillar of communications in our world. I’ve had a front-row seat to both models. It was a 30+ year cycle. It makes me wonder: how fast will the cycles evolve for other major pillars of our world?


November 16, 2016  1:47 PM

Does Digital Transformation need a Technical MBA?

Brian Gracely
Business, career, cloud, DevOps, Digital transformation, Github, MBA, Skills

When you attend quite a few technology conferences, you tend to hear the same messages and narratives over and over again. Stop me if you’ve heard this one: to drive a digital transformation within your business, you’re going to have to become a software company that uses DevOps to build cloud-native applications using microservices and continuous integration on immutable cloud infrastructure. Your company needs to become like Uber, Netflix or AirBnb in order to avoid getting Uber’d in your industry.

At a recent show, an Enterprise Architect came up to me and asked a straightforward question – “Assuming I could figure out how to make all that technology work, how would I explain it to our business leaders in a way that they understand it… in business terms, not technical terms?” I took a stab at trying to explain that here, along with an ROI model and a real transition example. [Disclosure: I work for Red Hat and had all those examples handy – this blog isn’t supposed to be a Red Hat advertisement.]

I’m reasonably versed in how to talk to technical and business audiences because I’m a weird mutt who has been a solution architect and has an MBA. All that means is that I know that sometimes you need to talk about Agility vs. ROI vs. Cost of Capital vs. Internal Rate of Return. They all relate to similar things: are we getting measurable value out of the money and effort we spend, relative to the other areas where we could spend that time and money?
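To make that concrete, here’s a toy sketch of translating a technology investment into those finance terms. All of the numbers are invented for illustration; the point is the translation, not the math.

```python
# Translate an automation investment into NPV and simple ROI, the terms a
# business leader actually uses. Every number here is made up for illustration.
def npv(rate: float, cash_flows: list) -> float:
    """Net present value; cash_flows[0] is year 0 (the up-front investment)."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))


# Year 0: spend $200k on CI/CD automation; years 1-3: save $90k/year in release effort.
flows = [-200_000, 90_000, 90_000, 90_000]
cost_of_capital = 0.10  # the hurdle rate the CFO cares about

investment = -flows[0]
gains = sum(flows[1:])

print(f"NPV at 10%: ${npv(cost_of_capital, flows):,.0f}")     # roughly $23,800
print(f"Simple ROI: {(gains - investment) / investment:.0%}")  # 35%
```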

So this brings me back to that question. If, as a technology industry, we’re asking engineers to be interested in digital transformations, then we need to give them the basic tools and language to explain to a business leader how some piece of new technology (or process) will improve the business. Not just how to save technology or process costs, but how to truly impact the company’s revenue growth and profitability. These are essentially “Technical MBA” skills, for lack of a better term. And I’m not aware of any place that currently offers these frameworks or skills. I’m sure there are 50-page documents or spreadsheets that a high-priced consultant would provide for a large fee, but shouldn’t this be something that is freely available to help our industry succeed and expand?

I’m willing to start building some of this content and putting it on GitHub, but I’m curious whether other people think this is a needed set of knowledge. If so, what do you think should be included? I have some ideas, but I’d love to get your feedback or suggestions. And if you’re interested in participating, please reach out to me and we’ll figure out a way to collaborate.


November 10, 2016  9:57 AM

DevOps and the US Elections

Brian Gracely
DevOps, Networking, software-defined, Storage, Virtualization

This isn’t a deep-thoughts piece or a “hot take” on the election. There will be no shortage of those to fill your time, if you choose. This is simply an observation based on a few things I’ve seen and heard over the last few months – and then connected some dots while watching election coverage the other night.

For all the reasons* that people will claim for why the results of the election happened, one thing that appears to be “true” is that a large portion of the population is tied to an economically struggling model (especially around manufacturing) and has felt neglected or marginalized by other groups of people. This is sometimes tough for the technology crowd to understand, especially if you’ve never driven through a Rust Belt town that has been decimated by a factory closing. “Just get re-trained” isn’t really a viable option for many of these people, for a wide variety of reasons. These people were told that the business world moves fast and that they should get on board with models that seem to replace the need for them.

* NOTE: I’m fully aware that there are many other issues/causes that impacted the results of the election. I’m not trying to minimize them or debate them here.

As I was attending several tech conferences recently, the topic of “DevOps” came up frequently. The discussions were about customers that wanted to “do some DevOps” or “add some DevOps”, usually because their management wanted them to understand that the business world moves fast (and they need faster software, or better quality software). Now if you’re a fairly skilled SysAdmin, focused on the Infra/Ops for compute, then the DevOps push to automate all the things isn’t that big of a leap for you. You most likely have some of the basic skills needed to make this transition – an understanding of Linux, basic scripting skills, etc. But if you’re from the rest of the Infra/Ops team, responsible for things like Networking, Storage, Virtualization, Security, etc., then you might be feeling like one of those Rust Belt workers. Your vendors haven’t really given you the tools to do all the necessary automation, and in some cases, they are also struggling to stay viable as these new DevOps approaches impact their existing customers (e.g. “Software Defined Everything”). Those people keep hearing that the skills and tools they have worked with for 10+ years are now “commodities” and should be marginalized or ignored.

I’m not sure if DevOps is the way forward for many IT organizations, mostly because I can rarely find two people who have the same definition or model of what DevOps is. There’s the Gene Kim “Phoenix Project” model, but I don’t see that out in the wild as much as I see the book on people’s desks. There are probably lots of reasons for that, but it seems like one of them might be that the DevOps world tends to treat the non-experts as a marginalized class of IT. The “they just don’t want to learn” set of people.

It’s not a perfect parallel or analogy, but since DevOps likes to draw Lean Manufacturing parallels to software development, I believe that we also need to be cognizant of the people who are part of the factory – not just the processes. They are being told that they should get on board with models that seem to replace the need for them.


