From Silos to Services: Cloud Computing for the Enterprise

March 20, 2017  9:52 PM

Tech Jobs: Is 45 the new 65?

Brian Gracely
"silicon valley", Family, IBM, Remote workers, Seattle, Telecommuting

If you’re in the 35-45 year old demographic in the tech industry, there is a frequent conversation that you have with colleagues of a similar age. It’s a mix between “What’s next?” and “How are you dealing with things?” At that age, you typically have a few common characteristics:

  • A reasonably well-paying, mid-level to senior role, often moving up a management chain.
  • An established place of residence, which may or may not be located close to your company’s headquarters.
  • In many cases, a spouse and a family, with children who are beginning to have close ties to their school or to the clubs/groups associated with their activities.

If this was your situation and you lived in Silicon Valley over the last 5-7 years, there were lots of stories about rising housing prices, and about the shift in focus from the hardware companies in the South Bay toward the social media and app companies in San Francisco that target workers in their 20s who are OK with the crowded apartments around the city. Many of the people in those situations looked to move to places like Portland or the Seattle suburbs to stay in the tech industry but have a more family-friendly lifestyle.

But if you live outside Silicon Valley (or Seattle, Portland and maybe Boston….although Boston is less IT-centric these days), it’s a different story. For the last 10-15 years, the technology that allows remote work has improved considerably – home broadband, WiFi, mobile devices, video conferencing, chat tools like Slack, social media, etc. – and many companies were able to grow their workforce and add expertise without requiring people to be in expensive headquarters buildings.

But it’s beginning to look like the trend around remote workers might be changing. Last year, there was a meme on Twitter about the requirement that workers “must relocate to San Francisco“. Unless you were moving from Seattle, NYC or Boston, you could expect at least a 3x cost-of-living increase with that move…so hopefully your stock options were going to be worth something BIG!

This year, we might be seeing the beginning of a more serious trend for the 35-45 crowd – IBM’s new policy requiring many workers to come into a HQ or regional office, or their roles will no longer exist. While IBM is pitching this as a move to improve productivity, teamwork, and morale, it’s not hard to alternatively look at it as a way (in theory) to reduce the number of older employees without having to create an HR policy about “getting rid of older, more costly workers”. Many of those remote workers will not want to disrupt their established lives, and hence may not be willing to pack up and move.

Where this may start to become interesting is if many of the other “incumbent” vendors, especially those that are struggling with declining margins or legacy business models, decide to follow IBM’s lead. Many of those companies have either been acquired by Private Equity firms (e.g. Dell/EMC) or have those PE firms influencing strategies at the board level (here, here, here). In those scenarios, cost-cutting tends to have as high a priority as innovation, sometimes higher.

IBM is only one data point, but it’s a policy that will be watched by other management teams and should be watched by the 35-45 crowd too. The days of the remote office worker might be significantly changing…if you plan to stay with one of the incumbent vendors.

March 10, 2017  2:41 PM

The Evolution of Container Platforms

Brian Gracely
AWS, Azure, Cloud Foundry, Docker, Google Cloud, Kubernetes, OpenShift

I get asked all the time about the differences and benefits of the various container technologies, especially on the orchestration side – often called the “container platform”. At some point, it’s better to write down the answer than to keep repeating it, so here goes.

Even though most people only think about containers in the context of Docker (circa 2014), containers and container platforms have been around for quite a while. But let’s start with the modern-day platforms that most people know – Heroku, Google AppEngine, Cloud Foundry, OpenShift, AWS Beanstalk, dotCloud.

The Early PaaS Days

Around 2009-2011, most of these platforms came into the market with the goal of making it easy for application developers to “just push code into production”. By most definitions, these platforms were known as Platform-as-a-Service (PaaS). To some extent, these platforms did a great job of making things simple for developers by hiding all the complexity of IaaS and associated services (LBaaS, DBaaS, authentication, etc.). And under the covers, most of these platforms used some variant of Linux containers to isolate and run developers’ applications. These were the original homegrown/DIY container schedulers/orchestrators.

But these early platforms also had some limitations:

  • Some of them only ran in a specific cloud environment
  • Some of them only supported 1 or 2 languages
  • Some of them were open-source, while others were various levels of proprietary software.
  • Most of the open-source platforms didn’t have large community support.

So in the 2011-2013 timeframe, plenty of articles were written about the death of PaaS and how it hadn’t gained massive adoption.

The Open Standards Emerge

And around 2014, three very important things happened. dotCloud went out of business, but it spun out the docker project, simplifying how containers could be used by developers to package their applications. Usage of docker for container packaging took off, and the market now had a pseudo-standard for packaging applications. In addition, Google decided to open source the Kubernetes project, and the Mesos project spun out of work at Twitter and UC Berkeley. These provided the market with choices of open source container schedulers/orchestrators that had web-scale DNA built in. These three activities helped developers know where to focus their energies in building scalable container platforms.

Open Container Platforms Everywhere

As we move into 2016-2017, the market has evolved quite a bit since the early 2009-2011 platforms. Almost every major platform (or public cloud) now supports either Kubernetes or Mesos as the scheduler/orchestrator, and the docker project as the format for container images and runtimes. The Open Container Initiative (OCI) is further standardizing the container image and runtime formats into standards that are less aligned with a single company. Kubernetes has gained almost 4x the developer support of alternative container schedulers/orchestrators, mostly because of the openness of the community and because it supports many application types with the default scheduler.
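
To make the “scheduler/orchestrator” role concrete, here is a minimal sketch using the official kubernetes Python client. It is my illustration rather than any vendor’s recommended tooling, and it assumes a working kubeconfig; the point is that the same API call works against GKE, OpenShift, or a local minikube cluster.

    # pip install kubernetes
    from kubernetes import client, config

    # Load credentials from the local kubeconfig (e.g. ~/.kube/config)
    config.load_kube_config()

    # Ask the orchestrator what it is currently running; the identical
    # code works against any conformant Kubernetes cluster.
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        print(pod.metadata.namespace, pod.metadata.name, pod.status.phase)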

Moving Forward…

The market will continue to evolve at a rapid pace. With all of the community focus on open frameworks and standards, businesses can feel much more confident in the platform decisions they are making for the next 5-10 years.


February 28, 2017  2:24 AM

Will VCs stop funding Open Source companies?

Brian Gracely
"Venture capital", Cisco, Cloud Foundry, CloudStack, Docker, EMC, HPE, Kubernetes, Open source, OpenStack, Oracle, SDN, SDS, VC

For the last 7-8 years, we’ve seen a large increase in the number of infrastructure software companies funded by Venture Capital firms. Companies such as Cloud.com (CloudStack), Piston Cloud (OpenStack), ActiveState (Cloud Foundry), Docker (containers), Mesosphere (Mesos) and many others were funded to contribute to large open source projects, as well as to commercialize that software.

These open source projects have become the new “standards”. The projects and the code have replaced the function of previous standards-bodies like IEEE or IETF in defining how infrastructure would work. Working code has become the new “standard”.

While some of these infrastructure companies have been acquired by traditional infrastructure companies (such as HPE, Cisco, EMC, and Oracle), almost none of those transactions would be considered “big exits” in terms of VC returns.

The VC community is at a crossroads with open source. As we recently discussed with Scott Raney from RedPoint Ventures, open source is both an opportunity and a challenge for VCs. It can enable their portfolio companies to invest less in software and more in engineering. But it can be difficult to monetize the projects unless they gain a significant community following and user base. This challenge has also been addressed several times on the a16z podcast by former Nicira CEO, and current Andreessen Horowitz VC, Martin Casado (here, here). The added challenge for VCs is that many developers only want to work with open source software.

This brings up an interesting question…

If the returns for VCs aren’t going to be at the levels their funds require, except in very special cases, should we expect VCs to begin pulling back from this space?

So how would the markets react if that VC funding were to slow down or stop entirely? There are a few interesting areas to consider:

  • For the last decade, open source communities have been rapidly innovating in new areas – cloud, containers, big data, databases, IoT, software-defined infrastructure, etc. If the funding isn’t there to drive innovation and on-going engineering, then where will the new innovation come from? Is it possible that we’d see a resurgence of proprietary software offerings?
  • Many public cloud companies are leveraging open source infrastructure to deliver their cloud services, both behind the scenes and as on-demand services. Will they be able to continue to add new functionality at the same rate that they have over the past few years?

This will be an interesting trend to track in 2017 and 2018. With more businesses looking to leverage multiple cloud (or hybrid cloud) services, and build consistent operational models across clouds, it could be challenging without those broad open source standards.


February 19, 2017  5:03 PM

The Evolution of Serverless

Brian Gracely
AWS, Azure, Cloud Foundry, FaaS, Functions, IBM, Kubernetes, Lambda, Red Hat

Image Source: “Awesome Serverless”, from GitHub

It’s been a little more than six months since I last wrote about “serverless”, and a good bit of change has occurred in the market during that time. In a nutshell, serverless has evolved from a niche topic to one that is gaining more technology options, and more actual deployments are starting to be showcased at various meetups and events.

For a high-level view of the marketplace, this is a really nice list of services and technologies that are available today. This is another great source of information. As you can see, serverless is covering much more than just AWS Lambda, and it extends from services (e.g. Email, CI/CD Pipelines, etc.) to frameworks to entire platforms focused on vertical elements of a business technology stack (e.g. payments and eCommerce).

A number of trends are evolving across the industry:

  • Serverless or FaaS (Functions as a Service) is becoming more polyglot. The breadth of supported languages has evolved from Python or Node.js to a much broader set.
  • Serverless is becoming either an add-on or a native service for many of the CaaS (Container-as-a-Service) or PaaS (Platform-as-a-Service) platforms. For example, many new projects for serverless have been built to run on Kubernetes – Kubeless, Fission, Funktion, Funcatron. At some point I would expect that there will be some consolidation of ideas between these projects, and Kubernetes will eventually have a native “functions” job, similar to “batch” or “stateful sets”. Cloud Foundry has announced that Spring Cloud Function is supposed to be available in 2017 for Spring Boot applications. I would also expect that Docker will announce a more formalized plan for serverless at the DockerCon 2017 event in April, beyond the basic demos from 2016.
  • Iron Functions, a previously proprietary technology, has moved to open source and multi-cloud support.
  • Serverless is gaining greater traction at events and meetups. ServerlessConf is now being held 3-4 times a year around the world, and events like FunctionConf are starting to pop up as well. These events are being augmented with a number of serverless-centric meetups, usually mixed into the local AWS, Docker, Kubernetes communities.
  • Many companies are beginning to ask for serverless implementations that can either run within their own data center, or a framework that will run consistently on multiple cloud environments.

The big “next steps” for serverless will be a focus on enabling data services to easily tie into serverless computing frameworks – e.g. allowing notifications from a wide range of data services to trigger serverless functions – and on working somewhat consistently in multiple cloud environments. There is also a huge need to educate developers about which types of application patterns are emerging as the best fit for early serverless adoption.
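
As a hedged sketch of that data-services-to-functions pattern, here is the shape of an AWS Lambda function in Python triggered by an S3 event notification. The processing step is a hypothetical placeholder; the Records structure is the standard S3 event payload.

    import urllib.parse

    def lambda_handler(event, context):
        # S3 event notifications deliver one or more Records per invocation
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            # Placeholder: react to the new object (resize, index, analyze, ...)
            print(f"New object arrived: s3://{bucket}/{key}")
        return {"status": "ok"}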

While the serverless movement is evolving quickly, and AWS is pushing for all-in usage by its customers, it is by no means a silver bullet for all applications. Serverless still has many limitations around operations. It still has limitations on the types of applications that can utilize the services. And most people still haven’t figured out microservices or containers or DevOps, so the prospect of breaking down applications into even more finite elements is not going to be an easy transition.

NOTE: We’ve decided to not spin off a separate Serverless podcast from The Cloudcast (@thecloudcastnet), but rather include serverless topics within the same podcast feed. But we are monitoring serverless activities and sending those out on the @serverlesscast Twitter feed.


January 30, 2017  11:28 PM

Why Containers make Sense for Modern Applications

Brian Gracely
containers, Developers, DevOps, Docker, Google, Kubernetes, Microservices, Netflix, OpenShift, Red Hat, self-service, VMware

This past week, I was having a conversation with an industry analyst about some of the fast-moving trends in the industry – specifically the growth of containers and Kubernetes. While the size of the container ecosystem is still being determined (here, here), it can often be difficult to measure because some leading companies are private, some public companies don’t break out individual product revenues, and some users are leveraging the open source DIY (“trunk”) bits. But regardless of the revenues, many people are trying to understand how the docker project has grown to 6B container downloads and how projects like Kubernetes have grown to 1,000+ developers in just over a year.

When looking at this space, there are several macro-level concepts that are driving this growth:

Businesses are becoming more digital

For many years, businesses have admired the speed at which web-centric companies have delivered innovation. Via startups and forward-looking companies, this pace of innovation is now becoming commonplace in many industries. At the core of this innovation is the people (developers), process (DevOps) and technology (containers, microservices) that have been pioneered at Google, Netflix, Etsy and others. The commercial delivery of containers and Kubernetes is beginning to bring these capabilities to companies outside of Silicon Valley.

Developers expect a self-service experience

For many years, public cloud services were called “Shadow IT”. They allowed anyone to get nearly frictionless access to IT-like resources through a self-service mechanism. Some IT vendors attempted to deliver similar capabilities to their on-premises products, but these were often via costly upgrades and additional tools which mostly looked like extensions of IT dashboards. The container-centric platforms are now embedding these capabilities, and often leading with a developer-centric UI (or APIs, CLIs) and integrated automation and scaling.

“Portability” is a design consideration

Whether or not people believe that Hybrid Cloud is a valid design approach, many companies are looking at how to leverage multiple cloud services in the future. And many of these companies are still interested in finding ways to make their technology choices portable between those clouds, in one way or another. Containers provide that portability in a way that VMs never really did. Containers can run anywhere a cloud provides a Linux host (and eventually a Windows host). Containers also provide a consistent “unit” to integrate with CI/CD systems, helping to improve the speed and quality of application pipelines.
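
As a small illustration of that portability – a sketch using the docker SDK for Python, not any particular vendor’s tooling – the same image reference runs unchanged on a laptop, an on-premises Linux host, or a cloud VM:

    # pip install docker
    import docker

    client = docker.from_env()  # talks to whatever Docker engine is local

    # The image is the portable "unit"; the same line works on any Linux host
    output = client.containers.run("alpine:3.18",
                                   ["echo", "hello from any cloud"],
                                   remove=True)
    print(output.decode().strip())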

IT organizations continue to need cost-effective solutions

As much as “digital transformation” and “agility” dominate today’s IT headlines, many IT organizations are still being asked to consistently take costs out of their systems. For many companies, software licensing and repeatable manual tasks are the focus areas of those reductions because they are quantifiable. Platforms like Kubernetes are designed to be highly automated, as well as integrate with external automation platforms for deployment and testing. Combine this with the ability to run containers on bare-metal Linux hosts and the costs of the hypervisor can often be reduced (for appropriate workloads). This report from IDC shows how some of those cost savings can be measured in a container-based platform.


January 16, 2017  10:07 PM

Evolving Monoliths vs. Microservices

Brian Gracely
containers, DevOps, Java, Kubernetes, Microservices, Software

For the last couple of years, the idea of “software eating the world” has gathered quite a bit of traction. While software-led companies (Uber, AirBnB, Netflix, Facebook, etc.) have created considerable disruption in many vertical industries, the vast majority of companies are still struggling to manage the transition from existing monolithic applications to more agile microservices implementations.

Over the last couple weeks, I’ve had the opportunity to dig into what it means to balance monoliths and microservices with several industry thought-leaders (here, here). From those discussions, a few important considerations come through:

  1. Not every application needs to be built using microservices. Plenty of existing applications can be improved by rethinking the sprint process (length, frequency).
  2. Instead of focusing on monoliths vs. microservices, the focus should be on what is needed to build and ship software more rapidly. The focus should be on making the overall business more agile, and able to react to digital feedback loops about how users interact with those applications.
  3. The testing and deployment processes are just as important as the application-building process. Many companies should initially focus on how well they are organized and prepared (e.g. automated testing systems) to test and integrate software. This is often centered on CI/CD tools like Jenkins.
  4. The cultural aspects of moving to more modular, microservices-based platforms should not be underestimated. It requires a different understanding of what it means to build an “independent service”, both from a technology perspective and an internal communications perspective (see the sketch after this list).
  5. It’s critical to have a platform in place that provides developers with self-service access to agile infrastructure resources, and the platform should abstract many of the complexities (service discovery, high availability, networking, storage, auto-scaling and load-balancing) that developers face.
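
To make point #4 concrete, here is a minimal sketch of an “independent service” using only the Python standard library. The service name, port and /health contract are illustrative assumptions; the idea is simply that the service owns its own endpoint, its own health reporting, and its own lifecycle.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class OrderServiceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # A platform (e.g. Kubernetes) can probe /health to manage the service
            if self.path == "/health":
                body = json.dumps({"status": "ok", "service": "orders"}).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_response(404)
                self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), OrderServiceHandler).serve_forever()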

Managing a portfolio of applications can be complicated, especially as it goes through an evolution that involves more than just technology updates. Cultural shifts like DevOps, and organizational shifts like “2-pizza teams”, can seem extremely complicated and uncertain in the early stages. Sometimes they require breaking out new organizations to prototype the new habits for a large organization. It’s often the willingness to adapt the culture and process to a more iterative model that lays the foundation for faster monoliths and more agile microservices applications.


January 8, 2017  8:39 PM

Tech Leadership Doesn’t Last Forever

Brian Gracely
Amazon, AOL, Apple, AWS, China, IBM, Jeff Bezos, Microsoft, Netscape

There tend to be two types of thinking about technology markets:

  1. New technologies will expand markets – The “growing the pie” philosophy.
  2. New technologies will kill old technologies – The “winner takes all” philosophy.

Since it’s more complicated to understand the dynamics of a highly competitive market, people tend to gravitate to the possibilities of winner-take-all market outlooks. These are usually the leading platforms at the time – historically IBM Mainframes, DEC minis, Microsoft Windows, AOL, Google Search, the Apple iPhone – and now people are talking about Amazon Web Services (AWS) in the same category.

While IBM, DEC and Microsoft all saw their dominance get interrupted by a shift in computing paradigms, not all leaders get disrupted by major technology changes. Sometimes the reasons go beyond technology.

For example, notorious 1920s mobster Al Capone was only taken down by law enforcement for tax evasion – not murder, racketeering or corruption. Capone had things covered to avoid being imprisoned for those more serious crimes, but he wasn’t prepared for the new government strategy. It’s not a technology analogy, but it aligns with the idea that sometimes the rules of the game change and the dominant personality in the game gets tripped up.

Some other examples:

AOL – Merger with Time Warner – At the time, the thought of marrying the “Internet” with entertainment content was considered a match made in heaven. But sometimes cultures, egos and economics don’t work out the way the spreadsheets planned.

Microsoft – Anti-Trust (Windows OS) – Microsoft had added functionality to Windows before, but it disrupted Netscape’s browser business by embedding Internet Explorer. And the world was moving from stand-alone computers to a world that would soon be connected to the Internet for all information.

Google – EU Anti-Trust – Google has had a dominant position in Internet search ever since the browser became the dominant computing UI. But mobile computing is a different paradigm, and regulators were concerned about Google leveraging its dominance for ads, apps, maps, etc. on mobile screens.

Apple – China Manufacturing – With a new administration coming into power in the United States, no company has more at stake than Apple if the administration decides to significantly change foreign policy towards China. While design was once Apple’s competitive advantage, its advantage is now distinctly about supply-chain management. Will it be able to continue to dominate the revenues of mobile computing if the US government changes the game on non-US manufacturing?

AWS/Amazon – Donald Trump’s feud with Jeff Bezos – I wrote in my 2017 predictions that it wouldn’t surprise me if President Trump went after either Jeff Bezos or Amazon. He ran on a platform of maintaining US jobs, and Amazon is pushing automation in many areas (distribution centers, shipping trucks, drone delivery, grocery stores, etc.). He has also shown a desire to discredit the media / free press, and Jeff Bezos owns the Washington Post. Trump may also look to step up efforts to collect taxes on Internet sales (e.g. the “Amazon tax“). While these possibilities may not directly impact the AWS business, which is highly profitable, they could have second-order impacts if Amazon gets tied up in government litigation the way that Microsoft was for years in its anti-trust cases.

There is a lot of uncertainty as we head into 2017, both in the US and around the world. It will be very interesting to watch and see if the dominant platforms of today will be disrupted by something outside of the competitiveness of the market.


December 31, 2016  6:04 PM

What if the Cloud moves to the Edge?

Brian Gracely
AWS, Azure, Bots, CDN, Cloud Computing, DDOS, DNS, Edge computing, HPE, iot, Security, Sensors, TCP

We know three things about the history of computing:

  1. Computing devices continue to get smaller and less expensive.
  2. As the form-factor of computing changes, the core architecture has frequently evolved from being centralized to decentralized, and then back again.
  3. Sometimes it’s useful to see where the “money people” (e.g. Venture Capitalists) are putting their bets on the future of computing trends.

If you follow the tech media, you know that things like Internet of Things (IoT), Drones, Robots and Autonomous Vehicles are gathering quite a bit of investment, business partnerships, and overall market interest. Industry analyst David Floyer of Wikibon calls Edge Computing “a critical element of IoT in 2017“. Of course, this isn’t the first time that people have called for architectures that prioritize intelligence at the edge of the network.

As functionality moves away from centralized computing architectures, it brings four key elements into consideration:

  1. How much computing is appropriate at the edge?
  2. How much storage is appropriate at the edge? (and how is it maintained)
  3. How much bandwidth is needed at the edge?
  4. How are devices secured at the edge?

How Much Computing is Needed?

It all depends on the application. Does it require heavy computing resources, such as the HPE devices? Does it require less computing, like AWS Greengrass? Can it use very small, low-cost computing devices like an Arduino?

How Much Storage is Needed? 

An on-going discussion that I’ve had with Wikibon’s Floyer is whether or not anyone really wants to manage disks (or other storage media) on remote devices. It would require backup systems to get data off the device (for capacity, archiving or analysis), and truck-rolls to repair failed disks. While the overall costs of storage have significantly dropped year over year, the cost of managing data has not dropped at nearly the same rate.

It’s possible that the data doesn’t need to remain on the device (or at the location), in which case a “disposable” device could be replaced with another when its storage capacity is full.

How Much Bandwidth is Needed? 

This is a double-edged question. How much does bandwidth cost, and is bandwidth even available at the remote location? For many parts of the world, cellular data is still extremely expensive and not always available, especially in remote applications (wind farms, etc.)

How much data does the application/device generate? Does the application need to send large amounts of data back to a centralized location, or keep the majority of data local for localized actions? Can the application use cached data at the edge of the network? IoT standards bodies and manufacturers are already working on TCP/IP protocols to better manage bandwidth usage and chatty protocols.
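
As a hedged sketch of keeping raw data local and sending only summaries upstream – the broker address, topic and device ID are hypothetical, and this assumes the paho-mqtt 1.x client API – an edge device might aggregate sensor samples and publish a single compact message:

    # pip install "paho-mqtt<2"
    import json
    import paho.mqtt.client as mqtt

    samples = [12.1, 12.4, 11.9, 12.6, 12.2]  # raw readings stay local

    # Publish only a small summary to conserve expensive/limited bandwidth
    summary = json.dumps({
        "device": "turbine-7",                 # hypothetical device ID
        "avg": sum(samples) / len(samples),
        "max": max(samples),
        "count": len(samples),
    })

    client = mqtt.Client()
    client.connect("broker.example.com", 1883)  # hypothetical broker
    client.loop_start()                         # background network thread
    client.publish("windfarm/turbine-7/summary", summary, qos=1).wait_for_publish()
    client.loop_stop()
    client.disconnect()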

How to Secure Edge Devices? 

This is going to be an on-going question for many, many years. How do you update tens of millions of devices when a Linux kernel bug is found? How do you make sure that a virus isn’t shipped with a piece of firmware before it even boots? How do you make sure that devices aren’t compromised and turned into bots that launch DDoS attacks on major Internet services?
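
There are no complete answers yet, but one small piece of the firmware question is easy to sketch: verifying an image against a known-good checksum before it is ever flashed. This is a minimal illustration only; real deployments would rely on signed firmware and secure boot, not a bare hash.

    import hashlib

    def firmware_is_untampered(path: str, expected_sha256: str) -> bool:
        """Compare a firmware image's SHA-256 digest to a published value."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest() == expected_sha256

    # Usage with a hypothetical image and digest:
    # ok = firmware_is_untampered("fw-v2.3.bin", "9f86d081884c7d65...")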

There is a good chance that the next evolution of the Internet will move more functionality to the edge. It will unlock new business opportunities and potential value creation for end-users. But what the new architectures will look like is still very much an open question.


December 31, 2016  1:18 PM

4 Approaches to a Hybrid Cloud

Brian Gracely
AWS, Azure, containers, EMC, Google, Hybrid cloud, Kubernetes, Multi-cloud, Red Hat, VMs, VMware

The concept of “Hybrid Cloud” has gone through many definitions and iterations over the last few years. In 2009, EMC introduced the concept as matching hardware (e.g. Vblocks) + software (e.g. VMware) in both a private cloud environment and a managed service provider. Many other hardware vendors quickly adopted a similar definition, hoping to sell their gear into private clouds and managed service providers. In 2014/15 that model evolved to include VMware’s “cross-cloud architecture“, using VMware NSX to interconnect networks between private clouds and a new set of public clouds (e.g. AWS). These models depended on uniformity of technology, typically from a single vendor. They were primarily based on proprietary technology and were not driven by the cloud providers.

Over the last couple years, a few new approaches have emerged.

Kubernetes Federation based on Open Source technologies

When Google open sourced Kubernetes in 2014, it was the first time that one of the major cloud providers made core “cloud” technology available to open communities. Support for Kubernetes has grown at an incredible pace in 2015 and 2016, far surpassing any other open source cloud platform project. And with the v1.5 release of Kubernetes, “Federation” has been enabled to link clusters of Kubernetes resources across clouds. While still very new technology, it has the ability to connect any cloud using open source technologies, built on proven cloud provider technology and enhanced by 1,000+ developers. Beyond Google’s contributions and GKE service, the Kubernetes model has been adopted by enterprise companies such as Red Hat (OpenShift), Microsoft, Intel, IBM, SAP, Huawei, Deis, Apprenda, Apcera, Canonical, VMware and many others.

“AzureStack” – Pushing Azure into the Private Data Center

In early 2015, Microsoft announced the preview of the AzureStack concept. The idea was to bring the breadth of the Azure public cloud services into a customer’s data center, running on servers owned by the customer. This would allow customers to consistently run Azure services in a private data center or in the public Azure cloud. At the time, the Azure team still had to evolve many concepts, including which sets of features (all or partial) would be included. AzureStack has since determined which hardware platforms it will support, and the ship date has been moved to “mid-CY2017“. Given the breadth of open source usage in the Azure public cloud, it will be interesting to see which open source technologies are supported at GA in 2017. It is also an interesting strategic approach to attempt to ship a large set of Azure features as a single-bundled release. This seems more like the legacy Windows approach than the more modern “modular, independent services” approach used in the public cloud.

“Greengrass” – Pushing Lambda into the Private Data Center

For many years, AWS avoided the term “hybrid cloud” like the plague, even in partnerships. They still don’t embrace the concept (in terminology), but they do seem to be coming around to the idea that not every use-case or workload will run in the centralized AWS cloud. I say “centralized AWS cloud” because their 2016 re:Invent announcements introduced a number of services (Snowmobile, Lambda@Edge, Greengrass) that extend the reach of the AWS cloud beyond their data centers. One of those announcements was “AWS Greengrass“. This new service extends the AWS Snowball form factor into a service that can live at a customer’s location for a prolonged period of time, managed by AWS. It includes both storage services and AWS Lambda “serverless” compute services. In stark contrast to Azure’s approach, the AWS approach is much more of a lightweight, MVP (Minimum Viable Product) offering. While serverless computing is still in its infancy, it is beginning to show promise for specific use-cases.

Multiple Approaches, Multiple Customer Choices

These four approaches offer customers a wide variety of choices if they wish to use multiple cloud resources or build “hybrid” services across clouds. Some are based on hardware + software, while others are based solely on software. Some are specific to a vendor or cloud, while others embrace open source software and communities. And some offer different choices about who is responsible for acquiring, managing and operating the cloud services on an on-going basis. Being able to leverage multiple cloud resources (cost, geography, breadth of services) is still a top priority for many CIOs, so it will be interesting to see if these new approaches to hybrid cloud services gain greater traction than the previous incarnations.


December 18, 2016  7:49 PM

Predictions for 2017

Brian Gracely
Amazon, AWS, Cisco, Drones, Hardware, HPE, Jeff Bezos, taxes, VC

Yep, it’s time to write the “predictions” article. Feel free to go back to my previous years’ predictions (2015, 2014) to see if I’m a complete idiot or only partially an idiot. We all know that in 2017 we’ll be using VDI or Linux on our desktops, right after we refuel our flying cars and watch the Super Bowl on our 3D TVs. But maybe some other things will happen too…

[1] – The Trump Administration will pick a fight with Jeff Bezos (and maybe Amazon):

I typically don’t like to discuss politics on this blog, but given the current environment in the US, it’s difficult not to envision the intersection of tech and politics. Looking at the incoming administration’s past actions, it’s not difficult to see President-elect Trump going after Amazon CEO Jeff Bezos for various reasons. The two exchanged words directly during the election over various issues (e.g. Internet taxes), and the Bezos-owned Washington Post has frequently been critical of Trump. If Trump decides to pick fights about the US losing jobs, he could point to things like Amazon Robotic Warehouses, Amazon Go Grocery Stores, Amazon Prime Air Drone Delivery, their evolving Autonomous Delivery Truck fleets, or the decline of IT jobs from AWS. It will be interesting to see how Wall Street reacts to tweets about specific companies once Trump officially becomes president.

[2] The hardware-centric companies will go through significant reorganizations and consolidation.

Regardless of which forecast model you subscribe to, there is mostly consensus around the expectation that selling hardware into corporate data centers will be a more difficult business in 2017 (and beyond). The overall business is essentially flat, depending on the segment, and margins have been dropping for many years. The Dell/EMC acquisition has been at the forefront of this trend, and we’re already seeing the largest companies making moves (Dell/EMC, Cisco, HPE) to be less focused on software, less focused on cloud computing and less focused on business models (e.g. open source) that differ significantly from their core business of selling hardware. Some pundits believe that we’re in for even more consolidation or extinction in the hardware-centric portion of the IT vendor landscape.

[3] AWS will quietly launch a pseudo-VC firm to attract developers instead of letting them go to start-ups.

AWS is well-known for finding inefficiencies (or areas of profitability) in the IT industry and creating a new business offering to capture a portion of that space. With their insight into the usage models of many startups, it wouldn’t be unexpected to see them create a direct incubator or VC-like program for new feature-building organizations. This was partially signaled by Adrian Cockcroft (@adrianco), VP of Cloud Architecture Strategy at AWS and former VC at Battery Ventures.

[4] “Flyover state technologies” become a serious conversation, driving companies to establish themselves in red states.

“Are you willing to relocate to San Francisco?” The question gets asked all the time for high-tech jobs, especially in software-centric industries. This means that there is software and engineering talent outside of Silicon Valley today. But the framework for innovation (VC capital, meetup events, many local companies for job-hopping) is established in Silicon Valley. Over the next few years, it’s highly likely that we’ll see programs and incentives put in place to encourage more innovation to be created and grown in areas outside of Silicon Valley, Seattle, Boston, Boulder, Austin, Raleigh, etc. Will we see a rejuvenation of automotive technologies in the Great Lakes region? Will we see the next great wind-energy company in Kansas or Nebraska? Getting manufacturing jobs to return to the United States will be a complicated economic endeavor (tariffs, tax breaks, deals), but the opportunity to create the next set of technologies may be more realistic. Plenty of areas in the flyover states are looking to boost their local economies and have excellent university systems to draw ideas/research from. But will the other pieces of the needed ecosystem evolve as well?

[5] Creative options will be proposed to repatriate revenue for US tech companies.

American tech companies are holding hundreds of billions of dollars in overseas accounts. They have lobbied for years to try to get a “tax holiday” to repatriate those funds back to the United States. Without guarantees on how the funds would be used to improve the American economy, instead of being used for stock buybacks or executive bonuses, the US government has rejected these demands. But that impasse might be coming to an end, as the US has massive budget deficits and big plans for job-creation programs under the new administration. I expect that we’ll see a program put in place that allows a reasonable tax rate for repatriated funds (~10%) in exchange for the tax proceeds going directly towards those job-creation programs (for example, $200B repatriated at a 10% rate would raise $20B for those programs). Whether or not the program would be successful towards that end goal is TBD, but it would be perceived as a win-win for both the government and tech executives.

