Coffee Talk: Java, News, Stories and Opinions

June 26, 2017  10:26 PM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie

The following is a transcript of an interview between TheServerSide’s Cameron McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him I was sorry that, in this new age of serverless computing, cloud and containers, a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyway, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today runs as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. They can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong, they’ll turn them off. That’s no longer some dramatic thing to do. Doing it in a company-internal data center is a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron, or bare metal, deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. For example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy; you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. With or without us, people running Cassandra on Amazon tend to move to the latest, greatest instances that Amazon offers. I think the i3 class of Amazon instances is the most popular for Cassandra right now.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the RAS language from Azul is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the ones currently running, whether on bare metal or powering most Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes even 2 or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do the stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM could run the exact same code and when you put it on a new hardware, it’s not just the better clock speed and not just slightly faster, it can actually use the instructions to literally run the code better, and you don’t have to change anything to do it.
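
To make that concrete, here is a minimal sketch, ours rather than Azul’s code, of the kind of plain Java loop an auto-vectorizing JIT can compile down to SIMD instructions such as AVX when the hardware supports them. Nothing in the source changes; the JIT decides at run time whether to emit vector code.

    // A plain Java loop of the sort an auto-vectorizing JIT can turn into
    // SIMD code on hardware that supports it. Nothing here is Azul-specific.
    public class SumArrays {

        // Element-wise add: independent iterations, sequential access and no
        // data dependencies make this a classic auto-vectorization candidate.
        static void add(float[] a, float[] b, float[] out) {
            for (int i = 0; i < out.length; i++) {
                out[i] = a[i] + b[i];
            }
        }

        public static void main(String[] args) {
            int n = 1_000_000;
            float[] a = new float[n], b = new float[n], out = new float[n];
            for (int i = 0; i < n; i++) { a[i] = i; b[i] = 2 * i; }

            // Repeated calls let the JIT compile add(); after that, the
            // optimized version is what actually runs.
            for (int run = 0; run < 100; run++) add(a, b, out);
            System.out.println(out[42]); // prints 126.0
        }
    }

The bytecode is identical on old and new hardware; whether that loop becomes vector instructions is the JIT’s decision at run time, which is exactly the leverage Gil describes.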

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you’re thinking about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise to speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, rolling out code continuously, and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted-Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile and learn and only optimize after it runs. And we do it with a technique that’s very simple to explain (it’s not that simple to implement): we save previous run profiles, and we start a run assuming, or learning from, the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run fast code from basically the first transaction, or the tenth transaction, rather than the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.
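
The slow-before-fast curve ReadyNow! targets is easy to observe on any JVM. The following minimal sketch, our illustration rather than anything from Zing, times batches of calls to the same method; the first batch typically runs at interpreted speed, while later batches run much faster once the JIT has compiled the hot method.

    // Demonstrates JIT warm-up: the same method is slow for its first calls
    // (interpreted, then profiled) and fast once it has been compiled.
    public class WarmupDemo {

        static long work(int n) {
            long sum = 0;
            for (int i = 0; i < n; i++) sum += i * 31L % 7;
            return sum;
        }

        // Times a batch of calls and reports the average cost per call.
        static void report(String label, int calls) {
            long start = System.nanoTime();
            long sink = 0; // consume results so the loop isn't optimized away
            for (int i = 0; i < calls; i++) sink += work(10_000);
            long perCall = (System.nanoTime() - start) / calls;
            System.out.println(label + ": ~" + perCall + " ns/call (checksum " + sink + ")");
        }

        public static void main(String[] args) {
            report("first batch ", 1_000);
            report("second batch", 10_000);
            report("third batch ", 10_000); // usually far cheaper per call
        }
    }

ReadyNow!’s pitch, as Gil describes it, is to collapse that curve by seeding the JVM with a previous run’s profile, so compilation decisions are made at startup instead of after thousands of operations.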

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

June 26, 2017  10:04 PM

Maarten Ectors explains what open source wireless means for developers

George Lawton

Open source software has completely transformed the opportunities for software development. Now, Canonical is launching a new effort to bring these principles to programmable software defined radios and industrial equipment. Developers will be able to experiment with different wireless protocols for efficiency, security, or new architectures. It also threatens to disrupt the business model of wireless carriers. TheServerSide caught up with Maarten Ectors, VP of IoT for Canonical, to find out what this revolution means for developers.

TSS: Noticing the way startups like Uber, Netflix and Airbnb are disrupting legacy business models, it is fascinating to consider what the birth of a dramatic new shift like this might mean for traditional telco service and hardware business models. What makes it possible to build a better base station compared to legacy offerings?

Maarten Ectors: The magic lies in commoditization of software defined radios and IT infrastructure in general. Via the LimeSDR any type of wireless protocol can be used to receive and transmit data. The end result is that open source hardware based on standard Intel CPUs and programmable components like FPGAs becomes “good enough”. You no longer need specialized chips which are extremely expensive to develop. It is this “good enough” for service which is changing the economics.

The other element is “software defined” via app stores. If you can move away from a model whereby only one company can program a base station to a world where anybody can, then innovation is multiplied. Base station designers aren’t normally skilled in cloud, content distribution networks, games, and so on. By opening up the base station and simplifying how to deliver software for it, people with completely different skill-sets can help lower costs, reduce time to market and generate new revenues.

TSS: It looks like the initial crowdfunding is sort of experimental and relatively simple compared to larger wireless switches in terms of capacity and throughput. Is it the case that this is better suited for smaller scale deployments and experimentation, and that companies might innovate on top of this to build more cost-efficient carrier grade implementations?

Maarten Ectors: The LimeSDR was a crowdfunding campaign aimed towards developers. The LimeNet crowdfunding campaign is aimed towards anybody who wants to run or experiment with production-ready base stations. The target group is different. We first needed developers to make apps before we can launch a product with an app store. The LimeNet campaign is focused on carrier grade and creating a market for the developers that made apps in round one.

TSS: How might this kind of system make it easier to prototype out different wireless configurations?

Maarten Ectors: We have created two cloud backend stores for apps for base stations. The LimeSDR app store is an open store for developers to show whatever they have prototyped. Any new protocol, any new use case, new idea or new approach can be easily shared with others even before it is a complete product. This allows Lime Micro to understand what is hot. Afterwards, the best community initiatives will be offered to make a production-ready version for the by-invitation-only app store: LimeNet. LimeNet is connected to the production-ready equipment, and both best-of-community and commercial vendors can offer their solutions on it. The aim is to allow telecom operators to quickly launch new solutions and test them in field trials. If they work, they can be scaled out. If they don’t, then they get removed. Both approaches will allow many different ideas to go very fast from idea to prototype to, potentially, production.

TSS: What do you see as some of the best tools and frameworks to simplify the development of this new style of wireless apps?

Maarten Ectors: Lime Microsystems is working with the open source community on packaging tools and frameworks that simplify the development of wireless apps. They have created their own tools to make apps with the LimeSuite and are working with Pothosware, GNU Radio and many others. The tools we see today are just the beginning. We expect many more tools to emerge from the community.

TSS: What are some of the testing challenges in building these wireless apps and some of the best tools and practices to reduce defects and bugs?

Maarten Ectors: The app format, snaps (see snapcraft.io), allows for DevOps for Devices, in which you can easily upload new code to the source repository, get it automatically tested via your CI and have a new version of the snap automatically uploaded to the store on what is called the “Edge Channel.” The channel approach allows developers to create nightly builds and have automated tools verify that they work correctly from a unit-test perspective. Afterwards, a nightly build can be promoted to a beta channel and go through more thorough automated testing. Finally, it can pass via the release candidate channel to the stable channel.

Telecom operators can have equipment in their R&D centers that picks up release candidates and tests them automatically. If these versions work, they can get blessed, and as soon as they become available on the stable channel, the production systems can automatically upgrade in a safe time window. If this upgrade goes wrong, then a rollback is always possible thanks to the transactional rollback capabilities of Ubuntu Core (ubuntu.com/core).

TSS: What kind of FCC and other regulatory concerns do buyers of these programmable devices need to address? In other words, how can you allow programmer flexibility while ensuring that developers and users do not accidentally or purposefully interfere with license holders?

Maarten Ectors: Lime Micro will be working on solutions that make it easier to manage developers of protocols, holders of licensed spectrum, and developers of the apps that use both. If an app uses a protocol for which you don’t have a spectrum license, then in the future the SDR will not allow it. At the same time, we have partners looking into models whereby you can buy spectrum as a service. You just pay for the minutes and the type and location you need.

TSS: How do you expect open source SDR hardware and the ecosystem to evolve over the next 2-3 years, and what does this mean for developers?

Maarten Ectors: We are launching step 2 of a three-step process. Step 1 was about building an ecosystem of developers. Step 2 is about giving the developers a market to sell their apps. Step 3 will be about reducing the cost of an SDR to as close to zero as possible. This will be done by putting the SDR on a chip and, in the future, even including it inside other chips. With this approach, we will go from a $285-$500 SDR to a below-$10 SDR, which will open up the possibility of having an SDR in any wireless device, including smartphones. The future will likely be about each mobile app having the ability to use its own protocol by negotiating this with the software defined radio in the base station. Imagine a world whereby Netflix and YouTube use a custom protocol. This world, whereby wireless protocols are built to solve end users’ and developers’ problems and are no longer a closed-committee, slow-moving paywall exercise, will be the real innovation SDR brings. Wireless innovation at Internet speeds, driven by GitHub and app stores.

TSS: What is the state of adopting similar principles to bring innovation to other value chains like MRI-scanning, car engines, and industrial robotics?

Maarten Ectors: We are starting to App Store define IoT gateways, PLCs (called ALCs, or App Logic Controllers, which control anything from traffic lights and elevators to industrial machinery and robots), vending machines, and other types of telecom and network equipment (e.g., the Facebook Wedge top-of-rack switch). At Mobile World Congress, we will show several other devices with app stores. If they are successful, open source hardware will follow soon after.


June 16, 2017  1:58 AM

Interview with “Practical Guide to Continuous Delivery” author Eberhard Wolff

Cameron McKenzie

A few weeks ago, TheServerSide published a story about the discussion TSS editor Cameron McKenzie (@cameronmcnz) had with Eberhard Wolff (@ewolff) about not only his latest book, but also the various trends and technologies he sees shaking up the world of DevOps, continuous delivery and enterprise software development. The podcast of the interview is presented here again, along with a full transcription of the interview. Enjoy.

An interview with Eberhard Wolff

Cameron McKenzie: Eberhard Wolff is one of the experts that TheServerSide follows on Twitter. A software engineer with going on 20 years of experience, you may have read one of his books on Spring, continuous delivery or microservices, or you may have seen him speaking on one of those topics at a conference in North America or Europe. We noticed that he’s got a new book out, “A Practical Guide to Continuous Delivery,” so we scheduled an interview, and the first question was, “What is the current state of continuous integration and continuous delivery?”

Eberhard Wolff: I’d say that continuous integration is, nowadays, a commodity, up to the point where people have actually forgotten what it really means. They start to use continuous integration just to do regular compiles while, in fact, they’re doing feature branches and postponing the integration. Oftentimes, people are doing feature branches, and that means a team might be doing something for quite a long time and not integrating. They sort of miss the original point of continuous integration. So I think that’s the status we are in: people actually forgot about the original ideas behind continuous integration and are now rediscovering them.

Cameron McKenzie: So when you encounter clients using this feature branch approach, what do you tell them and how do you go about correcting them?

Eberhard Wolff: The first thing I tell them is that if you don’t have a problem, don’t fix it. Feature branches are actually a fine way of doing it. If they do have problems integrating back into the main branch, then I would set up some additional measures like doing regular integrations or maybe even changing it altogether so that you just have the master branch and you do all work on trunk.

Continuous delivery changes the whole thing a little bit, because usually it means that the master branch is special: it’s the one that goes into production. Even if you do feature branches and you branch them off, they are different from the master branch because they are usually not going into production. And that means, I would argue, that with continuous delivery it’s probably not the best idea to combine it with feature branches.

Cameron McKenzie: Now, are there any software tools that you recommend for organizations that are going to take a continuous delivery type of approach to DevOps, or are you one of those advocates that falls into that camp that says tooling is one of the last things you talk about when you go down this path?

Eberhard Wolff: Concourse CI is an interesting tool because it does everything on Docker. Actually, a colleague of mine is quite an advocate of that, so I found that interesting. Generally speaking, I would argue, in particular, if you talk about continuous delivery and if you look at the book, it’s actually all about testing.

So in the book, I talk about unit tests and acceptance tests and performance testing and all these kinds of things. And usually, if I go to a customer and I need to consult them concerning continuous delivery and also probably micro-services, the first thing that I talk to them about is, “What is it really that keeps you from doing much more frequent releases?” Usually there is some manual sign off or some manual acceptance testing, and I think that is something that’s often forgotten. So, you know, just having a Jenkins and doing deployment automation, I think that’s not the key challenge. The key challenge is to go into production without manual sign-off.

Cameron McKenzie: Now, we’re all familiar with unit tests and testing our business logic and, even to a certain extent, integration testing, but what about testing the UI? I mean, how can we make changes to a user interface, to the way that a person interacts with the tool, and actually test and validate those changes? Aren’t there some serious challenges in terms of automating UI testing?

Eberhard Wolff: I would tend to agree that if you talk about UI testing concerning, you know, look and feel and these kinds of things, that’s hard to automate and probably even impossible. Having said that, what I usually see in customers is that they rely on the UI. They are doing UI testing, but I think what they are really doing is acceptance testing. So, you know, it’s not about “does the UI work properly” and it’s also not about “does the UI look nice.” If they do automated UI testing, it’s usually about does the system do what it’s supposed to do, which, in my opinion, is acceptance testing.

The road that usually leads there is that that’s what testers do, right? They start the application, they take a look at the UI, they do stuff in the UI, they validate that the results are correct, and they automate that. So you have a UI test, which is, in fact, really an acceptance test. And I think that’s bad because that way, you get fragile tests because if you do a minor tweak to the UI, it breaks your test even though the business logic might still be the same. And also, those tests tend to be slow and even sometimes unreliable.

So if you ask me about the main challenge concerning UI tests, I would argue that the main challenge is that they become acceptance tests, and you should look at different ways of doing testing in that regard and, you know, you should do proper acceptance testing using other tools. And that is why I really wanted to include that chapter about behavior-driven development, because I think it’s key to understand how to do proper automated acceptance testing and how to try to avoid manual sign-offs from the customer.
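
To illustrate the distinction Wolff draws, here is a minimal sketch of an automated acceptance test written in plain JUnit against the business logic instead of the UI. The OrderService below is a stand-in invented to keep the example self-contained, not code from the book; the point is that the test asserts a business rule, not a screen layout, so a cosmetic UI tweak cannot break it.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.assertEquals;

    // An acceptance test expressed against the business interface rather
    // than the UI: it survives cosmetic UI changes and runs fast.
    class OrderAcceptanceTest {

        // Hypothetical production class, inlined here so the example compiles.
        static class OrderService {
            // Business rule: orders over 200.00 get a 10% discount.
            double totalFor(double unitPrice, int quantity) {
                double gross = unitPrice * quantity;
                return gross > 200.00 ? gross * 0.90 : gross;
            }
        }

        @Test
        void discountIsAppliedToLargeOrders() {
            OrderService service = new OrderService();
            // 100 widgets at 2.50 is 250.00 gross, so the discount applies.
            assertEquals(225.00, service.totalFor(2.50, 100), 0.001);
        }

        @Test
        void smallOrdersPayFullPrice() {
            OrderService service = new OrderService();
            assertEquals(25.00, service.totalFor(2.50, 10), 0.001);
        }
    }

A UI test would exercise the same rule by scripting clicks and scraping the screen, which is exactly the slow, fragile setup Wolff warns about.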

Cameron McKenzie: Now, you clearly have an issue with manual sign-offs, but what about exploratory testing? I believe you talked about it in your book. What’s the role of exploratory testing and the manual aspect of exploratory testing?

Eberhard Wolff: I think it’s fine to do manual testing. I mean, that’s basically what exploratory testing is about, you know, to have a manual look at specific parts of the application that’s causing problems or something like that. So I would argue it is a tool where you can have a specific focus on specific areas of the application that are somehow the problem, you know, from a performance perspective, from a business logic perspective, etc.

It’s something that is going on, you know, to fix problems in specific areas, but it should not be a block on the continuous delivery pipeline, which it clearly would be if it required a sign-off.

Cameron McKenzie: Now, there’s no denying the containerization trend. I’m seeing more and more clients being pressured into using Docker and integrating Docker and containers into their architecture. How have containerization and tools like Docker changed the way that continuous integration and continuous delivery are done?

Eberhard Wolff: Yeah, I think you are pointing at a very important point. Docker allows different ways of doing integration and continuous integration and so on. And I would agree that there will probably be a new generation of tools. Concourse, which I already mentioned, is one of them. So I think that is an interesting point, you know, to really rethink continuous integration, to make the continuous integration server aware of Docker containers, to make the continuous integration server itself run in Docker containers and also, obviously, to create Docker containers. I believe that, how shall I put it, Jenkins is not the greatest tool anyway, right? So I think, for that reason, the additional pressure that you mentioned from the move towards Docker, I sort of welcome it, because I’m looking forward to a new generation of CI tools.

Cameron McKenzie: So what are the things that you would like to see in the next generation of continuous integration and continuous delivery tools?

Eberhard Wolff: A more rigid isolation of the different builds of the different projects. If you look at Jenkins, I would argue that it’s been built with the idea of setting up your continuous integration platform, your continuous integration pipeline in the tool itself and in the web UI. Of course, that has changed now because there are now job description languages. But I would argue that ideally, a modern continuous integration tool should be code-based anyway. So, you know, that there’s a job description for the continuous integration pipeline from the very beginning and it should be built with that in mind. So I think those are the points.

What I do like about Jenkins, of course, is the huge ecosystem and all the plug-ins that you have, which, again, are sort of an issue because it means that no Jenkins server looks like another and it’s hard to get all those plug-ins to work together. So that’s another shortcoming, probably, but, you know, it’s a trade-off, because if it weren’t for those plug-ins, then Jenkins would be much less valuable.

Cameron McKenzie: So I’m assuming that you believe that adopting a DevOps culture is important to making continuous integration and continuous delivery work.

Eberhard Wolff: Oh, yeah, absolutely. So I believe DevOps is all about development and operations working together, which is to say that it’s not about an organizational change, so I think it’s not so important to have dev and ops report into the same managers. What I do think is important is that both share in the responsibility for the application and that there is no finger pointing but true collaboration. So I think that’s the key point. And of course, if you do change the organization, then that might be beneficial for such collaboration.

I think continuous delivery is one thing that you can achieve more easily if you do DevOps. That is to say, if I look at the continuous delivery pipeline, at one end it’s about deploying stuff into production, which is usually an operations thing. And on the other hand, the start of the pipeline, just doing the compilation, building the software, unit testing and so on, that’s more of a dev thing. So if you have the whole pipeline, you can only build it if you have a DevOps organization. So let’s say that continuous delivery is one thing that you can enable by having a DevOps organization. That’s sort of how it fits together, in my opinion.

Cameron McKenzie: Now, one of the reasons we got together today was so we could talk about your latest book, “A Practical Guide to Continuous Delivery.” What is it that you wanted to achieve by writing this book and what is the takeaway that readers will have once they finished reading it?

Eberhard Wolff: What I want to achieve with my book, and that is actually something that also sets it apart from the original “Continuous Delivery” book, is that it is a practical introduction. So I actually talk about specific tools. I give advice about how to use them. I have some examples of how to use the tools. People can learn from that and build their own experiments. So that’s basically how it works, in my opinion.

Cameron McKenzie: Now, if that conversation doesn’t convince you that Eberhard Wolff knows what he’s talking about, I don’t know what will. I strongly suggest that you, like TheServerSide, follow him on Twitter. That’s @ewolff. That’s with two Fs. And more importantly, go pick up his book, “A Practical Guide to Continuous Delivery.” I know you’ll enjoy it.

You can follow Cameron McKenzie on Twitter: @cameronmcnz
You can follow Eberhard Wolff, too: @ewolff


June 12, 2017  11:25 PM

Why Waterfall sometimes wins the Agile versus Waterfall debate

Daisy McCarty

Agile gets all the press, but Waterfall has proven to be a fairly trustworthy approach to software development for a very long time. It’s definitely not going anywhere. In fact, it’s still the preferred methodology for many of the world’s largest enterprises for some very good reasons, so don’t ever think that the Agile versus Waterfall debate has been concluded.

At a recent Agile versus Waterfall debate, four experienced technology professionals dug deep into both approaches. The conclusion at the end of the night was unanimous: It’s imperative to choose a process that works for the organization and for the situation at hand. And sometimes, that’s still Waterfall. Here are five ways to tell if that’s the case in an upcoming project.

#1 The project has to be right the first time

With all its flaws, Waterfall is still the most time-tested method for software development. It is designed to provide maximum control and reduce uncertainty. For situations with clear requirements, regulatory compliance factors, and a strong possibility that failure will mean lots of very bad press, most enterprises will opt for Waterfall in the Agile versus Waterfall debate, just to stay on the safe side. Mark Yarbro, performance test manager at a major financial institution, put it plainly. “When we screw up, it is front page news. It’s got to be right the first time.” Agile may get things done fast, but there’s still a risk of missing the mark. “You’ve got to be doing the right thing, not just doing the wrong things faster.” Cyclic waterfall is often used where speed and control need to be balanced.

#2 Timing and coordination are essential

Mark deals with the issue of synchronizing multiple teams on a daily basis. “We have twelve hundred applications. We release many of them on a ninety day cycle. The way it works is we get everyone together so they are working in lock step. It’s very difficult to release one product when it doesn’t line up with the others. We are doing performance testing on thirty different apps to get them all ready at the same time.” It’s not just the apps, but the platforms and the entire ecosystem that must be upgraded simultaneously. That’s not something that Agile is good at doing, and one reason why Agile is so difficult to scale. “Agile is a great way to get things done if you don’t know what you need, if the customer is not being clear. But it’s much more difficult to get everyone lined up and releasing at the same time.”

#3 Fulfilling the scope is the point of the project

Obviously, it’s ideal to have a workable piece of software at the end of the day, but both Agile and Waterfall have been known to fail in that regard. Waterfall tends to place more emphasis on the process of doing the work according to the plan. Sometimes, that’s just about making the project as profitable as possible for those providing the software development or related services. Enterprise Agile Coach Jay Packlick pulled no punches in shining a light on why Waterfall is a tempting choice when pockets are deep. “If you’re a government contractor getting paid by the number of hours, the answer might be, I’m going to optimize for stuff. What is your customer optimizing on? Is my problem getting solved? To be clear, Agile tends to be biased toward delivering what you need. It’s generally value focused. Waterfall is biased toward delivering a lot of billable stuff.”

Satyapal Chhabra, Founder and Managing Principal at Ideliver, agreed to an extent, but pointed out that well-defined scope can be a good thing. “Waterfall is about the scope. Not because you don’t want to deliver the value, but because the scope is driven by someone who is qualified, who knows what is needed.” Knowledge and expertise are highly valued in Waterfall when it comes to creating the overall plan and setting the course of action. In contrast, “Agile tends to think, ‘We can do it because we have inclusion by the right people.’ That’s where Agile becomes cyclic and goes on forever. Waterfall tends to at least deliver something that you can hand out at the end.”

Of course, the notorious Motorola phone and the original Obamacare marketplace website were both brought up in the debate as examples of technically delivering the scope but not a usable end result. But in a reasonably well-run Waterfall project, the scope would encompass the value and keep the project from getting pulled off course to pursue other objectives.

#4 High level stakeholders don’t like risk

For organizations that are used to Waterfall, making the transition to Agile may prove a bridge too far, making the whole Agile versus Waterfall debate moot. Mark admitted that Agile is a superior methodology in many ways, but it relies heavily on the human factor. “Agile is ugly and messy. Real Scrum is so much fun. It’s a battle royale, muddy and messy, and you get so much accomplished. But it’s not pretty. Management, especially in big organizations, is very intolerant of messy, seemingly out of control processes. The whole organization has to change. People have to think and act differently all the way up the chain, and it’s hard to change human behavior.” In that case, getting Agile off the ground might actually take longer and deliver poorer results than sticking with traditional methods.

#5 The culture doesn’t mesh with Agile

Speaking of people, when teams are made up of less skilled or less self-directed people, or when feedback and collaboration are not highly valued, Waterfall may be the only functional option. Jay cautioned against trying to go Agile in the wrong circumstances. “If I’m working in an organization where I give the boss bad news and get fired, if everyone shuts up when the boss walks into the room, if I’m in a place where people don’t give a crap, they just want to get in, get a paycheck and get out, they don’t want to have a conversation or do Kumbaya, they’re probably not going to succeed with that framework. You have to fit your framework to your problem, cultural or otherwise. Don’t try to shove Cinderella’s slipper on her big fat sister, because it ain’t gonna fit.” To mix one final metaphor, when the Waterfall fits, go with the flow.


June 7, 2017  10:46 PM

Top seven ways to ruin an Agile project

Daisy McCarty

According to a 2013 survey by Ambysoft, projects following the Agile method have about a 64% success rate. That’s higher than the dismal numbers for Waterfall (at less than 50%), but it still means there are an awful lot of projects that should be agile that end up being simply fragile. What happens to make Agile fail? Here are seven of the obstacles that tend to block success, with advice from four experienced professionals in the enterprise software development field.

#1 Waterfall by any other name…

Performance Test Manager Mark Yarbro described what he thinks most software dev teams are doing when they claim to be doing Agile. “In Scrum, you have a bunch of people sitting around in a meeting, picking things off the backlog, starting to give estimates, and the developer’s just making stuff up, and the QA guys are just copying what the developers are saying. After a day or so of that, it’s all story pointed out, so then they all start hammering out code. QA’s sitting there, writing some test cases, but really waiting for the code to be delivered. So a week or two into the sprint, code starts being given to the QA folks, they’re starting to test things, Development is sitting around waiting. They’re planning the next sprint, pulling more things off the backlog and figuring out what they’re going to be working on next. You know what that is? That’s Waterfall. You’re doing four week waterfalls. That’s RAD (Rapid Application Development). That’s what most people are doing when they call it Agile.”

#2 Feedback doesn’t happen when it should

In the perfect world, the end user or true customer would be sitting in from the very first meeting. Sadly, that’s rarely the case. That means there are always assumptions being made. Satyapal Chhabra, Founder and Managing Principal at Ideliver, highlighted this problem. “If you have continuous sprints, when does the real end customer start participating? Probably sprint eighteen or nineteen. That participation is the fundamental premise of Agile. But it doesn’t happen.” Mark agreed, “They should be there from the start.” In his experience, teams end up using proxies to stand in for the customer, with predictable failures in their ability to correctly predict what the customer really wants.

Enterprise Agile Coach Jay Packlick pointed out that, if there is a known lack of information, Agile must adapt to take the missing knowledge into account. Obviously, the later the customer becomes involved, the longer it will take to identify and satisfy their requirements. This is a common cause of cost and budget overruns in Agile. “Here’s where things go wrong. They have a situation where the customer isn’t going to be available until sprint nineteen, but they don’t solve the problem of ‘What am I going to do to get feedback?’ That’s a failure in one of the core principles of Agile which is to change your approach so that you solve your problem. You inspect and adapt. You have to change the system, or they will go on for months and fail. I don’t care what process you’re doing. If it’s not working and you don’t change it, it’s going to fail.”

#3 Agile documentation is almost always atrocious

According to Mark, who worked in Agile shops for many years, it’s easy to get carried away with the rush and bustle of Scrum and not pay attention to sustainability. “In daily standups there aren’t supposed to be any status updates. No ‘I’ve been working on…’ only what you’ve accomplished, what you hope to accomplish, and any impediments to getting that done. Accomplishing things constantly is the fuel that keeps Agile working. Otherwise, it’s a grind. But it’s also used as an excuse not to do any documentation. Enterprise needs to think bigger. The actual development is the tip of the iceberg. The maintenance is what’s under the water.” He described organizational amnesia that results when developers move on and there is no one left who remembers the details of the code. Greater accountability is a must. “Pure Play Agile doesn’t take into account that what gets done isn’t what’s expected, it’s what’s inspected.”

#4 Agile testing often stinks as well

Not only is there a serious shortfall in documentation, but testing throughout the project often suffers from lack of attention. Mark commented again, “Agile needs to have better testing. To make things work, you need QA people who are developers and developers who aren’t afraid of testing.” Quality Management specialist Brian Bernknopf agreed. “You can’t have an Agile SDLC with only Agile development where you’ve got devs working on stories and QA working on releases. Other traditional QA responsibilities may change, but you still need governance. In daily builds, what type of testing is going to happen at check-in? How do you know if those are the right tests?”

Unfortunately, many teams haven’t figured out how to make things work when they go Agile and QA often gets sidelined. That means they need to regain a seat at the table, even if it’s in a different chair. “It’s a changing role. They need to be able to sit inside a Scrum and jump in and solve a problem. In a similar way, Developers need to be able to help write a test script. In an agile project, QA is a role and not necessarily a person. You could have a developer with a QA mindset in that position.”

#5 Key stakeholders become less involved over time

Agile is not a free for all. It requires discipline. Shreyas Batt, Cofounder and CEO of FusePLM, found that out the hard way. His team’s recent project thankfully did not fail, but it certainly took longer than planned because they weren’t consistent about following the rules. In this particular case, working with an offshore development team added logistical and time zone issues to the mix. “We had weekly meetings. But over time, the project manager or Scrum manager was sometimes not available. Then it was just a bunch of devs but no one to guide them and keep them on track on their side. On our side, we weren’t privy to the internal planning. You need the right people on the call.”

#6 Demos become optional instead of mandatory

Failing to prioritize the Agile structure and ensure constant feedback and communication proved to be an error for Batt’s team as well. “We took their word for what was going on. It didn’t give us a good picture. We didn’t want to micromanage, but to have insight into progress. At the start, we would have a demo each week. Later, we didn’t enforce that well enough. They told us what they were doing but didn’t actually deliver the code for testing until eight weeks later. That’s when we realized they didn’t understand the requirements. Agile enforced correctly would have really helped.”

#7 The desire to go Agile is fuzzy

Jay revealed that the first issue he seeks to help resolve with organizations is figuring out what they actually need to change. Most of the time, the people he speaks with aren’t actually certain. “That’s the first problem. They want to do something but aren’t clear about what outcome they really want.” Then comes the challenge of figuring out if changing to an Agile methodology can actually happen. “You have to determine who is the locus of control or the champion. Sometimes, nobody within the organization is a champion, it’s unclear, or the person isn’t acting to lead that change.” The right conditions for change must be met and the right people must have an appropriate level of involvement. Otherwise, trying to do an Agile project in a traditionally Waterfall culture simply won’t work—and the project is ruined before it even begins.


May 25, 2017  1:49 AM

A concise definition of cloud-native computing and development

Cameron McKenzie

We recently published an interview in which Ken Owens, Cisco Systems’ chief technology officer of Cloud Platforms, provided a very concise definition of cloud-native computing that pulled together the concepts of DevOps, containers, microservices and modern software development. We then had the audio for that podcast transcribed, as we thought it was worthwhile having it reproduced in text for those who aren’t particularly in love with audio files. The following is the audio podcast, with the transcription of Owens’ definition of cloud-native computing below.

Cameron McKenzie: If you’ve been following TheServerSide’s Twitter feed lately, you’d know that we’ve been doing a boatload of articles on cloud-native computing. Now, in our quixotic quest to nail down a good definition of exactly what cloud-native means, we’ve been given the names of a number of experts in the field, one of whom is Cisco’s Kenneth Owens.

Now, it’s actually been incredibly difficult trying to nail down a precise definition of what cloud-native computing means. To the purist, it’s simply the deployment of a microservice into a managed and orchestrated container. To others, you’re not doing cloud-native if you’re not Agile and doing DevOps. There really is no expert consensus on the term. Having said that, Ken Owens gave us the best definition of cloud-native computing that we have heard to date. In his definition, he pulls together the idea of cloud-native, he pulls together the idea of microservices, he pulls together Docker, and he then discusses how all of that rounds back upon itself to incorporate things like automation and DevOps. It’s like he’s provided a unified theory of cloud-native computing. And what’s even more amazing, he does it in less than four minutes. So here’s Ken’s response when we asked him to define the term cloud-native. I think you’ll find it’s the best definition of cloud-native computing that’s around.

Ken Owens: I kind of have two ways to answer that question. Sorry for that, but as you know, I’m on the TOC (Technical Oversight Committee) for the CNCF (Cloud Native Computing Foundation) and the way that we as a community have defined cloud-native is container-packaged, dynamically orchestrated (and I always add ‘managed’, so it’s kind of managed-systems-architecture type of approach, which is where Kubernetes comes in), and microservices-architected. And that microservices-architected part is where all the magic happens, if you will.

So I like to then sort of further define microservices-architected. I think that gets to the roots of your question as to why there are two different opinions about how cloud-native is defined.

So when you think about microservices-architected, there’s a wide range of, I guess you could say opinions, about what that means. But I believe the patterns or the common sort of view that most everyone in the space has is that it is about completely automating your software, a focus on a small set of interfaces that you define as services, like a containerized service if you will, and then looking at how to consume and expose services out of your interfaces.

And so it’s really a mind-shift in how software development occurs because now the application developer kind of owns all of it. In some cases you can say that they’re still relying on the infrastructure team to provide them some infrastructure, but for the most part, they kind of own the life-cycle of the development, the deployment, and the ongoing integration and deployment aspects of that service for the rest of the life of that service. Some people call it DevOps, some people call it, you know, 12-Factor application architecture, but in effect, the developer kind of owns the life-cycle, end to end, of the services they create, whereas in the past there was an infrastructure team that they would pass some things to and there was the operations team that they would pass some aspects to. Now, all of those three teams are sort of being merged together into a cloud-native team.
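
To ground the “small set of interfaces that you define as services” idea, here is a deliberately tiny sketch, ours rather than Cisco or CNCF reference code, of a single-endpoint Java service of the kind that gets container-packaged and handed to an orchestrator. It uses only the JDK’s built-in HTTP server.

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    // One small interface exposed as a service: the unit that gets
    // container-packaged and dynamically orchestrated.
    public class PriceService {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

            // Health endpoint of the sort an orchestrator like Kubernetes probes.
            server.createContext("/health", exchange -> {
                byte[] body = "ok".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });

            // The service's one business endpoint; a fixed response stands in
            // for real logic.
            server.createContext("/price", exchange -> {
                byte[] body = "{\"sku\":\"widget\",\"price\":2.50}".getBytes(StandardCharsets.UTF_8);
                exchange.getResponseHeaders().add("Content-Type", "application/json");
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
            });

            server.start();
        }
    }

Container-package a class like that, let something like Kubernetes orchestrate it, and the team that wrote it owns its build, deployment and ongoing operation, which is the end-to-end lifecycle shift Owens describes.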

Cameron McKenzie: So there you go, a concise definition of cloud-native computing that pulls together containers, microservices, the cloud, automation, and DevOps. It’s a pretty impressive feat.

You can follow Ken Owens on Twitter: @kenowens12
You can follow Cameron McKenzie too: @cameronmcnz

May 19, 2017  2:12 PM

“AI First” the mantra for Google I/O 2017

Barry Burd

Google’s annual worldwide developer conference (Google I/O) kicked off at the Shoreline Amphitheatre in Mountain View, California on Wednesday morning. Seven thousand people are attending live, and others are viewing the event online at 400 Google I/O Extended events in 85 countries.

This year’s mantra is “AI first,” integrating machine learning with each of Google’s software and hardware products. The keynote was a rapid-fire delivery of announcements:

  • A new initiative named Google Lens integrates visual recognition with user help. Point your phone’s camera at a label containing a network password, and Google Assistant enters the password automatically and connects you to the network. Point the camera at a phone number, and Google Assistant dials that number. Point at a concert advertisement, and Google Assistant plays a sample of the band’s music and offers to book concert tickets.
  • Developing artificially intelligent apps requires two phases — a training phase and an inference phase. In the training phase, the software learns about the problem domain. In the inference phase, the software applies the learning to new situations.
    The training phase is computationally intensive. To address this issue, Google announced its new Cloud TPU, which is available on the Google Compute Engine immediately. The Cloud TPU hardware is optimized for both training and inference, and can deliver a whopping 180 teraflops of computing power. Developers can visit http://g.co/tpusignup to sign up.
  • Google announced its new Google.ai initiative to coordinate AI efforts and teams. The initiative has three parts: research, tools, and applied AI. The research part includes AutoML, in which neural nets design other neural nets. This task is computationally challenging, and Cloud TPU is making it possible.
  • Starting immediately, Google Assistant will accept commands that are typed or tapped as well as spoken. Typing is advantageous because, in public venues, people may not want to speak commands. Typing, tapping and speaking to Google Assistant are all integrated, so an interaction with the Assistant may use all three interaction modes.
  • Google Assistant is now available on the iPhone!
  • Google Assistant is now available in French, German and several other languages. Google Home will launch in Canada, Australia, France, Germany and Japan.
  • Effective immediately, Actions on Google handles purchase transactions. With voice interaction and fingerprint scan, you can use Google Pay. You don’t have to enter an address or credit card number.
  • A new feature in Google Home is called Proactive Assistance. Here’s how it works: Google Home knows about an upcoming event on your calendar, knows where the event takes place, and calculates the travel time to the event given the current traffic conditions. When you say “What’s up?” to Google Home, the device reminds you that it’s time to leave for the upcoming event.
  • In the next few months, Google Home will make no-cost, hands-free calls to any landline within the United States or Canada. Google Home recognizes up to six different voices in a household. So if you say “Call Mom,” the device determines which member of the household is making the request, and calls that person’s mother.
  • Spotify will offer free music service to Google Home.
  • Google Home will have Bluetooth support, so you’ll be able to play music from any Bluetooth enabled device on the Google Home speaker.
  • In addition to its voice responses, Google Home will display information on your phone’s screen and, through Chromecast, on your TV.
  • Google Photos will have three new features. With Suggested Sharing, Photos identifies the people in your images and offers to share the images with those people. With Shared Libraries, Photos automatically shares images with certain characteristics to people you select. With Photo Books, you can purchase a hard copy of your best images based on criteria that you specify.
  • In the next few weeks, YouTube will provide 360 degree video on your Android TV. You’ll issue voice commands to request a certain video. You’ll use your remote to move from side to side within the video scene. Live 360 content will be available.
  • Earlier this year, YouTube launched Super Chat where users pay to pin comments on live streams. Users can now trigger physical actions using Super Chat. During the keynote, users paid to drench two fellows known as the Slow Mo Guys with 500 water balloons. All proceeds went to charitable causes.
  • TensorFlow is Google’s machine intelligence software library. With the newly created TensorFlow Lite version of that library, developers can add deep learning capabilities to apps that run on small, mobile devices. Smartphones will become even smarter.
  • Samsung’s Galaxy S8 and S8+ will add virtual reality features using Google Daydream.
  • HTC and Lenovo will use Google Daydream in their standalone VR headsets. All the processing power will be in the headsets. You’ll experience virtual reality without having to attach a cable or a smartphone.
  • The new Android Go initiative optimizes the Android system to run on entry-level phones. In this context, an entry-level phone is one with between half a gigabyte and one gigabyte of memory. As one part of this initiative, a Data Saver feature economizes on the use of network resources by compressing the data that’s being sent. Another part, named YouTube Go Offline Sharing, saves videos for viewing when the network isn’t available.
  • With Google Expeditions, students experience things that they can’t ordinarily experience, all from the safety and comfort of their own classrooms. Students move around a room while they look at a tablet device’s screen. The tablet shows anything from the terrain of a faraway land to a view from inside the human body.
    Later this year, the Expeditions platform will add augmented reality to its repertoire. The tablet’s display will be able to superimpose virtual images onto real objects in the room.
  • A new Android App Directory helps users discover new apps. Users can try an app before buying it.
  • Google’s Instant Apps API is now available to all Android developers.
  • Many Firebase SDKs will soon be made open-source.
  • The new Play Console Dashboards summarize app diagnostics to help developers analyze and improve their apps. In addition, a developer can add Firebase Performance Monitoring to an app with only one line of code.
  • For enhanced security, Firebase will include phone number authentication.

Beta availability of Android O

I write books about Android development, so for me, the most interesting announcement was the availability of a beta for the next Android version – codenamed Android O. In this version of Android, developers will be able to write code in the Kotlin programming language. This is a big deal for developers because it’s a departure from Android’s long-standing Java-only tradition.

Kotlin is completely interoperable with Java, so existing Java code will work without modification. New apps can be built using Java, using Kotlin, or using any combination of the two languages. JetBrains (the company that created Kotlin) will work alongside Google to help the language evolve as a language for mobile platform development. Best of all, Kotlin is available immediately in the new Android Studio 3.0.
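
To give a taste of what that interoperability looks like, here is a minimal sketch of plain old Java code calling into Kotlin. The Greeter class and everything else in it are hypothetical names of my own, not anything Google shipped:

    // Greeter.kt -- a hypothetical Kotlin class compiled into the same project:
    //
    //     class Greeter(val name: String) {
    //         fun greet() = "Hello, $name!"
    //     }
    //
    // Kotlin compiles to ordinary JVM bytecode, so Java code can use the
    // class as if it had been written in Java.
    public class Main {
        public static void main(String[] args) {
            Greeter greeter = new Greeter("Android O"); // Kotlin constructor
            System.out.println(greeter.greet());        // Kotlin function
            System.out.println(greeter.getName());      // the Kotlin 'val name' surfaces as a getter
        }
    }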


May 10, 2017  3:46 PM

Java modularity’s future takes a hit as Project Jigsaw (JPMS) is voted down

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Can you believe all of this drama surrounding Project Jigsaw and the Java modularity debate? I was so thankful yesterday when President Donald Trump fired the director of the FBI, drowning out all of the Java Jigsaw finger-pointing tweets and usurping them with delightful, 140-character opinion pieces on American politics.

In the vote on JSR-376, the Java Platform Module System (JPMS), thirteen JCP members voted ‘no’ while ten voted ‘yes.’ Unlike US elections, the Java Community Process (JCP) does not employ an ‘electoral college’ system, so votes are won or lost using an archaic ‘majority wins’ type of system.

JPMS Java modularity JSR-376 is voted down.

Java Jigsaw’s missing pieces

Part of the Project Jigsaw melodrama came from the fact that both IBM and Red Hat announced, before the JCP vote on Java modularity, that they were not going to support JPMS, which is a bit of a break from traditional decorum. JCP members don’t typically announce their intentions before a vote happens. Having said that, very few JCP projects are as contentious as Java’s Jigsaw.

“There is still work required to bring the community closer to an agreement on the proposed standard,” said IBM’s Tim Ellison in an April 28th community email reply to Mark Reinhold, the JSR-376 specification lead and the highly respected Chief Architect of the Java Platform Group at Oracle. “IBM is also voting ‘no’ which reflects our position that the JSR is not ready at this time to move beyond the Public Review stage and proceed to Proposed Final Draft.” The word ‘also’ in Ellison’s quote refers to Red Hat’s prior announcement that they were not satisfied with the way the Java modularity puzzle was coming together.

“Jigsaw’s implementation will eventually require millions of users and authors in the Java ecosystem to face major changes to their applications and libraries, especially if they deal with services, class loading, or reflection in any way,” read Scott Stark’s April 14th bloodletting on what Red Hat perceived as some of the Java Platform Module System’s shortcomings.

All of this public consternation resulted in an impassioned plea from Reinhold to move the Java modularity project forward, despite some of the existing hesitation. “What we have now does not solve every practical modularity-related problem that developers face, but it meets the agreed goals and requirements and is a solid foundation for future work,” said Reinhold. “It is time to ship what we have, see what we learn, and iteratively improve. Let not the perfect be the enemy of the good.”
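
For anyone who hasn’t dug into Jigsaw yet, the artifact at the center of all this drama is the module descriptor, module-info.java. Here’s a minimal sketch, using a hypothetical module name of my own:

    // module-info.java -- a minimal sketch of a JPMS module descriptor.
    // The module and package names here are hypothetical.
    module com.example.inventory {
        requires java.sql;                  // depend on the java.sql platform module
        exports com.example.inventory.api;  // only this package is visible to other modules
    }

Everything that isn’t explicitly exported is strongly encapsulated, which is precisely the behavior that worried Red Hat, given how many existing libraries lean on class loading and reflection.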

Ulterior motives and Project Jigsaw

Seldom spoken in these public discussions is the fact that there are often very private motives behind the JCP wranglings of the big players. Red Hat has its own open source modularity project, JBoss Modules, which it uses in its WildFly server and which has always competed with Jigsaw. And IBM’s WebSphere has a long history of supporting OSGi. Who knows how these private interests compete with a company’s public ones?

Falling into the ‘no good deed goes unpunished’ category, many of the JCP committee members who voted in favor of Java modularity found themselves in the unusual position of having to defend the fact that they wanted to move Project Jigsaw forward. Azul’s CTO Gil Tene took to Twitter to defend his company’s ‘yes’ vote. “Can a better module system be built? Yes. So can a much better generics systems. And a better way to do Lambda expressions. And…(sic),” tweeted Tene. “Some will adopt JPMS as it is. Many others won’t until it gets better. And that’s OK.” Personally, I’m kicking myself a little bit. I had Tene on the phone a few weeks ago talking about Azul’s Falcon LLVM compiler, and I should have asked him more about the JCP vote. We did speak about Project Jigsaw, but more in terms of JVM performance and improved startup times than the upcoming vote.

Gil Tene defends Azul’s ‘yes’ vote on JPMS.

Java modularity and the OSGi spec

A number of years ago, way back in 2011 to be exact, Peter Kriens was OSGi’s Technical Director, and he penned a few interesting articles about implementing modular systems in Java, many of which sparked intense debate in the TSS forums. We spoke a few times about Project Jigsaw. Those conversations took place too far back for me to quote him accurately on anything, but the impression he always gave me was this: tackling the classloader issue in Java is an incredibly complicated task, people who try to implement modularity in Java will run into far more unanticipated complications than they could ever have imagined, and the purveyors of Project Jigsaw were just a little bit naive about how easy or hard it would be to build a system of modularity right inside the JDK. Six years later, it would appear that many of Kriens’ concerns have been borne out.

Classloaders are a mess in Java. Their existence is understandable given the evolution of the JDK, but what was fine back in 1996 is a bit of an embarrassment as we do software development in 2017. I’ve got complete faith in a guy like Mark Reinhold to get the Java Platform Module System back on track. Hopefully that will happen sooner rather than later.

You can follow Cameron McKenzie on Twitter: @cameronmcnz



May 9, 2017  6:12 PM

IBM’s Watson is a joke, and Oracle won’t be ‘winning’ for long

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Hedge fund manager Kyle Bass used to be my favorite industry analyst, but after watching a short, fire-breathing, three-minute clip from CNBC’s Closing Bell, I think Social Capital CEO Chamath Palihapitiya may have wrested away Bass’ title.

“IBM’s Watson is a joke, just to be completely honest,” said Palihapitiya when asked about IBM’s foray into the world of artificial intelligence and machine learning. He asserted that while Big Blue has done a great job marketing the Jeopardy champion, the fact of the matter is that Google and Amazon have done a far better job amassing reams of big data, processing it on their systems and fundamentally understanding it, which, after all, is the whole point of artificial intelligence.

IBM no longer an innovator

Granted, the Bay Area-based search engine giant has never won a game show, but there are indeed other, less scientific metrics that can be used to evaluate AI systems. “Companies advancing machine learning and AI don’t brand it with some nominally specious name, naming it after a Sherlock Holmes character,” said a laughing Palihapitiya. CNBC’s Brutus then stuck the knife deeper into Endicott’s Caesar, saying, “IBM is a services business. They aren’t building anything. They aren’t innovating.”

For those who love to hate Oracle, the stewards of the JDK weren’t spared the commentator’s scorn either. “Oracle is not a business you can short today, but it is also not a business that is going to win tomorrow,” said Palihapitiya. In the short term, the assertion is that companies like Oracle will keep the coffers full simply by squeezing income out of their existing customers. “It has an unbelievable sales and marketing machinery that will figure out how to tax its existing customers in umpteen numbers of Byzantine ways.”

Profiling the IBM customer

According to Palihapitiya, the problem lies in the fact that the marketing machines of companies like IBM and Oracle are more intelligent and organized than the clients who have to choose between them. “What IBM is excellent at is using their sales and marketing to convince people who have asymmetrically less knowledge to buy something,” said Palihapitiya. “Can you fundamentally be long these two businesses over the next decade? I think the answer is no.”

It’s a short little three-minute video, but the guy simply rains brimstone down on Oracle and IBM. If you like that sort of thing, it’s worth a watch. If not, feel free to go search ‘cat videos’ on YouTube.

IBM’s Watson ‘is a joke,’ says Social Capital CEO Palihapitiya

You can follow Chamath Palihapitiya on Twitter: @chamath
You can follow Cameron McKenzie too: @cameronmcnz



May 8, 2017  3:10 PM

The 12-Factor App is cloud-native development for dummies

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Yegor Bugayenko wrote an amusing blog post the other day entitled “SOLID is OOP for Dummies.” Well, if SOLID is OOP for dummies, I wonder if he’d agree with my assertion that the 12-factor app mantra is cloud-native development for dummies?

I enjoyed Bugayenko’s article, although it seems like he took quite a bit of flak in the comments for it. But I completely agree with his premise. To me, telling software developers that their apps should follow SOLID principles is like telling a marathon runner that the best strategy for moving forward is to cyclically move one leg in front of the other. Sure, the statement is true, but does something so completely self-evident in its nature actually count as advice?

Revisiting the SOLID principles

Quite quickly, the SOLID principles are as follows (a quick sketch of the ‘D’ appears after the list):

· Single responsibility: a class should have only one reason to change (S)
· Open/closed: be open for extension but closed for modification (O)
· Liskov substitution: subtypes must be substitutable for their base types (L)
· Interface segregation: prefer small, client-specific interfaces (I)
· Dependency inversion: depend on abstractions, not on concretions (D)
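
To show just how self-evident these tenets are, here’s the ‘D’ as a minimal sketch in Java. All of the class names are hypothetical: the high-level OrderService depends on an abstraction, and the concrete store gets injected.

    // A minimal sketch of the dependency inversion principle: the high-level
    // policy class depends on an interface, never on a concrete implementation.
    interface OrderStore {
        void save(String orderId);
    }

    class JdbcOrderStore implements OrderStore {
        @Override
        public void save(String orderId) {
            System.out.println("Saving " + orderId + " to a relational database");
        }
    }

    class OrderService {
        private final OrderStore store;

        OrderService(OrderStore store) { // the concrete store is injected
            this.store = store;
        }

        void placeOrder(String orderId) {
            store.save(orderId); // OrderService never names a concrete class
        }
    }

    public class SolidDemo {
        public static void main(String[] args) {
            new OrderService(new JdbcOrderStore()).placeOrder("A-1001");
        }
    }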

So I will see Bugayenko’s criticism of the five tenets of SOLID and raise him a similar criticism of the cloud-native world’s 12-factor app. (By the way, if you’re not familiar with all of the latest catchphrases, Ken Owens provides a great definition of cloud-native in the article Tying Agile, DevOps, 12-factor apps and cloud native computing together.)

Revisiting the 12-Factor App

For the uninitiated, these are the dozen tenets of cloud-native computing’s 12-factor app (a quick sketch of a couple of them follows the list):

1. Codebase: One codebase tracked in revision control, many deploys
2. Dependencies: Explicitly declare and isolate dependencies
3. Config: Store config in the environment
4. Backing services: Treat backing services as attached resources
5. Build, release, run: Strictly separate build and run stages
6. Processes: Execute the app as one or more stateless processes
7. Port binding: Export services via port binding
8. Concurrency: Scale out via the process model
9. Disposability: Maximize robustness with fast startup and graceful shutdown
10. Dev/prod parity: Keep development, staging, and production as similar as possible
11. Logs: Treat logs as event streams
12. Admin processes: Run admin/management tasks as one-off processes
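
To be fair, a couple of the factors do translate directly into code. Here’s a minimal sketch of factors three and seven in plain Java, using the JDK’s built-in HttpServer. The names are mine, not the manifesto’s:

    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // A minimal sketch of factor 3 (store config in the environment) and
    // factor 7 (export services via port binding).
    public class TwelveFactorSketch {
        public static void main(String[] args) throws IOException {
            // Factor 3: the port comes from the environment, not from the code
            int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));

            // Factor 7: the app is self-contained and exports HTTP on a bound port
            HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
            server.createContext("/", exchange -> {
                byte[] body = "Hello from a twelve-factor-ish app\n".getBytes();
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream os = exchange.getResponseBody()) {
                    os.write(body);
                }
            });
            server.start();
        }
    }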

Self-evident truths

Seriously, do we really need to tell software developers to keep production, dev and staging environments as similar as possible, as app factor ten, dev/prod parity, instructs? Honestly, I can’t ever remember working on a project where the team said ‘hey, let’s make DEV and PROD completely different. Like, let’s use MongoDB in DEV, and DB2 in production.’

App factor one, using one codebase tracked in revision control, hardly seems like a revolutionary concept either, nor does it seem like a principle that any rational, cloud-native software development team would violate. Maybe if they were using MSD, the Masochistic Software Development methodology, they might spread their code across Git, CVS, ClearCase and PVCS, but I can’t see anyone who wasn’t a masochist doing so.

Factor nine is outright comical. Has anyone ever actually sat down and written a user story or non-functional requirement describing how they wanted the application to take a long time to load, and how they wanted general havoc to ensue when the application gets shut down? The 12-factor app’s factor nine, the disposability principle of maximizing robustness with fast startups and graceful shutdowns, would imply that some software development teams weren’t aware that extended start-up times and hanging threads at shutdown were a bad thing.
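
In practice, the whole of factor nine boils down to something like this minimal sketch of a JVM shutdown hook; the cleanup work here is hypothetical:

    // A minimal sketch of factor 9's graceful shutdown: register a JVM
    // shutdown hook so in-flight work is drained before the process exits.
    public class DisposableWorker {
        public static void main(String[] args) throws InterruptedException {
            Runtime.getRuntime().addShutdownHook(new Thread(() -> {
                // Hypothetical cleanup: close connections, flush queues, etc.
                System.out.println("Draining in-flight work before exit...");
            }));

            System.out.println("Working. Send SIGTERM (or hit Ctrl+C) to shut down.");
            Thread.currentThread().join(); // block forever; the hook runs at shutdown
        }
    }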

Breaking the unbreakable

Quite honestly, there are tenets that I don’t even know how to violate. App factor four says backing services should be treated as attached resources. I don’t even know how I would write a cloud-native app that didn’t treat backing services as attached resources. Isn’t the statement tautological? By definition, a backing service is a resource that your cloud-native application attaches to.

Maybe it’s because I’ve been developing on Spring and the Java EE platform for the last twenty years that some of these points seem rather superfluous. App factor three instructs cloud-native developers to store configuration details in the environment and not as a set of constants or if-then-else statements peppered into a variety of different classes throughout the code base. I honestly can’t see an experienced professional adding Java code that brackets every class with conditional statements that change the runtime behavior based on which environment is currently hosting the code. Furthermore, storing configuration outside of the application and abstracting away dependencies has always been a basic principle of Spring and Java EE. It’s exactly why resource references and JNDI bindings exist.
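
For the record, here’s a minimal sketch of that long-standing approach: a JNDI resource reference with a hypothetical binding name. The application server maps the logical name to a real database in each environment, so the code never changes:

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    // A minimal sketch of externalized configuration via JNDI: the code names
    // a logical resource; the application server binds the actual connection
    // details per environment. The JNDI name below is hypothetical.
    public class OrderRepository {
        public DataSource lookupDataSource() throws NamingException {
            InitialContext ctx = new InitialContext();
            // DEV, STAGING and PROD each bind their own database behind this name
            return (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDB");
        }
    }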

Half-baked ideas

And if it’s not Spring or Java EE enforcing these best practices, it’s the application server itself doing so. App factor eleven states that logs should be captured by the execution environment and collated with all of the other logging streams used by the cloud-native app. I honestly can’t remember a time when WebSphere didn’t do that. Maybe IBM can swallow up all of the little players in the cloud-native computing industry, create a new, cloud-native application server and show the industry how logging is done? That type of functionality has always been baked right into the application server runtime, even if you’re just using System.out calls instead of a proper logging framework like slf4j.
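
And if you do want the proper logging framework version, here’s a minimal sketch with slf4j, assuming a binding such as Logback is on the classpath; the class and messages are hypothetical:

    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // A minimal sketch of factor 11: the app emits log events through its
    // logger (typically routed to stdout) and lets the execution environment
    // collect and collate the stream.
    public class CheckoutService {
        private static final Logger log = LoggerFactory.getLogger(CheckoutService.class);

        public void checkout(String cartId) {
            log.info("checkout started cartId={}", cartId); // one event per line
            // ... business logic ...
            log.info("checkout completed cartId={}", cartId);
        }
    }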

The 12-factor app does provide some food for thought, with most of what’s intellectually edible coming from the insistence that using a process, as opposed to threading, is the best way to scale, although I would assert that this is really just a semantic argument, as opposed to one grounded in practice. While it’s true that traditional Java application servers ran one process with many threads, Java microservices tend to be single processed, and even within a microservice, there will be multiple functions that leverage threading, so it’s not really an either-or type of thing. If anything, factors six and eight, executing apps as processes and scaling out using the process model, really come down to the assertion that monoliths are bad, and breaking up a monolith into smaller pieces is good. I think we’ve all heard that mantra being chanted enough lately. Yes, we get it.

Facebook shares and Internet memes

Each maxim of the 12-factor app credo reads to me like something that you’d put on one of those annoying, inspirational posters you find hanging in offices across the country. I can almost visualize a poster of an Italian sports car with the word DISPOSABILITY at the top, and the phrase “Maximize robustness with fast startup and graceful shutdowns” at the bottom. We do live in an age where in order to be consumed, every message must be delivered as a meme, or as something that can be shared as a Facebook post or delivered in a 140 character tweet. Maybe it’s the new developer, the social-media millennial, who the 12-factor app mantra is catering to?

The 12-factor app as an Internet meme.

Telling me to eat less and exercise more in order to lose weight is pretty useless advice. Telling me to pay my taxes on time to avoid interest and penalties is useless advice too. I mean, the statements are true, but it’s nothing I don’t already know, which makes repeating them a waste of time. You can frame these statements as advice, but they really aren’t advice if they provide nothing new, nothing of added value and nothing that’s particularly actionable. I can’t help but feel as though the entire discussion about building a 12-factor app falls into the same category. I wonder if Yegor Bugayenko would agree with me?

Here’s Bugayenko’s article: SOLID Is OOP for Dummies

You can follow Yegor Bugayenko on Twitter: @yegor256
You can follow Cameron McKenzie too: @cameronmcnz



