Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes an LLVM-based just-in-time compiler code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that in this new age of serverless computing, cloud and containers, and a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyways, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care, because we have a bunch of people running Zing on Amazon. So where the hardware comes from, and whether it’s a cloud environment, a public cloud, a private cloud, a hybrid cloud or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less; it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong, they’ll turn them off. That’s no longer some dramatic thing to do. Doing it in a company’s internal data center is a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results on consistency and latency and everything else from people running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy; you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. We tend to see that people running Cassandra on Amazon, with or without us, move to the latest, greatest instances that Amazon offers. I think the i3 class of Amazon instances is the most popular for Cassandra right now.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not by the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones that run, whether on bare metal or on most of the Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50% or 100%, or sometimes even 2 or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions; it’s that LLVM does that for us without us having to work hard.
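
To make that concrete, here is a minimal sketch (not Azul code; the class and method names are invented) of the kind of loop an auto-vectorizing JIT can target:

    // A plain Java loop of the kind an auto-vectorizing JIT can compile
    // down to SIMD instructions on hardware that supports them.
    public class VectorLoop {
        static void scale(float[] src, float[] dst, float factor) {
            // Each iteration is independent, which is what lets the
            // compiler process several elements per vector instruction.
            for (int i = 0; i < src.length; i++) {
                dst[i] = src[i] * factor;
            }
        }
    }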

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do this stuff. And we just need to bring the code to the right form, and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code, and when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise in speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or whether it’s when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speeds before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile and learn and only optimize after it runs. And we do it with a technique that’s very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run fast code from the first or the tenth transaction, rather than from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on the code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

December 13, 2017  12:04 AM

Is there a hidden threat embedded in the Management Engine of your Intel chip?

George Lawton

A couple of years ago, Intel invited me to a press luncheon to talk about how great their new chips were. They had new chips that were faster and used less power, and they were selling like hot cakes. The food was good and the new machines were smaller and ran a few minutes longer on batteries than last year’s models. Almost in passing I heard one of their product managers describing a secret operating system buried on enterprise computers, called the Management Engine (ME). They called it a feature, and all I could see was a hidden threat.

They said it only ran on “enterprise computers,” and I remember sleeping a little better at night imagining that this little gremlin did not run inside my consumer laptop at the time. I just found out they have a new test for this hidden threat that can determine if your computer is infested with this incurable disease. Yep, I have it. You probably have it too, along with most of the cloud servers keeping trillions of dollars of enterprise apps secure.

They have also released a so-called cure for the symptoms, which is thus far only available from Lenovo. But it is not really a cure in the way an antibiotic eradicates an infection. It’s more like those $50,000-a-year cocktails that manage AIDS but leave the host at risk of transmitting it to others. The fundamental problem is that Intel has thus far not shared much about how this hidden threat works, or whether it can in fact be eradicated. They have just patched some of the vulnerabilities, which thus far are probably not a great danger to cloud apps, since someone must physically insert a USB drive to compromise them.

All systems are vulnerable

The fundamental problem in other words is not the news that someone found a vulnerability and patched it. The problem is that Intel has relied on a very flawed theory that something running on virtually every enterprise and cloud server out there is protected because no one outside of Intel knows how it works. This was the same theory that the utility industry relied upon until the US and Israel figured out how Stuxnet could be used to take out the Iranian nuclear program and perhaps an Iranian power plant. But once this attack was shared, all the power infrastructure in the world became vulnerable to Stuxnet’s progeny.

I am sure Intel’s greatest minds did a great job of identifying and mitigating every vulnerability they could dream up at the time. So did the folks that developed SSL, and none of the craftiest minds in the security industry recognized that hidden threat until after the code had been in the public domain for two years.

One of the key developments over the last couple of years has been a move towards DevSecOps, which assumes that all code has vulnerabilities; it’s just that no one has figured out how to exploit them at the time of deployment. Therefore, a mechanism must be in place to quickly and automatically find and update these systems smoothly when a new patch is required. DevSecOps breaks down when it relies on third parties like Lenovo, Dell, and HP to tune the update to their particular configurations.

It’s not clear how bad this whole episode will end up being for Intel. Thus far, they have done a pretty good PR job of suggesting that these attacks requiring physical access are not a big deal. This whole thing might blow over by the time they release a new series of chips that leave the little demon out.

The keys to the hidden threat

But then again, the final impact of Intel’s foray into security by obscurity will have to get past the test of the NSA and Joe. The NSA, because it seems credible that Intel decided it was important to share such important details with them to protect American cybersecurity. We all know that the NSA has the best resources and commitment to protecting these secrets from foreign states, angry contractors, and WikiLeaks, so they obviously will never let the secret get out.

No, the real threat is probably someone like Joe. ME runs in a kind of always-on mode that allows it to communicate on a network even when the power is off, as long as the computer is plugged in. It is protected by an encryption key. I would like to imagine that the only key to all the Intel computers in the world is locked inside a secret vault, with laser beams protecting it from Mission Impossible-style attacks.

It would not be surprising if the reality was much more mundane. It’s probably on a little security token that Joe took home one day to debug a few components of the ME server. Joe is probably well-meaning, but made a copy of this key one day when management was pushing him to meet an unrealistic software delivery target. Joe’s a good guy and would never do anything deliberately to hurt the company, much less all Intel users around the world.

Unfortunately for the rest of us, Joe has been trading Bitcoins lately. No one will come looking for the key to all the Intel computers when they penetrate his workstation trying to steal his Bitcoin wallet. But some nefarious hacker may see this discovery as a divine omen of his destiny to create a business around penetrating the most sensitive cloud servers in the world by exploiting this hidden threat. And maybe, just maybe, if Joe happens to be reading this, he’ll have the foresight to delete the keys before it’s too late.


December 12, 2017  2:18 AM

DevOps for enterprise development a key theme at the Gartner Solutions Conference

Cameron McKenzie

Whenever I attend a software conference, I always like to survey the exhibition floor to discover what the latest trend, fashion or fad is that’s taking over the industry. At the 2017 Gartner Software and Solutions conference, held last week in Las Vegas, the dominant theme was once again DevOps for enterprise development, continuing a craze that has been pervasive at pretty much every software conference I’ve attended this year. But the big shift that seems to be happening in this space is that the top DevOps vendors are finally starting to talk about what it means to deliver better software faster.

Moving beyond software delivery

“Doing DevOps means getting more things to market faster,” said Lance Knight, COO of Go2Group, a DevOps vendor that specializes in software and services that help enterprises deal with DevOps transitions. But DevOps isn’t just about delivering software. Knight is quick to point out that the software applications companies deploy to their clients, and the features they deliver, are key elements that can set them apart from the rest of their competition. “Don’t just think about it as delivering software faster, but think of it as delivering market differentiating factors to the customer faster,” Knight said. As an example, Knight points to the ability to deposit a check into your bank account simply by taking a photo of it.

“Doing DevOps means getting more things to market faster.”
Lance Knight, Go2Group COO

Nowadays, it’s a fairly common feature of most mobile banking apps to let you use a cell phone’s on-board camera to take a picture of a check and make an immediate deposit simply by uploading the photo through the bank’s mobile app. In the banking industry, there were leaders who delivered this type of functionality to their clients early, and there are laggards who have yet to offer this feature. The ones who are using DevOps for enterprise development are able to accelerate the delivery of features and get applications to market faster. That becomes a differentiating factor among competitors.

Empowering digital transformations

IBM’s Eric Minick, a product management lead in the continuous delivery space, was another software professional on the Gartner Software and Solutions conference’s exhibitors floor advocating the DevOps approach to software development. “As organizations undergo digital transformations, they need better business and technical agility, and DevOps provides that,” said Minick. “From recognizing trends to reacting to new technical innovations, organizations that want to work faster, deliver high-quality software and take advantage of changes in the industry are adopting DevOps techniques.”

The benefits of differentiating oneself in the market, or empowering an organization to undergo a digital transformation, are fairly high-level benefits of adopting DevOps for enterprise software delivery. There are certainly a variety of reasons for adopting a DevOps process that might resonate more with the application developers who are actually delivering the software, with predictability and transparency being two key benefits of note. “Automating tasks makes software development more predictable,” said Knight. “And when the human element is removed from tasks, the processes become more transparent.”

And of course, there is the fundamental fact that organizations that implement DevOps for the enterprise tend to release software faster and have fewer bugs in their code, and that’s a pretty compelling attribute in itself. Of course, change is never easy, and organizations are still struggling as they try to incorporate DevOps processes into their application lifecycles. But as they struggle, it’s good to know that there are plenty of DevOps vendors out there providing solutions that will make their DevOps transitions easier.


What types of tools do you think would make DevOps for enterprise development easier and more manageable?


November 28, 2017  2:20 AM

MVC 1.0: The perfect fit for microservice admin tools

Cameron McKenzie

The following is a transcript of the conversation TheServerSide’s Cameron McKenzie had with Ivar Grimstad about hot topics in the Java ecosystem, with an emphasis on MVC 1.0 and the new security specification, JSR-375.



Getting people talking about MVC 1.0 and JSR-375

Cameron McKenzie: TheServerSide was really lucky to catch up with Ivar Grimstad earlier this year. These days he’s evangelizing a couple of what I think are pretty important topics. One is the new MVC framework, and the other one is Java security.

The interesting thing, though, is that despite how important these specifications are, MVC and JSR-375 just don’t quite get the headlines like, say, microservices and containers do. So I wanted to know from Ivar, what are the big things that people need to know about the new MVC specification and JSR-375.

Ivar Grimstad: If I take MVC first, we had a lot of attention around that a couple of years ago when the spec was a part of the EE platform. And there was some noise about it when Oracle took it out. And then, happily, I was fortunate to be in the position that I could become the lead of that specification, so I took it over from Oracle and kept it going. I also brought on Christian Kaltepoth. Since we were the two most active members of that spec, we were the best guys to take it further.

And there has been a little bit of silence around MVC, and we don’t get much attention anymore. The community really wanted MVC when it started and then they kind of moved away towards microservices and containers.

So while we are kind of in the backwater of the cool technology, MVC is still something I think will be used. We get a lot of community responses when I tweet or blog or say anything about it. We have a lot of contributors on the mailing list, and it’s doing fine.

Cameron McKenzie: Now, one of the things about MVC 1.0 is the fact that it seems to work really well with microservices. And I can see it being used heavily to create UIs for container-based applications. Is that where you see the focus being?

Ivar Grimstad: I also think it’s going to be used a lot in more enterprise, in-house applications, but that’s not the sexy topic that attracts the audience at conferences.

MVC 1.0 and JAX-RS

Cameron McKenzie: So in your eyes, what is it that makes MVC 1.0 so special?

Ivar Grimstad: Well, the most important thing, the way I see it, is that it’s built on top of JAX-RS, so if you’re using JAX-RS to create your REST endpoints, the transition to also add some web interfaces to your applications becomes easy. Most REST applications also have some kind of admin tool going on along with it. With MVC 1.0 we can actually build on the exact same technology used by the REST application, because with MVC we just add some flavors to JAX-RS and then we’re good to go.
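
To illustrate how small that flavoring is, here is a minimal sketch of an MVC 1.0 admin controller. It uses the javax.mvc package names from the spec drafts of that period, and the class, path and view names are invented for the example:

    import javax.inject.Inject;
    import javax.mvc.Models;
    import javax.mvc.annotation.Controller;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    // An ordinary JAX-RS resource becomes an MVC controller simply by
    // adding @Controller and returning the name of a view.
    @Controller
    @Path("admin")
    public class AdminController {

        @Inject
        private Models models; // request-scoped values handed to the view

        @GET
        public String dashboard() {
            models.put("status", "All services running");
            return "dashboard.jsp"; // interpreted as a view, not an entity
        }
    }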

Cameron McKenzie: So is MVC the new UI framework for container-based applications?

Ivar Grimstad: Definitely. I mean, if you’re creating a containerized service that also has some kind of UI to it, it makes sense to use MVC. If you have developers who are on the JAX-RS platform and know Java EE, and you’re building on that infrastructure, I see MVC as a very good fit there.

Cameron McKenzie: Now, you are also an expert on JSR-375, the new security API that’s going into Java EE. What can you tell us about that?

Ivar Grimstad: This is a brand new security API for Java EE 8.

I think it’s an important specification because it fills some of the gaps in previous versions. We introduce a common terminology, so we are all talking about the same thing when we talk about security concepts such as the authentication mechanism. We also have more application developer-managed support. So you can easily add security with annotations, and you don’t need to do any container- or vendor-specific configuration to get it up and running.
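
As a rough sketch of that annotation-driven style, the example below declares both an authentication mechanism and an identity store with JSR-375 annotations. The data source name, queries and class name are hypothetical:

    import javax.enterprise.context.ApplicationScoped;
    import javax.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;
    import javax.security.enterprise.identitystore.DatabaseIdentityStoreDefinition;

    // Both the authentication mechanism and the identity store are declared
    // inside the application itself; no vendor-specific server setup needed.
    @BasicAuthenticationMechanismDefinition(realmName = "adminRealm")
    @DatabaseIdentityStoreDefinition(
        dataSourceLookup = "java:comp/env/jdbc/appDb",  // hypothetical data source
        callerQuery = "SELECT password FROM users WHERE name = ?",
        groupsQuery = "SELECT role FROM user_roles WHERE name = ?"
    )
    @ApplicationScoped
    public class SecurityConfig {
    }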

Standardized security with JSR-375

Cameron McKenzie: Now, when I read the JSR-375 spec, I kinda say to myself, you know, “Really? Have we not standardized a lot of this stuff already?” I guess a lot of the stuff like custom user registry APIs and how we connect to user repositories, stuff that’s been managed by the vendor in the past. So the developer really hasn’t had to think about it. But, yeah, I mean, do you not get that impression, “Jeez, how did we get to 2017 and not have this stuff standardized already?”

Ivar Grimstad: Yeah, that’s true. And we have the same feeling. But now it’s there, and that’s a good thing. And it’s definitely a good foundation to build upon.

Cameron McKenzie: So what is it about JSR-375, the Java security spec 1.0, that makes it so conducive to working with microservices?

Ivar Grimstad: You do the security in the application, so you don’t need to configure it from the outside. The security configuration is contained in your application.

Cameron McKenzie: So what are the big topics you see going forward into 2018?

Ivar Grimstad: Since I’m moving around in the Java EE world, I think that one of the main topics we are gonna discuss is the Java EE move to the Eclipse Foundation. And there’s also a lot of discussion already on Twitter about the naming, because they announced the name for it to be Eclipse Enterprise for Java, and people of course have opinions about that. So I think that’s gonna be discussed a lot.

Java: A curse or a blessing

Cameron McKenzie: Now, here is a question I have been asking a number of people lately. It’s this: looking back over the past six or seven years, do you think being the steward of the Java platform has been a blessing or a curse for Oracle?

Ivar Grimstad: I think they are making big money on Java, so I think it’s been pretty good for them. So I don’t think it’s been a curse. I think the handling of EE 8 in 2016 was not good. And we saw the community react to that with the Java EE Guardians and the MicroProfile, which grew out of that. But the turn they have now taken to open-source things, like open-sourcing NetBeans to Apache and Java EE to the Eclipse Foundation, and also open-sourcing more of the JDK tooling, is a step in the right direction. I think it’s gonna get a positive reception.


November 23, 2017  2:15 AM

The impact of Java SE 9 on operations and development teams

Cameron McKenzie

Just prior to JavaOne, TheServerSide spoke with ZeroTurnaround’s Simon Maple about all of the things going on with Java SE 9 and the greater Java ecosystem. A couple of interesting articles resulted from the conversation, so we thought it might be worthwhile publishing the interview in its entirety.



Cameron McKenzie: There’s a million things going on in the world of Java these days. What are the topics you believe to be of the most importance when it comes to Java and Java SE 9?

Simon Maple: Well, let’s start with what’s happening in Java. There are so many interesting things happening in Java right now. Java EE is being pushed over to Eclipse, Java SE is going forward being driven first with OpenJDK, the cadence of Java SE releases is now every six months; there are some really, really interesting things happening.

If you look at Java SE 9, you’ll see there are some interesting things going on with the JDK. Obviously, it was delayed by an extra year, so it’s been three years in the making. But one of the things which JDK 9 delivers is the module system. For me, it’s something which developers aren’t really gonna get involved with too much. People who really want modularity are gonna be using OSGi or something similar. People who think, “Yeah, okay, it’s gonna be an okay idea,” aren’t necessarily champing at the bit to get it anyway. So I’m not sure people are going to be jumping at modularity.

It does have big benefits in terms of the future of Java: how we can reduce the footprint of Java, how we can develop modules that can be incubated, like HTTP/2, and things like that. So it provides a lot of promise, but it’s just not there yet. I think it’s gonna take a long time for the industry and the ecosystem to really embrace modules. Because obviously for all the frameworks, libraries, tools and vendors, it’s gonna take time to add support so that application developers can use these frameworks and tools alongside their everyday development. We believe that the adoption of Java 9 is gonna be as big as any of the previous releases, really.

Who benefits from Java modularity?

Cameron McKenzie: Now, this year at JavaOne, project Jigsaw and modularity is a huge topic. But who benefits the most from modularity? Is this something that only the tool vendors are really gonna start using or is it something that the typical everyday enterprise software developer can start using and leverage in the code that they develop?

Simon Maple: I think it actually benefits a number of different people, but not everyone in a massive way. But across the board, when it helps a number of different divisions, like operations or development or the business, then it might be a good choice. Let’s take them one at a time. If you look at the development team, how it benefits them largely is, if you have a large distributed team where you have different developers writing different components of a large application, it’s actually a great way to make sure that other teams using your components and your APIs are using them as you’d expect them to. This gives you much greater power to then make changes to your code knowing you’re not gonna break the people who use your code. Because with modularity, you can effectively say, “These are the APIs I want to expose, everything else I wanna hide.”
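
As a minimal sketch of that idea, here is a module descriptor for a hypothetical payments component; the module and package names are invented:

    // module-info.java for a hypothetical payments component: only the
    // api package is visible to other teams; everything else is hidden.
    module com.example.payments {
        requires java.sql;                 // explicit dependency
        exports com.example.payments.api;  // the public surface

        // com.example.payments.internal is not exported, so other modules
        // cannot compile against it, and changes there cannot break them.
    }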

So from a developer point of view, that’s actually really beneficial. You can also, as a developer, deliver your updates quicker, because unlike in a typical Java application that doesn’t use modularity, you can upgrade a single module at a time.

Let’s take Java as an example. We’ve seen in the past huge releases containing many, many things, and the reason they get pushed out a year, six months, two years, is largely because we have been waiting for different features. So Java 8 was delayed because of lambdas, and Java 9 was delayed because of Jigsaw. In fact, Java 8 was also largely delayed because of Jigsaw, which was later pulled. So all the other benefits in Java, you can’t get hold of, because you’re constantly waiting for this one big drop. If you’re looking at something more modular, you could actually upgrade some modules without needing to upgrade the whole application at once.

So from an operations and a development point of view, and in fact from the business point of view as well, you’re much more reactive in terms of how quickly you can fix bugs and how quickly you can move in terms of your feature planning and things like that. So from that point of view, it’s very, very powerful for the business and very powerful for actually pushing your features to market. And that’s purely from the business point of view.

From the operations point of view, it’s actually quite nice. Well, it can be a pain and it can be good. It can be good because when you are dealing with individual modules, you’re far more isolated in terms of where your code is when your applications are modular. However, it can also be slightly more tricky because you have dependencies; the actual deployment can be trickier.

So modularity is not a silver bullet for everyone, but different people will individually find different value in modules.

The timing of Java SE 9

Cameron McKenzie: Now why is it that all of these announcements that pertain to Java SE 9 and the Java platform came out just before JavaOne? Is that just coincidental or is this a matter of Oracle just getting better at doing PR before the big conference?

Simon Maple: So, let’s take each announcement, because I think for each announcement there’s no one reason why they came out when they did, other than maybe lining them up for JavaOne 2017. But I think each announcement has different drivers. You know, I’m not getting any information directly from Oracle, so I’m speculating. But in my opinion, let’s look at Java EE. Obviously, Oracle had the year of drought where they weren’t talking about Java EE specs. Personally, I believe Oracle was pushed into a corner to say, “Hey, let’s progress Java EE 8 and 9.” If you actually look at what happened over the last year of Java EE in terms of delivering features to 8, a lot of it’s been pushed over to 9, and personally I think moving Java EE to the Eclipse Foundation benefits both Oracle and Java EE, because I think it relieves Oracle of the burden that Java EE has placed upon them.

From that point of view, they don’t need to worry about it anymore. From the Java EE point of view and the community point of view, I think they’re gonna have a lot more ownership of Java EE, it now being part of the Eclipse Foundation, and it can now go at the speed at which the community wants to drive it. So I think it’s a move whereby everyone’s happy. Oracle doesn’t look like the bad guy anymore, they’re not going to hold it up, and the community can push it as far as they wanna push it. So it’s gonna be interesting to see over the next few years how much effort Oracle will put in in terms of supporting the projects in Eclipse, in terms of how many developers they’re gonna provide to support each of the specs, not just delivering code but pushing the specifications forward. So I think that’s really gonna show how much Oracle plans to invest in Java EE, but I think it’s beneficial.

In terms of them pushing the cadence to six months now, I think there are two reasons for that. The first reason is because everyone is getting a little bit fed up with Java slipping constantly. We’ve seen releases delayed three, four, five years. Obviously, the five years…was it five or six years? That was really because of the stuff that moved inside Oracle. But since Java 9 is now being pushed out and we have a module system, we can develop much faster and we can provide smaller features quicker. So it does make sense for Java, now that it’s modularized, to make use of that and to say, “Right, now we’re gonna be pushing out different pieces of different modules when they’re ready. So every six months, whatever’s ready to be pushed out, let’s make it available.” So I think that’s really, really good for Java.

For the ecosystem, it’s gonna be hard work. I think it’s probably gonna be harder work for the ecosystem than it is for Oracle to maintain. Because for Oracle, they just need to continue developing, and when a feature is ready to go into the main branch, they push it into the main branch and we’re good to go. But the ecosystem is now dealing with many more releases: if we look at just next year, we’ve got Java 9 for the ecosystem to support, and then 18.3 and 18.9 as well. Java 9 is gonna be a well-supported release. Java 18.9 is gonna be a long-term support release. Java 8 is now gonna be commercially supported till 2025. So there are a large number of releases that tools are gonna have to support, and it’s no good for tools to say, “We’re only gonna support the long-term support releases,” because they’re gonna lose customers.

So they’re gonna have to support every single release of Java, and frameworks the same; application servers are more likely gonna support just the main long-term support versions. Libraries are gonna have to support all the versions of Java. So for the ecosystem, it’s gonna be a lot more work, a lot more testing. It’s gonna be interesting to see how they keep up, particularly for those libraries and frameworks which don’t have large communities and large numbers of committers to do this work, so that’s gonna be extremely interesting. And the third thing was Oracle putting OpenJDK first. They’re obviously gonna still have their own commercial kind of support branch, which is fine.

What’s new at ZeroTurnaround?

Cameron McKenzie: Now I notice ZeroTurnaround has a booth on the exhibitors’ floor. What’s going on with ZeroTurnaround at JavaOne? What are the big things that you guys are working on? Are there any big product announcements and what are you guys doing to draw people into your booth in the exhibition hall?

Simon Maple: We are obviously working very, very hard to support Java 9 in JRebel, which is gonna be a big, big task, obviously, because JRebel is so deeply connected to the low-level parts of the JVM. We’re looking to release support for that very soon after the Java 9 release. Yes, we are making some big moves in and around the developmental productivity…I’m sorry, the development performance market. We already have XRebel, as you know. So we’re gonna be making some announcements over the next week or so, and it’s gonna be extremely interesting because we’re gonna be very disruptive in the performance management space. That’s gonna be extremely interesting to me over the next few weeks as well.

Cameron McKenzie: So to get more insights from Mr. Maple, you can always follow him on Twitter @sjmaple. And for that matter if you wanna learn more about some product announcements that are coming out from ZeroTurnaround, you might wanna follow them on Twitter as well, @zeroturnaround.

You can follow Cameron McKenzie on Twitter: @cameronmcnz


November 23, 2017  1:26 AM

Twelve ways to be a more trustworthy serverless services vendor

Daisy McCarty

Serverless services may be the big trend in IT these days, but it’s still a service-full world out there, and virtually every organization is relying on third party services to keep their technology going. Now, more than ever, companies are finding it critical to choose the right partnerships. Your clients are relying on you to keep your commitments so they can keep theirs. But what practices and habits make it clear that you’re a vendor that clients can trust?

Patrick Debois, CEO of Zender TV, has definite opinions about what makes a high-quality, trustworthy serverless services vendor. Here are twelve things Patrick takes into account when selecting vendors to bring into his circle of trust. Follow these guidelines to become a better vendor for your customers—and use the same criteria to choose better partners for your ecosystem, regardless of whether you’re in the field of serverless services or not.

  1. Communicate about your status. Amazon learned the hard way about the importance of transparency during their first major outage. The services company was swamped with inquiries about what was going on. A simple status page would have answered many of these questions and kept tech support from being so overwhelmed.
  2. Monitor the other agents you depend upon. This is the key to not being blindsided when critical services are experiencing issues. You can only let your customers know what’s going on when you’re keeping your finger on the pulse of your ecosystem.
  3. Do a post-mortem after a failure. Debois put it bluntly, “If you had a real failure, man up and describe what actually happened.” When people understand what went wrong, and how the issue is being addressed to prevent future failure, they can make better choices about managing their own risk.
  4. Be proactive. This could be as simple as warning customers to take action to head off issues. For example, if they are using a service outside the scope of what it was intended to handle, let them know up front that they might run into issues. Also, publish a change log with new features so customers know what’s coming up and can check other dependencies on their end.
  5. Expose your metrics in more detail. Even if this means revealing that your company is having trouble with a specific aspect of performance, honesty is valuable. Engineers working in your clients’ organizations want to see what’s going wrong so they don’t waste time trying to debug something on their side when the real issue is on your side.
  6. Keep people updated. This can be done with email or other communication. Consider what information you’d want to receive from your vendors, and use that as a guideline for what to share.
  7. Make it easy to get data out. This is a big deal for Debois. “Something I look for when I use a service is whether I can get data out.” Even more important, he wants to know if he can easily reproduce settings. Going back to the “factory default” and having to rebuild settings can be a huge task after a failure. However, he says it is rare to find a vendor that allows a full data dump including settings. Be the exception.
  8. Talk at conferences like JavaOne. Even if it’s not specifically about your services, being willing to share knowledge is a sign that you are there to help and that you care about helping users have a better experience.
  9. Contribute to open source. Again, this allows potential customers to see the quality of your work and your commitment to supporting best practices. They can see how you do documentation as well.
  10. Give users a voice. Allow the community to vote on upcoming features so that your offerings are tailored to more effectively address their concerns and needs.
  11. Show that you listen. Responding to all requests quickly is crucial for maintaining trust. Patrick says he is particularly impressed by organizations that “listen in” to conversations about their services on Twitter and jump in to respond in real time to ask questions and propose solutions. Having engineers that aren’t afraid to talk to people is a huge plus.
  12. Provide feedback to other vendors you depend on for services. Sometimes, it takes an outside voice to prompt action, even if internal engineering teams have been pushing for the change for a while. Being a good customer makes you a better vendor by helping improve the entire ecosystem.

So do you want to be a more trustworthy serverless services vendor, or just a more trustworthy vendor in general? These are twelve ideas you should take into account.


November 23, 2017  1:26 AM

Constructors or static factory methods?

yegor256

I believe Joshua Bloch said it first in his very good book “Effective Java”: static factory methods are the preferred way to instantiate objects compared with constructors. I disagree. Not only because I believe that static methods are pure evil, but mostly because in this particular case they pretend to be good and make us think that we have to love them.
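
For readers who haven’t seen the pattern, here is the contrast in a nutshell, using a made-up Temperature class; this sketch argues for neither side:

    public final class Temperature {
        private final double celsius;

        // Plain constructor: the "new" at the call site makes the
        // created type explicit.
        public Temperature(double celsius) {
            this.celsius = celsius;
        }

        // Static factory method, the style Bloch recommends: it has a
        // descriptive name, and it is free to cache, convert, or return
        // a subtype instead of always allocating.
        public static Temperature fromFahrenheit(double fahrenheit) {
            return new Temperature((fahrenheit - 32) * 5.0 / 9.0);
        }
    }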


November 23, 2017  1:25 AM

How spending too much time debugging causes trouble

OverOps

We asked some of the engineering teams that we work with how much time their developers spend on debugging processes. The answer? On average, 25% of developers’ time is spent on debugging alone. That means that more than a full day of the work week is dedicated to troubleshooting and problem solving.
 
In a previous post, we looked at 5 of the top areas where this time is spent (think searching through log files, reproducing errors, “war room”-type situations, etc.). Now, we need to talk about how all of this time spent debugging impacts your company.
 


November 13, 2017  1:39 PM

Five features to make Java even better

yegor256

I stumbled upon this proposal by Brian Goetz for data classes in Java, and immediately realized that I too have a few ideas about how to make Java better as a language. I actually have many of them, but this is a short list of the five most important.


November 13, 2017  2:27 AM

Shortcomings of DevOps cause security bug detection to suffer

Cameron McKenzie

Earlier this year we spoke with Jim Manico of Manicode Security. It was immediately prior to Oracle OpenWorld 2017, at which Manico was delivering a JavaOne session on Java SE 9 security.

There are plenty of new tools and technologies in the latest version of the JDK to help minimize the number of Java security bugs that developers might encounter. Of course, it’s not good enough just having technologies like JEP-273 (DRBG-Based SecureRandom Implementations), JEP-290 (Filtering of Incoming Serialization Data), and the new unlimited JCE (Java Cryptography Extension) in the Java 9 specification. What’s important in terms of minimizing the number of Java security bugs that get into production is having developers that know what these various security controls do and how to use them.
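
As a quick sketch of two of those controls in Java 9 (illustrative only, not a full treatment):

    import java.security.SecureRandom;
    import javax.crypto.Cipher;

    public class Java9SecurityControls {
        public static void main(String[] args) throws Exception {
            // JEP-273: a DRBG-based SecureRandom can be requested by name.
            SecureRandom drbg = SecureRandom.getInstance("DRBG");
            byte[] nonce = new byte[16];
            drbg.nextBytes(nonce);

            // With the unlimited JCE policy in effect, 256-bit AES keys
            // are permitted; this prints the maximum allowed key length.
            System.out.println(Cipher.getMaxAllowedKeyLength("AES"));
        }
    }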

Talking to Jim Manico about Security

The following is a transcript of the interview between TheServerSide’s Cameron McKenzie and Mr. Manico in which a variety of Java security topics are addressed, including how Java modularity will impact how security bugs are addressed, the shortcomings of DevOps automation tools when Java security bugs arise, and of course, insights on various Java security controls that are new in Java SE 9.



Cameron McKenzie: When it comes to enterprise Java security, what are the things that concern you and what are the things that should concern people who are doing enterprise software development with Java?

Jim Manico: The things that really concern me are the risks against Java, and Java is being attacked. To a lot of developers, to be honest with you, this is esoteric stuff. So it’s not necessarily the most exciting or sexy feature of Java, but I daresay it’s the necessary stuff. For example, there’s a new JSR that addresses serialization and tries to allow for direct whitelist filtering of exact classes at multiple tiers within your Java application. This is not exciting stuff, right? But it’s necessary to have a secure Java application, or at least it gives you a way to address known Java risks. You know, it’s not sexy, but it’s important.
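
That whitelist filtering is what JEP-290 provides in Java 9. Below is a minimal sketch; the class names in the filter pattern are hypothetical:

    import java.io.ByteArrayInputStream;
    import java.io.ObjectInputFilter;
    import java.io.ObjectInputStream;

    public class FilteredDeserialization {
        static Object readTrusted(byte[] bytes) throws Exception {
            try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
                // Whitelist: accept only the named classes, reject all others.
                in.setObjectInputFilter(ObjectInputFilter.Config.createFilter(
                        "com.example.Order;com.example.Customer;java.lang.*;!*"));
                return in.readObject();
            }
        }
    }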

Avoiding Java security bugs

Cameron McKenzie: Quite often, the enterprise Java developer just focuses on fulfilling the business requirements and doesn’t really concern themselves with the security implications of the code they’re writing. For some of the code the developer writes on an everyday basis, what are some of the security concerns they should take into account in order to keep Java security bugs from finding their way into the code?

Jim Manico: Sometimes the use of numerics for financial processing is a lot trickier than people would expect. Other problems arise when you’re using older-school Java technologies and you wanna provide certain kinds of controls, like escaping to stop cross-site scripting; that’s not part of the core language. There are some parts of it in J2EE, and there are some frameworks that provide it, but it’s not in the core of the language. And it’s a control that I think should be made more readily available to the developer.
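
As an example of the kind of control he means, contextual output escaping today typically comes from a third-party library. The sketch below uses the OWASP Java Encoder; the surrounding class and method are invented:

    import org.owasp.encoder.Encode;

    public class CommentRenderer {
        static String render(String userComment) {
            // Untrusted input is escaped for the HTML context before it
            // is concatenated into markup, neutralizing injected scripts.
            return "<p>" + Encode.forHtml(userComment) + "</p>";
        }
    }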

There’s also the issue of all the cryptographic APIs. I think some of the best cryptographic APIs are not in the core of Java. They’re in different places, like Google projects. Google Tink is a new project. It’s a cryptographic API that makes the Java developer’s world of interacting with low-level cryptographic APIs easier. I’d love to see more of those kinds of APIs closer to the core. Let me rephrase that: the closer these APIs are to the core language, the more likely we’re gonna get use from the developers, right?
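
To show the flavor of that higher-level style, here is a sketch against one version of the Tink API; Tink’s class names have shifted across releases, so treat this as illustrative rather than definitive:

    import com.google.crypto.tink.Aead;
    import com.google.crypto.tink.KeysetHandle;
    import com.google.crypto.tink.aead.AeadConfig;
    import com.google.crypto.tink.aead.AeadKeyTemplates;

    public class TinkDemo {
        public static void main(String[] args) throws Exception {
            AeadConfig.register(); // one-time registration of AEAD primitives

            // Authenticated encryption without handling IVs, cipher modes
            // or padding directly.
            KeysetHandle keyset = KeysetHandle.generateNew(AeadKeyTemplates.AES128_GCM);
            Aead aead = keyset.getPrimitive(Aead.class);

            byte[] ciphertext = aead.encrypt("secret".getBytes(), "context".getBytes());
            byte[] plaintext  = aead.decrypt(ciphertext, "context".getBytes());
        }
    }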

We’re all Java security engineers

Cameron McKenzie: Now I’m paraphrasing you a bit here, so correct me if I’m wrong. But in one of the previous sessions of yours that I attended, I remember you talking about how you felt that software developers and DevOps people should actually consider themselves security engineers nowadays. What exactly did you mean by that?

Jim Manico: You know, the world’s changing. So I’d argue that most developers, whether they believe it or not, whether they think it or not, or whether they even act on it or not, are security engineers now. Because the code that they’re writing is on the front lines of protecting organizations from data loss, financial damage, reputation damage, privacy violations, and compliance regulations and fines. These developers and the code they write are on the front line of all those issues and more. So they’re security engineers; it’s just a matter of whether they’re gonna act like it or not. And rather than beating a developer over the head with pen tests and having security teams run tools, you know, especially early in the maturity curve, training the developers to attack their own code really will help change things.

Later on in the maturity curve, developers are part of conversations at the early part of design, with existing security libraries, knowledge, and controls in place, like rigorous authentication and access control services and advanced cryptographic services available to developers. And these are controls at very early stages of designing software; that’s the ideal, right? But at first, if you’re at the early stage of maturity, and maybe you haven’t done application security in your organization ever before, then I think a lot of assessment, just to see initially where developers are, and showing them exploits against their software, is usually a good way early on to help shake up a culture. But, you know, we wanna do this way more proactively as we get better at writing secure software.

The limits of DevOps based security

Cameron McKenzie: Does DevOps change the software security game? How do you feel about DevSecOps?

Jim Manico: It’s a piece of the puzzle.

I think it’s a nice word, but the concept is that we’re automating all aspects of the software development life cycle. We’re automating the build process; we’ve done that a lot of times. We’re integrating the security tests into the build life cycle at different phases of building and deploying software. What else are we doing? We’re automating dynamic tests, we’re automating code scan tests, we’re automating deployment of software. And as we deploy software, we run this huge batch of tests, automated unit tests and dynamic tests and static tests, against our code looking for security bugs. Maybe we find security bugs and we stop the build and don’t allow that code to be deployed. Maybe it’s just a warning and we ship it anyway. There are many gradients to how we do that automation.

DevOps also talks about automating, you know, different dashboards and alerts, so people who are monitoring the application in real time get better intel on security when it’s happening. These are things we’ve done for a long time in software. I think DevOps is putting a little more rigor around it, putting a nice name in front of it, and trying to, at least from my world, add more security to it with as much automation as possible.

Now the other side to that is there are some elements of application security that don’t translate really well to automation. A turnkey tool, especially a tool that’s tuned to work fast in a DevOps environment, is not always great at finding access control problems, or business logic problems, or deeper problems that maybe a pen test would find; a pentester will find those manually where automation may miss the problem. The dark side of DevOps is that we still need people. We can’t just automate everything. We still need people involved at some level of a deep review, I think, to really provide deeper security assurance, if that’s what you want.

Cameron McKenzie: Now tell me honestly, is the deprecation of the Java Web Browser Plug-in the greatest thing to happen to security analysts?

Jim Manico: As a programmer, I’m like, “whatever,” but as an infrastructure person who’s trying to manage a fleet of PCs or Macs or whatnot to keep an organization secure, not supporting that is usually a good thing, right? I don’t wanna trash Java, but Java on the client traditionally has not been a great thing. A lot of policies heavily limit how Java runs on the client. And so this is one kick in that direction, which is nice. I’m not saying Java on the client is bad; it’s just good for administrators to do as much as possible to control clients’ JVMs. And having rogue applets from any website just running in the browser and similar technology is just not an ideal situation, right? So we wanna restrict that and manage that as best as we can.

Software security tooling

Cameron McKenzie: Now when you’re working as a security consultant and you go into an organization, what are some of the tools and the governance models and the policies that you like to see already in place as soon as you get in there?

Jim Manico: A good authentication service is usually a good idea, right? So for a developer, especially as your organization matures, rather than having every developer re-code parts of an authentication service in some way, having a rigorous one that all developers can use in a standard way makes that front gate of your application easier to lock down. And the second layer I look at is, of course, access control. Having a good access control service and a series of methodologies, database-driven rules and configuration capability, all that good stuff for really good horizontal, detailed, permission-level access control in a way that all developers on your team can leverage, that’s a good idea.

You know, we also wanna be able to build user interfaces in a secured fashion. Whatever framework we’re using, we wanna understand how to keep it locked down against scripting attacks. And we also wanna make sure developers understand, you know, what’s proper data flow through a client, and what we should and should not be storing on a client long term. What kind of logic should and should not run on a client? What stuff should we push more server-side? What shouldn’t we even do on the client? You know, how are we relating data access and user interface functionality access with the same consistent set of access control rules on both the client and the server?

You know, all these things are things that developers do day in and day out that they really need to understand how to use and get right. And if we don’t have these tools available, if we’re like, “Yo, developer, welcome to the team, go figure out access control now,” good luck with that. That’s so critical an area of your software that it’s not something you should just casually put into your application. It should be extremely intentional.

Cameron McKenzie: And, you know, there are a whole bunch of hot topics rising in the Java ecosystem. There’s lots of talk about containers and microservices and coding. But security is important. And if you wanna learn about security, there’s no better speaker to learn from than Jim Manico.


You can follow Jim Manico on Twitter: @manicode
You can follow Cameron McKenzie on Twitter: @cameronmcnz

 


November 7, 2017  1:18 AM

From monoliths to cloud native composition with Apprenda’s Sinclair Schuller

cameronmcnz Cameron McKenzie Profile: cameronmcnz

In our series on cloud native computing, TheServerSide spoke with a number of experts in the field, including a number of members of the Cloud Native Computing Foundation. The following is the transcription of the interview between Cameron McKenzie and Apprenda’s Sinclair Schuller.

 


An interview with Apprenda’s Sinclair Schuller


How do you define cloud native computing?

Cameron McKenzie: In TheServerSide’s quest to find out more about how traditional enterprise Java development fits in with this new world of cloud native computing that uses microservices, Docker containers and Kubernetes, we tracked down Sinclair Schuller. Sinclair Schuller is the CEO of Apprenda. He’s also a Kubernetes advocate and he sits on the governing board of the Cloud Native Computing Foundation.

So the first thing we wanted to know from Sinclair was, well, how do you define cloud native computing?

Sinclair Schuller: Great question. I guess I’ll give you an architecturally-rooted definition. To me, a cloud native application is an application that has an either implicit or explicit capability to exercise the elastic resources that it lives on, and that has a level of portability that makes it easy to run in various scenarios, whether it be on cloud A or cloud B or cloud C. But in each of those instances, its ability to understand and/or exercise the underlying resources in an elastic way is probably the most fundamental definition I would use. That’s different from traditional, non-cloud-native applications that might be deployed on a server. Those have no idea that the resources below them are replaceable or elastic, so they could never take advantage of those sorts of infrastructure properties. And that’s what makes them inherently not scalable, inherently not elastic and so on.

Cameron McKenzie: Now, there’s a lot of talk about how the application server is dead, but whenever I look at these environments that people are creating to manage containers and microservices, they’re bringing in all of these tools together to do things like monitoring and logging and orchestration. And eventually, all this is going to lead to some sort of dashboard that gives people a view into what’s happening in their cloud native architecture. I mean, is the application server really dead or is this simply going to redefine what the application server is?

Sinclair Schuller: I think you’re actually spot-on. I think this whole “application server is dead” thing is, unfortunately, a consequence of cheesy marketing where new vendors try to reposition old vendors. And it’s fair, right? That’s just how these things go, but nothing about even the old app server is dead. In many cases people will still deploy their apps to something like Tomcat sitting in a container. So that happens, so that hasn’t gone away per se, even if you’re building new microservices.

Has the traditional heavyweight app server gone away? Yes. So I think that has fallen out of favor. Take the older products like a WebLogic or something to that effect, you don’t see them used in new cloud native development anymore. But you’re right, what’s going to happen, and we’re seeing this for sure, is there’s a collection of fairly loosely coupled tooling that’s starting to surround the new model and it’s looking a lot like an app server in that regard.

This is a spot where I would disagree with you: I don’t know if there’s going to be consolidation. But certainly, if there is, then what you end up with is a vendor that has all the tools to effectively deliver a cloud native app server. If there isn’t consolidation and this tooling stays fairly loosely coupled and fragmented, then we have a slightly better outcome than the traditional app server model. We have the ability to pick best-of-breed tooling piecemeal, the way we need it.

Cloud native computing and UI development

Cameron McKenzie: One of the things that a lot of developers and architects are asking is, “What is the role of the UI in a world of cloud native computing?” Is it simply to be developed by a front-end JavaScript framework like Angular UI? Is it actually embedded inside of the application that gets deployed to the container? What’s actually going on in the UI development space in terms of cloud native development?

Sinclair Schuller: I think to a degree, the UI battle has been won, so to speak, right? It seems like the market’s settled on JavaScript and libraries like Angular to effectively build out frontends for these backend services. I guess there isn’t too much tumult or controversy anymore on that side, which is why you don’t hear about it too much. The backend side of things lagged a bit, and now containers came in and that’s being revolutionized. So I think of what’s happening with backend services and containers as very similar to what happened with JavaScript, say, 10 years ago when it really started to take hold. So I think the role that JavaScript and HTML5 play is already pretty well defined and we’re not going to see a ton of change there.

Cameron McKenzie: I can develop a “hello world” application. I can write it as a microservice. I can package it in a Docker container and I can deploy it to some sort of hosting environment. What I can’t do is I can’t go into the data center of an airline manufacturer and break down their monolith, turn it into microservices, figure out which ones should be coarse grain microservices and figure out which ones should be fine grain microservices. I’ve no idea how many microservices I should end up with and once I’ve done all that, I wouldn’t know how to orchestrate all of that live in production.

Apprenda’s role in advancing cloud native computing

How do you do that? How do organizations take this cloud native architecture and this cloud native infrastructure and scale?

Sinclair Schuller: Yeah, so you actually just described our business. Effectively, what we notice in the market is that if you take the world’s largest companies that have been built around these monolithic applications, they have the challenge of “how do we decompose them, how do we move them forward into a modern era? And, of course, some applications don’t need that, so how do we at least run them in a better way in a cloud environment so that we can get additional efficiencies, right?”

So what we focused on is providing a platform for cloud native and also ensuring that it provides, for lack of a better term, era-bridging capabilities. Now, the way we built Apprenda, our IP will allow you to run a monolith on the platform and we’ll actually instrument changes into the app and have it behave differently so that it can do well on a cloud-based infrastructure environment, giving that monolith some cloud architecture elements, if you will.

Now, why is that important? If you can do that and you can quickly welcome a bunch of these applications onto a cloud native platform like this and remove that abrupt requirement that you have to change the monolith and decompose it quickly into who knows how many microservices, to your point, it affords a little bit of slack so the development teams in those enterprises can be more thoughtful about that decision. And they can choose one part of the monolith, cleave it off, leverage that as an actual pure microservice, and still have that running on the same platform and working with the now remaining portion of the monolith.

By doing that, we actually encourage enterprises to accelerate the adoption of a cloud native architecture, since it’s not such an abrupt required decision and such an abrupt change to the architecture itself. So for us, we’re pretty passionate about that step. And then second, there’s the question of how do I manage tons of these all at once? The goal of a good cloud abstraction like Apprenda, in my opinion, a good platform that’s based on Kubernetes, is to make managing 10, 1,000 or N microservices feel as easy as running 1 or 2.

And if you can do that, if you can turn running 1 or 2 into kind of the M.O. for running just tons of microservices, you remove that cost from the equation and make it easier for an enterprise to digest this whole problem. So we really put a lot of emphasis on those two things: how do we provide that bridging capability, so that you don’t have to have such an abrupt transition and can move on a timeline that’s more comfortable to you and that fits what you need, and how do we also deal with the scale problem?

Ultimately the only way that scale problem does get solved, however, is that a cloud platform has to truly abstract the underlying infrastructure resources and act as a broker between the microservices tier and those infrastructure resources. If it can do that, then scaling actually becomes a bit trivial, because it’s a consequence of a properly architected platform.

Cameron McKenzie: Now, you talked about abstractions. Can you speak to the technical aspects of abstracting that layer out?

Sinclair Schuller: Yeah, absolutely. So there are a couple of things. One is if you’re going to abstract resources, you typically need to find some layer that lets you project a new standard or a new type of resource profile off the stack. Now, what do I mean by that? Let’s look at containers as an example.

Typically I would have the rigid structure of my infrastructure: this VM or this machine has this much memory, it has one OS instance, it has this specific networking layout. Now if I want to actually abstract it, I need to come up with a model that can sit on top of that, give the app some sort of meaningful capacity, and divvy it up in a way that is no longer specifically tied to that piece of infrastructure. So containers take care of that and we all understand that now.

But let’s look at subsystems. Let’s say that I’m building an application and we’ll start with actually a really trivial one. I have something like logging, right? My application logs data to disk, usually dumping something into a file someplace. If I have an existing app that’s already doing that, I’m probably writing a bunch of log information to a single file. And if I have 50 copies of my application across 50 infrastructure instances, I now have 50 log files sitting around in the infrastructure who knows where. And as a developer, if I wanted to debug my app, I have to go find all of that.

With Apprenda, what we focus on is doing things like, “Well, how can we abstract that specific subsystem? How can we intervene in the logging process in a way that actually allows us to capture log information and route it to something like a decentralized store that can aggregate logs and let you parse through them later?” So for us, whenever we think about doing this as a practical matter, it’s identifying the subsystems in an app architecture, like logging, like compute consumption, which containers take care of, like identity management, and actually intercepting and enhancing those capabilities so that they can be dealt with in a more granular way and in a more affordable way.
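As a rough sketch of that kind of logging interception, here is a java.util.logging handler that routes records to a centralized store instead of a local file. The aggregator URL and the handler itself are hypothetical stand-ins, not Apprenda’s actual implementation:

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.logging.Handler;
import java.util.logging.LogRecord;
import java.util.logging.SimpleFormatter;

// Intercepts the normal logging path and ships each record to a central
// store, so 50 instances don't leave 50 log files on 50 machines.
public final class CentralizedLogHandler extends Handler {

    private final URL aggregator;

    public CentralizedLogHandler(String aggregatorUrl) throws IOException {
        this.aggregator = new URL(aggregatorUrl); // hypothetical log-aggregation endpoint
        setFormatter(new SimpleFormatter());
    }

    @Override
    public void publish(LogRecord record) {
        if (!isLoggable(record)) return;
        try {
            HttpURLConnection conn = (HttpURLConnection) aggregator.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(getFormatter().format(record).getBytes(StandardCharsets.UTF_8));
            }
            conn.getResponseCode(); // fire-and-forget; real code would batch and retry
        } catch (IOException ignored) {
            // Never let log shipping take the application down.
        }
    }

    @Override public void flush() {}
    @Override public void close() {}
}
```

Wiring it in is one line, e.g. `Logger.getLogger("").addHandler(new CentralizedLogHandler("http://logs.internal/ingest"))`, where the hostname is again made up for the example.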

Preparing for cloud computing failures

Cameron McKenzie: Earlier this year we saw the Chernobyl-esque downfall of the Amazon S3 cloud and it had the ability to pretty much take out the internet. What type of advice do you give to clients to ensure that if their cloud provider goes down that their applications don’t go out completely?

Sinclair Schuller: I think the first part of that is culturally understanding what cloud actually is and making sure that all the staff and the architects know what it is. In many cases, we think of cloud as some sort of, like, literally nebulous and decentralized thing. Cloud is actually a very centralized thing, right? We centralize resources among a few big providers like Amazon.

Now, what does that mean? Well, if you’re depending on one provider, like Amazon, for storage through something like S3, you can imagine that something could happen to that company or to that infrastructure that would render that capability unavailable, right? Instead, what I think happens is that culturally people have started to believe that, you know, cloud is foolproof, it has five nines or nine nines, pick whatever SLA you want, and they rely on that as their exclusive means for guaranteeing availability.
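For a back-of-the-envelope sense of what those nines actually allow (this arithmetic is generic, not tied to any particular provider’s SLA):

```java
public class SlaMath {
    public static void main(String[] args) {
        // "Five nines" (99.999%) still permits roughly 5.3 minutes of downtime a year.
        double availability = 0.99999;
        double minutesPerYear = 365.25 * 24 * 60; // ~525,960 minutes
        System.out.printf("Allowed downtime: %.2f minutes/year%n",
                (1 - availability) * minutesPerYear);
    }
}
```

Even the strongest SLA only bounds how often a provider fails, not whether it can.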

So I think number one, it’s just encouraging a culture that understands that when you’re building on a cloud, you are building on some sort of centralized capacity, some centralized capability, and that it still can fail. As soon as you bring that into the light and people understand it, the next question is, “How do I get around that?” And we’ve done this in computing many, many times. To get around it, you have to come up with architecture patterns that can properly deal with things like segregation of data and failures, right?

So could I build an application architecture that maybe uses S3 and potentially stripes data across something like Azure, or across multiple regions in S3? If you had a mentality that something like S3 can fail, you suddenly push that concern into the app architecture itself. And it does require that a developer starts to think that way, where they say, “Yeah, I’m going to start striping data across multiple providers or multiple regions.” And that gets rid of these sorts of situations.
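As a sketch of that mindset, here is a tiny replicating store that fans every write out to multiple independent providers and reads from whichever one answers. The ObjectStore interface and its implementations are hypothetical stand-ins for, say, an S3 client and an Azure Blob client:

```java
import java.util.List;

// Hypothetical abstraction over any object store (S3, Azure Blob, etc.).
interface ObjectStore {
    void put(String key, byte[] data);
    byte[] get(String key) throws Exception;
}

// Stripes/replicates data across providers so that losing one provider
// (or one region) doesn't make the data unavailable.
final class ReplicatingStore implements ObjectStore {

    private final List<ObjectStore> providers;

    ReplicatingStore(List<ObjectStore> providers) {
        this.providers = List.copyOf(providers);
    }

    @Override
    public void put(String key, byte[] data) {
        // A real implementation would handle partial failures (retry,
        // reconcile); the sketch just writes to every provider.
        for (ObjectStore store : providers) {
            store.put(key, data);
        }
    }

    @Override
    public byte[] get(String key) throws Exception {
        Exception last = null;
        for (ObjectStore store : providers) {
            try {
                return store.get(key); // first provider that answers wins
            } catch (Exception e) {
                last = e; // fall through to the next provider or region
            }
        }
        throw last != null ? last : new IllegalStateException("no providers configured");
    }
}
```

The application-level decision Schuller describes, “assume S3 can fail,” turns into a one-line choice of which providers to pass in.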

I think part of the reason that we saw the S3 failure happen and affect so many different properties is that people weren’t thinking that way and they saw the number of 9s in the SLA and said, “Oh, I’ll be fine,” but it’s just not the case. So you have to take that into consideration in the app architecture itself.

Cameron McKenzie: Apprenda is both a leader and an advocate in the world of Kubernetes. What is the role that Kubernetes currently plays in the world of orchestrating Docker containers and cloud native architectures?

Sinclair Schuller: So when we looked at kind of the world around containers, a couple things became very clear. Configuration, scheduling of containers, making sure that we can do container placement, these all became important things that people cared about, and certain projects evolved, like Docker Swarm and Kubernetes, to tackle that in a common way.

So when we look at something like Kubernetes as part of our architecture, the goal was let’s make sure that we’re picking a project and working against a project that we believe has the best foundational primitives for things like scheduling, for things like orchestration. And if we can do that, then we can look at the set of concerns that surround that and move up the application stack to provide additional value.

Now in our case, by adopting Kubernetes as the core scheduler for all the cloud native workloads, we then looked at that and said, “Well, what’s our mission as a company?” To bring cloud into the enterprise, right? Or to the enterprise. And what’s the gap between Kubernetes and that mission? The gap is dealing with existing applications, dealing with things like Windows because that wasn’t something that was native to Kubernetes. And we said, “Could we build IP or attach IP around Kubernetes that can solve those key concerns that exist in the world’s biggest companies as they move to cloud?”

So for us, we went down a couple very specific paths. One, we took our Windows expertise and built Windows container and Windows node support in Kubernetes and contributed that back to the community, something I would like to see getting into production sometime soon. Number two, we surrounded Kubernetes with a bunch of our IP that focuses on dealing with the monolith problem and decomposing monoliths into microservices and having them run on a cloud native platform. So for us, it was extending the Kubernetes vision beyond container orchestration, container scheduling, and placement, and tackling those very specific architectural challenges across platform support and the ability to run and support existing applications side by side with cloud native.

Cameron McKenzie: To hear more about Sinclair’s take on the current state of cloud native computing, you can follow him on Twitter @sschuller. You can also follow Apprenda and if you’re looking to find out more information on cloud native computing, you can always go over to the Cloud Native Computing Foundation’s website, cncf.io.

You can follow Cameron McKenzie on Twitter: @cameronmcnz

