A couple of years ago, Intel invited me to a press luncheon to talk about how great their new chips were. They had new chips that were faster and used less power, and they were selling like hot cakes. The food was good and the new machines were smaller and ran a few minutes longer on batteries than last year’s models. Almost in passing I heard one of their product managers describing a secret operating system buried on enterprise computers, called the Management Engine (ME). They called it a feature, and all I could see was a hidden threat.
They said it only ran on “enterprise computers,” and I remember sleeping a little better at night imagining that this little gremlin did not run inside my consumer laptop at the time. I just found out they have a new test for this hidden threat that can determine if your computer is infested with this incurable disease. Yep, I have it. You probably have it too, along with most of the cloud servers keeping trillions of dollars of enterprise apps secure.
They have also released a so-called cure for the symptoms, which is thus far only available from Lenovo. But it is not really a cure in the way an antibiotic eradicates an infection. It’s more like those $50,000-a-year cocktails that manage AIDS but leave the host at risk of transmitting it to others. The fundamental problem is that Intel has thus far not shared much about how this hidden threat works, or whether it can in fact be eradicated. They have just patched some of the vulnerabilities, which for now are probably not a great danger to cloud apps, since someone must physically insert a USB drive to compromise them.
All systems are vulnerable
The fundamental problem in other words is not the news that someone found a vulnerability and patched it. The problem is that Intel has relied on a very flawed theory that something running on virtually every enterprise and cloud server out there is protected because no one outside of Intel knows how it works. This was the same theory that the utility industry relied upon until the US and Israel figured out how Stuxnet could be used to take out the Iranian nuclear program and perhaps an Iranian power plant. But once this attack was shared, all the power infrastructure in the world became vulnerable to Stuxnet’s progeny.
I am sure Intel’s greatest minds did a great job of identifying and mitigating every vulnerability they could dream up at the time. So did the folks who developed SSL, and none of the craftiest minds in the security industry recognized that hidden threat until after the code had been in the public domain for two years.
One of the key developments over the last couple of years has been a move towards DevSecOps, which assumes that all code has vulnerabilities. It’s just that no one has figured out how to exploit them at the time of deployment. Therefore, a mechanism must be in place to quickly and automatically find and update these systems smoothly when a new patch is required. DevSecOps breaks down when it relies on third parties like Lenovo, Dell, and HP to tune the update to their particular configurations.
It’s not clear how bad this whole episode will end up being for Intel. Thus far, they have done a pretty good PR job of suggesting that these attacks requiring physical access are not a big deal. This whole thing might blow over by the time they release a new series of chips that leave the little demon out.
The keys to the hidden threat
But then again, the final impact of Intel’s foray into security by obscurity will have to get past the test of the NSA and Joe. The NSA, because it seems credible that Intel decided it was important to share such details with the agency to protect American cyber security. We all know that the NSA has the best resources and commitment to protecting these secrets from foreign states, angry contractors, and Wikileaks, so they obviously will never let the secret get out.
No, the real threat is probably someone like Joe. ME runs in a kind of always-on mode that allows it to communicate on a network even when the power is off, as long as the computer is plugged in. It is protected by an encryption key. I would like to imagine that the only key to all the Intel computers in the world is locked inside a secret vault with laser beams protecting it from Mission Impossible-style attacks.
It would not be surprising if the reality was much more mundane. It’s probably on a little security token that Joe took home one day to debug a few components of the ME server. Joe is probably well-meaning, but he made a copy of this key one day when management was pushing him to meet an unrealistic software delivery target. Joe’s a good guy and would never do anything deliberately to hurt the company, much less all Intel users around the world.
Unfortunately for the rest of us, Joe has been trading Bitcoins lately. No one will come looking for the key to all the Intel computers when they penetrate his workstation trying to steal his Bitcoin wallet. But some nefarious hacker may see this discovery as a divine omen of his destiny to create a business around penetrating the most sensitive cloud servers in the world by exploiting this hidden threat. And maybe, just maybe, if Joe happens to be reading this, he’ll have the foresight to delete the keys before it’s too late.
The following is a transcript of the conversation TheServerSide’s Cameron McKenzie had with Ivar Grimstad about hot topics in the Java ecosystem, with an emphasis on MVC 1.0 and the new security specification, JSR-375.
Getting people talking about MVC 1.0 and JSR-375
Cameron McKenzie: TheServerSide was really lucky to catch up with Ivar Grimstad earlier this year. These days he’s evangelizing a couple of what I think are pretty important topics. One is the new MVC framework, and the other one is Java security.
The interesting thing, though, is that despite how important these specifications are, MVC and JSR-375 just don’t quite get the headlines like, say, microservices and containers do. So I wanted to know from Ivar, what are the big things that people need to know about the new MVC specification and JSR-375.
Ivar Grimstad: If I take MVC first, we had a lot of attention around it a couple of years ago when the spec was part of the EE platform. And there was some noise about it when Oracle took it out. And then, happily, I was in the position that I could become the lead of that specification, so I took it over from Oracle and kept it going. I also brought on Christian Kaltepoth. Since we were the two most active members of that spec, we were the best guys to take it further.
And there has been a little bit of silence around MVC, and we don’t get much attention anymore. The community really wanted MVC when it started and then they kind of moved away towards microservices and containers.
So while we are kind of in the backwater of the cool technology, MVC is still something I think will be used. We get a lot of community response when I tweet or blog or say anything about it. We have a lot of contributors on the mailing list, and it’s doing fine.
Cameron McKenzie: Now, one of the things about MVC 1.0 is the fact that it seems to work really well with microservices. And I can see it being used heavily to create UIs for container-based applications. Is that where you see the focus being?
Ivar Grimstad: I also think it’s going to be used a lot in more enterprise, in-house applications, but that’s not the sexy topic that attracts the audience at conferences.
MVC 1.0 and JAX-RS
Cameron McKenzie: So in your eyes, what is it that makes MVC 1.0 so special?
Ivar Grimstad: Well, the most important thing, the way I see it, is that it’s built on top of JAX-RS, so if you’re using JAX-RS to create your REST endpoints, the transition to also add some web interfaces to your applications becomes easy. Most REST applications also have some kind of admin tool going on along with it. With MVC 1.0 we can actually build on the exact same technology used by the REST application, because with MVC we just add some flavors to JAX-RS and then we’re good to go.
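As a rough sketch of what “adding some flavors to JAX-RS” looks like in practice, here is a hypothetical MVC 1.0 controller. The class, path and view names are invented, and the exact annotation packages varied between spec drafts, so treat this as an illustrative fragment rather than a definitive example:

```
import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// A plain JAX-RS resource becomes an MVC controller just by adding
// @Controller and returning a view name instead of an entity.
@Controller
@Path("admin")
public class AdminController {

    @Inject
    private Models models;  // values the view template can read

    @GET
    public String dashboard() {
        models.put("userCount", 42);  // expose data to the view
        return "dashboard.jsp";       // the view to render
    }
}
```

Everything except @Controller, Models, and the returned view name is ordinary JAX-RS, which is why the transition Grimstad describes is so small.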
Cameron McKenzie: So is MVC the new UI framework for container-based applications?
Ivar Grimstad: Definitely. I mean, if you’re creating a containerized service that also has some kind of UI to it, it makes sense to use MVC. If you have developers that are on the JAX-RS platform and know Java EE, and you’re building on that infrastructure, I see MVC as a very good fit there.
Cameron McKenzie: Now, you are also an expert on JSR-375, the new security API that’s going into Java EE. What can you tell us about that?
Ivar Grimstad: This is a brand new security API for Java EE 8.
I think it’s an important specification because it fills some of the gaps left by previous versions. We introduce a common terminology, so we are all talking about the same thing when we talk about security concepts such as the authentication mechanism. We also have more application developer-managed support. So you can easily add security with annotations, and you don’t need to do any container- or vendor-specific configuration to get it up and running.
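To make that concrete, here is a hedged sketch of the kind of annotation-driven setup Grimstad is describing, using the built-in JSR-375 definitions. The datasource lookup, queries, servlet path and role names are all invented, and this declarative fragment only works inside a Java EE 8 container:

```
import javax.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;
import javax.security.enterprise.identitystore.DatabaseIdentityStoreDefinition;
import javax.servlet.annotation.HttpConstraint;
import javax.servlet.annotation.ServletSecurity;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;

// All of the security configuration lives in the application itself:
// an authentication mechanism, an identity store, and a constraint.
@BasicAuthenticationMechanismDefinition(realmName = "adminRealm")
@DatabaseIdentityStoreDefinition(
    dataSourceLookup = "java:comp/DefaultDataSource",
    callerQuery = "SELECT password FROM users WHERE name = ?",
    groupsQuery = "SELECT role FROM user_roles WHERE name = ?")
@WebServlet("/admin")
@ServletSecurity(@HttpConstraint(rolesAllowed = "admin"))
public class AdminServlet extends HttpServlet { }
```

No vendor-specific deployment descriptor is needed, which is exactly the portability point the spec is making.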
Standardized security with JSR-375
Cameron McKenzie: Now, when I read the JSR-375 spec, I kinda say to myself, you know, “Really? Have we not standardized a lot of this stuff already?” I guess a lot of the stuff like custom user registry APIs and how we connect to user repositories, stuff that’s been managed by the vendor in the past. So the developer really hasn’t had to think about it. But, yeah, I mean, do you not get that impression, “Jeez, how did we get to 2017 and not have this stuff standardized already?”
Ivar Grimstad: Yeah, that’s true. And we have the same feeling. But now it’s there, and that’s a good thing. And it’s definitely a good foundation to build upon.
Cameron McKenzie: So what is it about JSR-375, the Java security spec 1.0, that makes it so conducive to working with microservices?
Ivar Grimstad: You do the security in the application, so you don’t need to configure it from the outside. The security configuration is contained in your application.
Cameron McKenzie: So what are the big topics you see going forward into 2018?
Ivar Grimstad: Since I’m moving around in the Java EE world, I think that one of the main topics we are gonna discuss is the Java EE move to the Eclipse Foundation. And there’s also a lot of discussion already on Twitter about the naming, because they announced the name for it to be Eclipse Enterprise for Java, and people of course have opinions about that. So I think that’s gonna be discussed a lot.
Java: A curse or a blessing
Cameron McKenzie: Now, here is a question I have been asking a number of people lately. It’s this: looking back over the past six or seven years, do you think being the steward of the Java platform has been a blessing or a curse for Oracle?
Ivar Grimstad: I think they are making big money on Java, so I think it’s been pretty good for them. So I don’t think it’s been a curse. I think the handling of EE 8 in 2016 was not good, and we saw the community react to that with the Java EE Guardians and MicroProfile, which grew out of that. But the turn they have now taken to open-source things, like open-sourcing NetBeans to Apache and moving EE to the Eclipse Foundation, and also open-sourcing more of the JDK tooling, is a step in the right direction. I think it’s gonna get a positive reception.
Just prior to JavaOne, TheServerSide spoke with ZeroTurnaround’s Simon Maple about all of the things going on with Java SE 9 and the greater Java ecosystem. A couple of interesting articles resulted from the conversation, so we thought it might be worthwhile to publish the interview in its entirety.
Cameron McKenzie: There’s a million things going on in the world of Java these days. What are the topics you believe to be of the most importance when it comes to Java and Java SE 9?
Simon Maple: Well, let’s start with what’s happening in Java. There are so many interesting things happening in Java right now. Java EE is being pushed over to Eclipse, Java SE is going forward being driven first through OpenJDK, the cadence of Java SE releases is now every six months; there are some really, really interesting things happening.
If you look at Java SE 9, you’ll see there are some interesting things going on with the JDK. Obviously, it was delayed by an extra year, so it’s been three years in the making. But one of the things which JDK 9 delivers is the module system. For me, it’s something which developers aren’t really gonna get involved with too much. People who really want modularity are gonna be using OSGi or something similar. People who think, “Yeah, okay, it’s gonna be an okay idea,” aren’t necessarily champing at the bit to get it anyway. So I’m not sure people are going to be jumping at modularity.
It does have big benefits in terms of the future of Java, how we can reduce the footprint of Java, how we can develop modules that can be incubated, like HTTP/2 and things like that. So it provides a lot of promise, but it’s just not there yet. I think it’s gonna take a long time for the industry and the ecosystem to really embrace modules. Because obviously, for all the frameworks, libraries, tools and vendors, it’s gonna take time to add support before application developers can use these frameworks and tools alongside their everyday development. We believe that the adoption of Java 9 is gonna be as big as any of the previous releases, really.
Who benefits from Java modularity?
Cameron McKenzie: Now, this year at JavaOne, project Jigsaw and modularity is a huge topic. But who benefits the most from modularity? Is this something that only the tool vendors are really gonna start using or is it something that the typical everyday enterprise software developer can start using and leverage in the code that they develop?
Simon Maple: I think it actually benefits a number of different people, but not everyone in a massive way. Across the board, though, when it helps a number of different divisions, like operations or development or the business, it might be a good choice. Let’s take them one at a time. If you look at a development team, how it benefits them largely is this: if you have a large, distributed team where different developers write different components of a large application, it’s actually a great way to make sure that other teams using your components and your APIs are using them as you’d expect. This gives you much greater power to make changes to your code knowing you’re not gonna break anything for the people who use it. Because with modularity, you can effectively say, “These are the APIs I want to expose; everything else I wanna hide.”
So from a developer point of view, that’s actually really beneficial. You can also, as a developer, deliver your updates quicker, because unlike with a typical Java application that doesn’t use modularity, you can upgrade a single module at a time.
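The “expose these APIs, hide everything else” idea maps directly onto the Java 9 module descriptor. A minimal, made-up sketch of a module-info.java (the module and package names are invented):

```
// module-info.java -- module and package names are illustrative only
module com.example.orders {
    requires java.sql;               // an explicit dependency

    exports com.example.orders.api;  // the only package other teams see

    // com.example.orders.internal is not exported, so other modules
    // cannot even compile against it, let alone depend on it.
}
```

The compiler and runtime enforce the boundary, which is what gives the team the freedom to change internal code without breaking consumers.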
Let’s take Java as an example. We’ve seen huge releases in the past containing many, many things, and the reason they get pushed out by a year, six months, two years, is largely because we have been waiting for different features. So Java 8 was delayed because of lambdas, and Java 9 was delayed because of Jigsaw. In fact, Java 8 was largely also delayed because of Jigsaw, which was later pulled. So all the other benefits in Java, you can’t get hold of, because you’re constantly waiting for this one big drop. If you’re looking at something more modular, you could actually upgrade some modules without needing to upgrade the whole application at once.
So from an operations and a development point of view, and in fact from the business point of view as well, you’re much more reactive in terms of how quick you could fix bugs, how quick you can move in terms of your feature planning and things like that. So from that point of view, it’s very, very powerful for the business and very powerful to actually push your features to market. And that’s purely from the business point of view.
From the operations point of view, it’s actually quite nice. Well, it can be a pain and it can be good. It can be good because when you are dealing with individual modules, you’re far more isolated in terms of where your code is when your applications are modular. However, it can also be slightly more tricky because you have dependencies. The actual deployment can be trickier.
Modularity is not a silver bullet for everyone, but different people will individually find different value in modules.
The timing of Java SE 9
Cameron McKenzie: Now why is it that all of these announcements that pertain to Java SE 9 and the Java platform came out just before JavaOne? Is that just coincidental or is this a matter of Oracle just getting better at doing PR before the big conference?
Simon Maple: So, let’s take each announcement, because I think for each announcement there’s no one reason why they came out when they did, other than maybe lining them up for JavaOne 2017. But I think each announcement has different drivers. You know, I’m not getting any information directly from Oracle, so I’m speculating. But in my opinion, let’s look at Java EE. Obviously, Oracle had a year of drought where they weren’t talking about Java EE specs. Personally, I believe Oracle was pushed into a corner to say, “Hey, let’s progress Java EE 8 and 9.” If you actually look at what happened over the last year of Java EE in terms of delivering features to 8, a lot of it has been pushed over to 9, and personally I think moving Java EE to the Eclipse Foundation benefits both Oracle and Java EE, because I think it relieves Oracle of the burden that Java EE has bestowed upon them.
From that point of view, they don’t need to worry about it anymore. From the Java EE point of view and the community point of view, I think they’re gonna have a lot more ownership of Java EE, it now being part of the Eclipse Foundation, and it can now go at the speed at which the community wants to drive it. So I think it’s a move whereby everyone’s happy. Oracle doesn’t look like the bad guy anymore, they’re not going to hold it up, and the community can push it as far as they wanna push it. So it’s gonna be interesting to see over the next few years how much effort Oracle will put into supporting the projects at Eclipse, in terms of how many developers they’re gonna provide to support each of the specs, not just delivering code but pushing the specifications forward. I think that’s really gonna show how much Oracle plans to invest in Java EE, but I think it’s beneficial.
In terms of them pushing the cadence to six months now, I think there are two reasons for that. The first reason is that everyone is getting a little bit fed up with Java slipping constantly. We’ve seen releases delayed three, four, five years. Obviously, the five years…was it five or six years? That was really because of the stuff that moved at Oracle. But since Java 9 is now being pushed out and we have a module system, we can develop much faster and we can provide smaller features quicker. So it does make sense for Java, now it’s modularized, to make use of that and to say, “Right, now we’re gonna be pushing out different pieces of different modules when they’re ready. So every six months, whatever’s ready to be pushed out, let’s make it available.” So I think that’s really, really good for Java.
From the ecosystem’s side, it’s gonna be hard work. I think it’s probably gonna be harder work for the ecosystem than it is for Oracle. Because Oracle just needs to continue developing, and when a feature is ready to go into the main branch, they push it in and we’re good to go. But for the ecosystem, if we look at just next year, we’ve got Java 9 to support, and then we’re gonna have 18.3 and 18.9 to support. Java 9 is gonna be a well-supported release. Java 18.9 is gonna be a long-term support release. Java 8 is now gonna be commercially supported till 2025. So there are a large number of releases that tools are gonna have to support, and it’s no good for tools to say, “We’re only gonna support the long-term support releases,” because they’re gonna lose customers.
So they’re gonna have to support every single release of Java, frameworks the same, application servers are more likely gonna support the main long-term support versions. Libraries are gonna have to support all the versions of Java. So for the ecosystem, it’s gonna be a lot more work, a lot more testing. It’s gonna be interesting to see how they keep up particularly for those libraries and frameworks which don’t have large communities and large numbers of committers to do this work, so that’s gonna be extremely interesting. And the third thing was Oracle pushing or putting OpenJDK first. They’re obviously gonna still have their own commercial kind of support branch, which is fine.
What’s new at ZeroTurnaround?
Cameron McKenzie: Now I notice ZeroTurnaround has a booth on the exhibitors’ floor. What’s going on with ZeroTurnaround at JavaOne? What are the big things that you guys are working on? Are there any big product announcements and what are you guys doing to draw people into your booth in the exhibition hall?
Simon Maple: We are obviously working very, very hard to support Java 9 and JRebel which is gonna be a big, big talk obviously because JRebel is so deeply connected to the low-level parts of the JVM. We’re looking to release support for that very soon after the Java 9 release. Yes, we are making some big moves in and around the developmental productivity…I’m sorry, the developmental performance market. We already have XRebel as you know. So we’re gonna be making some announcements over the next week or so and it’s gonna be extremely interesting because we’re gonna be very disruptive in the performance management space. That’s gonna be extremely interesting to me over the next few weeks as well.
Cameron McKenzie: So to get more insights from Mr. Maple, you can always follow him on Twitter @sjmaple. And for that matter if you wanna learn more about some product announcements that are coming out from ZeroTurnaround, you might wanna follow them on Twitter as well, @zeroturnaround.
You can follow Cameron McKenzie on Twitter: @cameronmcnz
Serverless services may be the big trend in IT these days, but it’s still a service-full world out there, and virtually every organization is relying on third party services to keep their technology going. Now, more than ever, companies are finding it critical to choose the right partnerships. Your clients are relying on you to keep your commitments so they can keep theirs. But what practices and habits make it clear that you’re a vendor that clients can trust?
Patrick Debois, CEO of Zender TV, has definite opinions about what makes a high-quality, trustworthy serverless services vendor. Here are twelve things Patrick takes into account when selecting vendors to bring into his circle of trust. Follow these guidelines to become a better vendor for your customers—and use the same criteria to choose better partners for your ecosystem, regardless of whether you’re in the field of serverless services or not.
- Communicate about your status. Amazon learned the hard way about the importance of transparency during their first major outage. The services company was swamped with inquiries about what was going on. A simple status page would have answered many of these questions and kept tech support from being so overwhelmed.
- Monitor the other agents you depend upon. This is the key to not being blindsided when critical services are experiencing issues. You can only let your customers know what’s going on when you’re keeping your finger on the pulse of your ecosystem.
- Do a post mortem after a failure. Debois put it bluntly, “If you had a real failure, man up and describe what actually happened.” When people understand what went wrong, and how the issue is being addressed to prevent future failure, they can make better choices about managing their own risk.
- Be proactive. This could be as simple as warning customers to take action to head off issues. For example, if they are using a service outside the scope of what it was intended to handle, let them know up front that they might run into issues. Also, publish a change log with new features so customers know what’s coming up and can check other dependencies on their end.
- Expose your metrics in more detail. Even if this means revealing that your company is having trouble with a specific aspect of performance, honesty is valuable. Engineers working in your clients’ organizations want to see what’s going wrong so they don’t waste time trying to debug something on their side when the real issue is on your side.
- Keep people updated. This can be done with email or other communication. Consider what information you’d want to receive from your vendors, and use that as a guideline for what to share.
- Make it easy to get data out. This is a big deal for Debois. “Something I look for when I use a service is whether I can get data out.” Even more important, he wants to know if he can easily reproduce settings. Going back to the “factory default” and having to rebuild settings can be a huge task after a failure. However, he says it is rare to find a vendor that allows a full data dump including settings. Be the exception.
- Talk at conferences like JavaOne. Even if it’s not specifically about your services, being willing to share knowledge is a sign that you are there to help and that you care about helping users have a better experience.
- Contribute to open source. Again, this allows potential customers to see the quality of your work and your commitment to supporting best practices. They can see how you do documentation as well.
- Give users a voice. Allow the community to vote on upcoming features so that your offerings are tailored to more effectively address their concerns and needs.
- Show that you listen. Responding to all requests quickly is crucial for maintaining trust. Patrick says he is particularly impressed by organizations that “listen in” to conversations about their services on Twitter and jump in to respond in real time to ask questions and propose solutions. Having engineers that aren’t afraid to talk to people is a huge plus.
- Provide feedback to other vendors you depend on for services. Sometimes, it takes an outside voice to prompt action—even if internal engineering teams have been pushing for the change for a while. Being a good customer makes you a better vendor by helping improve the entire ecosystem.
So do you want to be a more trustworthy serverless services vendor, or just a more trustworthy vendor in general? These are twelve ideas you should take into account.
I believe Joshua Bloch said it first in his very good book “Effective Java”: static factory methods are the preferred way to instantiate objects compared with constructors. I disagree. Not only because I believe that static methods are pure evil, but mostly because in this particular case they pretend to be good and make us think that we have to love them.
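For readers who want to see the pattern under debate, here is a minimal sketch of Bloch’s static-factory style, with the constructor hidden behind named factory methods. The class is invented for illustration:

```java
// The static-factory style under discussion; Temperature is a made-up class.
final class Temperature {

    private final double celsius;

    // The constructor is private, so callers must go through a
    // descriptively named factory method.
    private Temperature(double celsius) {
        this.celsius = celsius;
    }

    public static Temperature ofCelsius(double c) {
        return new Temperature(c);
    }

    // A second "constructor" taking the same parameter type, something
    // plain overloaded constructors cannot express.
    public static Temperature ofFahrenheit(double f) {
        return new Temperature((f - 32.0) * 5.0 / 9.0);
    }

    public double celsius() {
        return celsius;
    }
}
```

Whichever side of the argument you land on, this is what is being argued about: ofCelsius and ofFahrenheit could not both be ordinary constructors, since they take the same parameter type.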
I stumbled upon this proposal by Brian Goetz for data classes in Java, and immediately realized that I too have a few ideas about how to make Java better as a language. I actually have many of them, but this is a short list of the five most important.
Earlier this year we spoke with Jim Manico of Manicode Security. It was immediately prior to Oracle OpenWorld 2017, at which Manico was delivering a JavaOne session on Java SE 9 security.
There are plenty of new tools and technologies in the latest version of the JDK to help minimize the number of Java security bugs that developers might encounter. Of course, it’s not good enough just having technologies like JEP-273 (DRBG-Based SecureRandom Implementations), JEP-290 (Filtering of Incoming Serialization Data), and the new unlimited JCE (Java Cryptography Extension) in the Java 9 specification. What’s important in terms of minimizing the number of Java security bugs that get into production is having developers that know what these various security controls do and how to use them.
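As one small, hedged example of putting a Java 9 security control to work, JEP-273 makes a DRBG-based generator requestable by its standard algorithm name. The helper class below is our own wrapper, not part of the JDK:

```java
import java.security.NoSuchAlgorithmException;
import java.security.SecureRandom;

// Requests the DRBG-based SecureRandom introduced by JEP-273.
final class DrbgDemo {

    static byte[] randomBytes(int n) {
        try {
            // "DRBG" selects a NIST SP 800-90A deterministic random
            // bit generator; the name is only recognized on Java 9+.
            SecureRandom rng = SecureRandom.getInstance("DRBG");
            byte[] bytes = new byte[n];
            rng.nextBytes(bytes);
            return bytes;
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("DRBG requires Java 9 or later", e);
        }
    }
}
```

Knowing that a control like this exists, and reaching for it instead of a home-grown generator, is exactly the developer awareness the paragraph above describes.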
Talking to Jim Manico about Security
The following is a transcript of the interview between TheServerSide’s Cameron McKenzie and Mr. Manico in which a variety of Java security topics are addressed, including how Java modularity will impact how security bugs are addressed, the shortcomings of DevOps automation tools when Java security bugs arise, and of course, insights on various Java security controls that are new in Java SE 9.
Cameron McKenzie: When it comes to enterprise Java security, what are the things that concern you and what are the things that should concern people who are doing enterprise software development with Java?
Jim Manico: The things that really concern me are the risks against Java, and Java is being attacked. To a lot of developers, to be honest with you, this is esoteric stuff. So it’s not necessarily the most exciting or sexy feature of Java, but I daresay it’s the necessary stuff. For example, there’s a new JEP that addresses serialization and allows for direct whitelist filtering of exact classes at multiple tiers within your Java application. This is not exciting stuff, right? But it’s necessary to have a secure Java application, or at least it gives you a way to address known Java risks. It’s not sexy, but it’s important.
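Here is a hedged sketch of the whitelist filtering Manico describes, using the JEP-290 ObjectInputFilter API that shipped in Java 9. The helper class and the specific filter pattern are our own choices for illustration, not a recommendation from the spec:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;

// Deserializes only explicitly whitelisted classes; everything else
// is rejected before its readObject logic can ever run.
final class SafeDeserializer {

    // Allow a handful of trusted classes; the trailing "!*" rejects
    // any class not matched by an earlier pattern.
    private static final ObjectInputFilter FILTER =
            ObjectInputFilter.Config.createFilter(
                    "java.lang.String;java.lang.Number;java.lang.Integer;!*");

    static byte[] write(Object o) {
        try (ByteArrayOutputStream bos = new ByteArrayOutputStream();
             ObjectOutputStream out = new ObjectOutputStream(bos)) {
            out.writeObject(o);
            out.flush();
            return bos.toByteArray();
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }

    static Object read(byte[] bytes) {
        try (ObjectInputStream in =
                     new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(FILTER);  // JEP-290 whitelist in action
            return in.readObject();
        } catch (IOException | ClassNotFoundException e) {
            throw new IllegalStateException("rejected or unreadable stream", e);
        }
    }
}
```

With this in place, deserializing an Integer succeeds, while an unexpected class such as ArrayList is rejected with an InvalidClassException, which is the "known Java risk" being addressed.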
Avoiding Java security bugs
Cameron McKenzie: Quite often, the enterprise Java developer just focuses on fulfilling the business requirements and doesn’t really concern themselves with the security implications of the code that they’re writing. For the code a developer would write on an everyday basis, what are some of the security concerns they should be taking into account in order to keep Java security bugs from finding their way into the code?
Jim Manico: Sometimes the use of numerics for financial processing is a lot more tricky than people would expect. Other problems arise when you’re using older-school Java technologies and you wanna provide certain kinds of controls, like escaping to stop cross-site scripting, which is not part of the core language. There are some parts of it in J2EE, and there are some frameworks that provide it, but it’s not in the core of the language. And it’s a control that I think should be made more readily available to the developer.
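To illustrate the kind of escaping control Manico says is missing from the core language, here is a deliberately naive HTML escaper. A real application should prefer a vetted library (the OWASP Java Encoder is one well-known option); this sketch only shows the idea:

```java
// Naive output escaper for HTML element content. Illustrative only;
// it handles the five classic metacharacters and nothing else.
final class HtmlEscape {

    static String escape(String s) {
        StringBuilder sb = new StringBuilder(s.length());
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }
}
```

Run against a hostile string, the markup is neutralized before it reaches the browser, which is exactly the control developers currently have to fetch from a framework rather than the JDK.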
There’s also the issue of all the cryptographic APIs. I think some of the best cryptographic APIs are not in the core of Java. They’re in different places, like Google projects. Google Tink is a new project; it’s a cryptographic API that makes the Java developer’s world of interacting with low-level cryptographic APIs easier. I’d love to see more of those kinds of APIs closer to the core. Let me rephrase that: the closer these APIs are to the core language, the more likely we’re gonna get developers to use them, right?
We’re all Java security engineers
Cameron McKenzie: Now I’m paraphrasing you a bit here, so correct me if I’m wrong. But in one of the previous sessions of yours that I attended, I remember you talking about how you felt that software developers and DevOps people should actually consider themselves security engineers nowadays. What exactly did you mean by that?
Jim Manico: You know, the world’s changing. So I’d argue that most developers, whether they believe it or not, whether they think it or not, or whether they even act on it or not, are security engineers now. Because the code that they’re writing is on the front lines of protecting organizations from data loss, financial damage, reputation damage, privacy violations, and compliance regulations and fines. So these developers and the code they write are on the front line of all those issues and more. And so they’re security engineers. It’s just a matter of whether they’re gonna act like it or not. And rather than beating a developer over the head with pen tests and having security teams run tools, you know, especially early in the maturity curve, training the developers to attack their own code really will help change things.
Later on in the maturity, developers are part of conversations at the early part of design, with existing security libraries and knowledge and controls in place, like rigorous authentication and access control services and advanced cryptographic services available to developers. And these are controls at very early stages of designing software; that’s the ideal, right? But at first, if you’re at the early stage of maturity and maybe you haven’t done application security in your organization ever before, then I think a lot of assessment just to see initially where developers are, show them exploits against their software, that’s usually a good way early on to help shake up a culture. But you know, we wanna do this way more proactively, as we get better at writing secure software.
The limits of DevOps-based security
Jim Manico: It’s a piece of the puzzle.
I think it’s a nice word but the concept is that we’re automating all aspects of the software development life cycle. Like we’re automating the build process, we’ve done that a lot of times. We’re integrating the security tests into the build life cycle at different phases of building and deploying software. We’re automating. What else are we doing? We’re automating dynamic tests, we’re automating code scan tests, we’re automating deployment of software. And as we deploy software, we run this huge batch of tests, automated unit tests and dynamic tests and static tests against our code looking for security bugs. Maybe we find security bugs and we’re gonna stop the build and not allow that code to be deployed. Maybe it’s just a warning and we ship it anyways. There are many gradients of how we do that automation.
DevOps also talks about automating, you know, different dashboards and alerts, so people who are monitoring the application in real time get better intel on security when it’s happening. These are things we’ve done for a long time in software. I think DevOps is putting a little more rigor around it, putting a nice name in front of it, and trying to, at least in my world, add more security to it with as much automation as possible.
Now the other side to that is there are some elements of application security that don’t translate really well to automation. Especially if you wanna look at a turnkey tool, it’s not always great at finding access control problems, or business logic problems, or deeper problems that maybe a pen test would find. A turnkey tool, especially a tool that’s tuned to work fast in a DevOps environment, you know, maybe a pentester will find something manually where automation may miss the problem. The dark side of DevOps is that we still need people. We can’t just automate everything. We still need people involved at some level of a deep review, I think, to really provide deeper security assurance, if that’s what you want.
Cameron McKenzie: Now tell me honestly, is the deprecation of the Java Web Browser Plug-in the greatest thing to happen to security analysts?
Jim Manico: As a programmer, I’m like, “whatever,” but as an infrastructure person who’s trying to manage a fleet of PCs or Macs or whatnot to keep an organization secure, not supporting that is usually a good thing, right? I don’t wanna trash Java, but Java on the client traditionally has not been a great thing. A lot of policies are to heavily limit how Java runs on the client. And so this is one kick in that direction, which is nice. I’m not saying client-side Java is bad, it’s just…it’s good for administrators to control clients’ JVMs as much as they can. And having rogue applets from any website just running in the browser and similar technology is not an ideal situation, right? So we wanna restrict that and manage that as best as we can.
Software security tooling
Cameron McKenzie: Now when you’re working as a security consultant and you go into an organization, what are some of the tools and the governance models and the policies that you like to see already in place as soon as you get in there?
Jim Manico: So it’s like a good authentication service is usually a good idea, right? So for a developer, especially as your organization matures, it’s really…rather than having every developer recode in some way parts of an authentication service, having or using a rigorous one that all developers can use in a standard way makes that front gate of your application easier to lock down. And the second layer I look at is, of course, access control. Having a good access control service and series of methodologies and database driven rules and configuration capability, all that good stuff for really good horizontal, detailed, permission-level access control in a way that all developers of your team can leverage, that’s a good idea.
You know, we also wanna be able to build user interfaces in a secured fashion. Whatever framework that we’re using, we wanna understand how to keep that locked down via scripting. And we also wanna make sure developers understand, you know, what’s proper data flow through a client, what we should and should not be storing on a client long term. What kind of logic should and should not run on a client? What stuff should we push more server side? What shouldn’t we even do on the client? You know, how are we relating data access and user interface functionality access with the same consistent set of access control rules on both the client and the server?
You know, all these things are things that developers do day in and day out that they really need to understand how to use and get right. And if we don’t have these tools available, if we’re like, “Yo, developer, you know, welcome to the team, go figure out access control now,” you know, good luck with that. That’s such a critical area of your software that it’s not something you should just casually put into your application. It should be extremely intentional.
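To make the idea of a shared access control service concrete, here is the simplest possible sketch: one standard entry point that every developer on the team calls, instead of scattering ad hoc role checks through the code. The names here are hypothetical, and a real service would be database-driven and far richer, as Jim describes:

```java
import java.util.Map;
import java.util.Set;

// Hypothetical sketch of a centralized access control service: one standard
// check that all developers use, rather than per-developer ad hoc logic.
public class AccessControlService {
    // role -> permitted actions; in a real system these rules would be
    // database-driven and configurable.
    private final Map<String, Set<String>> grants;

    public AccessControlService(Map<String, Set<String>> grants) {
        this.grants = grants;
    }

    public boolean isAuthorized(String role, String action) {
        return grants.getOrDefault(role, Set.of()).contains(action);
    }

    // Fail closed: deny unless a rule explicitly allows the action.
    public void enforce(String role, String action) {
        if (!isAuthorized(role, action)) {
            throw new SecurityException(role + " may not " + action);
        }
    }
}
```

The key property is that the default is deny: a role with no grants gets nothing, which is what makes the "front gate" easy to reason about.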
Cameron McKenzie: And, you know, there are a whole bunch of hot topics rising in the Java ecosystem. There’s lots of talk about containers and microservices and coding. But security is important. And if you wanna learn about security, there’s no better speaker to learn from than Jim Manico.
In our series on cloud native computing, TheServerSide spoke with a number of experts in the field, including a number of members of the Cloud Native Computing Foundation. The following is the transcription of the interview between Cameron McKenzie and Apprenda’s Sinclair Schuller.
An interview with Apprenda’s Sinclair Schuller
How do you define cloud native computing?
Cameron McKenzie: In TheServerSide’s quest to find out more about how traditional enterprise Java development fits in with this new world of cloud native computing that uses microservices and Docker containers and Kubernetes, we tracked down Sinclair Schuller. Sinclair Schuller’s the CEO of Apprenda. He’s also a Kubernetes advocate and he sits on the governing board of the Cloud Native Computing Foundation.
So the first thing we wanted to know from Sinclair was, well, how do you define cloud native computing?
Sinclair Schuller: Great question. I guess I’ll give you an architecturally-rooted definition. To me, a cloud native application is an application that has an either implicit or explicit capability to exercise elastic resources that it lives on and that has a level of portability that makes it easy to run in various scenarios, whether it be on cloud A or cloud B or cloud C. But in each of those instances, its ability to understand and/or exercise the underlying resources in an elastic way is probably the most fundamental definition I would use. Which is different than traditional non-cloud native applications that might be deployed on a server. Those have no idea that the resources below them were replaceable or elastic, so they could never take advantage of those sorts of infrastructure properties. And that’s what makes them inherently not scalable, inherently not elastic and so on.
Cameron McKenzie: Now, there’s a lot of talk about how the application server is dead, but whenever I look at these environments that people are creating to manage containers and microservices, they’re bringing in all of these tools together to do things like monitoring and logging and orchestration. And eventually, all this is going to lead to some sort of dashboard that gives people a view into what’s happening in their cloud native architecture. I mean, is the application server really dead or is this simply going to redefine what the application server is?
Sinclair Schuller: I think you’re actually spot-on. I think this whole “application server is dead” thing is, unfortunately, a consequence of cheesy marketing where new vendors try to reposition old vendors. And it’s fair, right? That’s just how these things go, but nothing about even the old app server is dead. In many cases they’ll still deploy their apps to something like Tomcat sitting in a container. So that happens, so that hasn’t gone away per se, even if you’re building new microservices.
Has the traditional heavyweight app server gone away? Yes. So I think that has fallen out of favor. Take the older products like a WebLogic or something to that effect, you don’t see them used in new cloud native development anymore. But you’re right, what’s going to happen, and we’re seeing this for sure, is there’s a collection of fairly loosely coupled tooling that’s starting to surround the new model and it’s looking a lot like an app server in that regard.
This is a spot where I would disagree with you: I don’t know if there’s going to be consolidation. But certainly, if there is, then what you end up with is a vendor that has all the tools to effectively deliver now a cloud native app server. If there isn’t consolidation and this tooling stays fairly loosely coupled and fragmented, then we have a slightly better outcome than the traditional app server model. We have the ability to best of breed piecemeal tooling the way we need it.
Cloud native computing and UI development
Cameron McKenzie: I can develop a “hello world” application. I can write it as a microservice. I can package it in a Docker container and I can deploy it to some sort of hosting environment. What I can’t do is I can’t go into the data center of an airline manufacturer and break down their monolith, turn it into microservices, figure out which ones should be coarse grain microservices and figure out which ones should be fine grain microservices. I’ve no idea how many microservices I should end up with and once I’ve done all that, I wouldn’t know how to orchestrate all of that live in production.
Apprenda’s role in advancing cloud native computing
How do you do that? How do organizations take this cloud native architecture and this cloud native infrastructure and scale?
Sinclair Schuller: Yeah, so you actually just described our business. Effectively, what we notice in the market is that if you take the world’s largest companies that have been built around these monolithic applications, they have the challenge of “how do we decompose them, how do we move our message forward into a modern era? And if we can, of course, some applications don’t need that, how do we at least run them in a better way in a cloud environment so that we can get additional efficiencies, right?”
So what we focused on is providing a platform for cloud native and also ensuring that it provides, for lack of a better term, era-bridging capabilities. Now, in Apprenda, the way we built it, our IP will allow you to run a monolith on the platform, and we’ll actually instrument changes into the app and have it behave differently so that it can do well on a cloud-based infrastructure environment, giving that monolith some cloud architecture elements, if you will.
Now, why is that important? If you can do that and you can quickly welcome a bunch of these applications onto a cloud native platform like this and remove that abrupt requirement that you have to change the monolith and decompose it quickly into who knows how many microservices, to your point, it affords a little bit of slack so the development teams in those enterprises can be more thoughtful about that decision. And they can choose one part of the monolith, cleave it off, leverage that as an actual pure microservice, and still have that running on the same platform and working with the now remaining portion of the monolith.
By doing that, we actually encourage enterprises to accelerate the adoption of a cloud native architecture since it’s not such an abrupt required decision and such an abrupt change to the architecture itself. So for us, we’re pretty passionate about that step. And then second, there’s the question of how do I manage tons of these all at once? The goal of a good cloud abstraction like Apprenda, in my opinion, a good platform that’s based on Kubernetes, is to make managing 10, 1,000 or N number of microservices feel as easy as running 1 or 2.
And if you can do that, if you can turn running 1 or 2 into kind of the M.O. for running just tons of microservices, you remove that cost from the equation and make it easier for enterprises to digest this whole problem. So we really put a lot of emphasis on those two things. How do we provide that bridging capability so that you don’t have to have such an abrupt transition and you can do so on a timeline that’s more comfortable to you and that fits what you need, and also deal with the scale problem?
Ultimately the only way that scale problem does get solved, however, is that a cloud platform has to truly abstract the underlying infrastructure resources and act as a broker between the microservices tier and those infrastructure resources. If it can do that, then scaling actually becomes a bit trivial as a consequence of a properly architected platform.
Cameron McKenzie: Now, you talked about abstractions. Can you speak to the technical aspects of abstracting that layer out?
Sinclair Schuller: Yeah, absolutely. So there are a couple of things. One is if you’re going to abstract resources, you typically need to find some layer that lets you project a new standard or a new type of resource profile off the stack. Now, what do I mean by that? Let’s look at containers as an example.
Typically I would have the rigid structure of my infrastructure, like this VM or this machine has this much memory, it has one OS instance, it has this specific networking layout. And now if I want to actually abstract it, I need to come up with a model that can sit on top of that, give the app some sort of meaningful capacity, and divvy it up in a way that is no longer specifically tied to that piece of infrastructure. So containers take care of that, and we all understand that now.
But let’s look at subsystems. Let’s say that I’m building an application and we’ll start with actually a really trivial one. I have something like logging, right? My application logs data to disk, usually dumping something into a file someplace. If I have an existing app that’s already doing that, I’m probably writing a bunch of log information to a single file. And if I have 50 copies of my application across 50 infrastructure instances, I now have 50 log files sitting around in the infrastructure who knows where. And as a developer, if I wanted to debug my app, I have to go find all of that.
With Apprenda, what we focus on is doing things like, “Well, how can we abstract that specific subsystem? How can we intervene in the logging process in a way that actually allows us to capture log information and route it to something like a decentralized store that can aggregate logs and let you parse through them later?” So for us, whenever we think about doing this as a practical matter, it’s identifying the subsystems in an app architecture, like logging, like compute consumption, which the containers take care of, like identity management, and actually intercepting and enhancing those capabilities so that they can be dealt with in a more granular and more affordable way.
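As a rough illustration of what intercepting the logging subsystem means, here is a java.util.logging Handler that routes every record to a shared store instead of letting each of the 50 instances write its own local file. This is only a sketch of the idea, not Apprenda's actual mechanism; the in-memory list stands in for a real log aggregator:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.logging.Handler;
import java.util.logging.Level;
import java.util.logging.LogRecord;
import java.util.logging.Logger;

// Sketch: instead of 50 instances producing 50 scattered log files, a
// custom Handler forwards every record to one central store. The static
// in-memory list is a stand-in for a real decentralized log aggregator.
public class CentralLogHandler extends Handler {
    static final List<String> CENTRAL_STORE = new ArrayList<>();

    @Override public synchronized void publish(LogRecord record) {
        CENTRAL_STORE.add(record.getLevel() + " " + record.getMessage());
    }
    @Override public void flush() {}
    @Override public void close() {}

    // Attach the handler to a logger, replacing its default local output.
    public static Logger instrumented(String name) {
        Logger logger = Logger.getLogger(name);
        logger.setUseParentHandlers(false);   // don't also write to the console/file
        logger.addHandler(new CentralLogHandler());
        logger.setLevel(Level.INFO);
        return logger;
    }
}
```

The application code keeps calling the same logging API; only the routing underneath changes, which is the essence of abstracting a subsystem.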
Preparing for cloud computing failures
Cameron McKenzie: Earlier this year we saw the Chernobyl-esque downfall of the Amazon S3 cloud and it had the ability to pretty much take out the internet. What type of advice do you give to clients to ensure that if their cloud provider goes down that their applications don’t go out completely?
Sinclair Schuller: I think the first part of that is culturally understanding what cloud actually is and making sure that all the staff and the architects know what it is. In many cases, we think of cloud as some sort of, like, literally nebulous and decentralized thing. Cloud is actually a very centralized thing, right? We centralize resources among a few big providers like Amazon.
Now, what does that mean? Well, if you’re depending on one provider, like Amazon, for storage through something like S3, you can imagine that something could happen to that company or to that infrastructure that would render that capability unavailable, right? Instead what I think happens is that culturally people have started to believe that, you know, cloud is foolproof, it has 5-9s and 9-9s, pick whatever SLA you want, and they rely on that as their exclusive means for guaranteeing availability.
So I think number one, it’s just encouraging a culture that understands that when you’re building on a cloud, you are building on some sort of centralized capacity, some centralized capability, and that it still can fail. As soon as you bring that into the light and people understand that, then the next question is, “How do I get around that?” And we’ve done this in computing many, many times. To get around that, you have to come up with architecture patterns that can properly deal with things like segregation of data and failures, right?
So could I build an application architecture that maybe uses S3 and potentially stripes data across something like Azure or multiple regions in S3? If you have the mentality that something like S3 can fail, you suddenly push that concern into the app architecture itself, and it does require that developers start to think that way and say, “Yeah, I’m going to start striping data across multiple providers or multiple regions.” And that gets rid of these sorts of situations.
I think part of the reason that we saw the S3 failure happen and affect so many different properties is that people weren’t thinking that way and they saw the number of 9s in the SLA and said, “Oh, I’ll be fine,” but it’s just not the case. So you have to take that into consideration in the app architecture itself.
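The replication idea Sinclair describes can be sketched in a few lines: write each object to several independent backends so one provider's outage doesn't make the data unavailable. `BlobStore` and `InMemoryStore` here are hypothetical stand-ins for an S3 bucket, an Azure container, or a second S3 region:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

// Sketch of provider-redundant storage: each put goes to every backend,
// each get falls back across backends until one answers.
interface BlobStore {
    void put(String key, byte[] data);
    Optional<byte[]> get(String key);
}

// In-memory stand-in for a real provider such as S3 or Azure Blob Storage,
// with a switch to simulate an outage.
class InMemoryStore implements BlobStore {
    private final Map<String, byte[]> blobs = new HashMap<>();
    private boolean available = true;

    public void setAvailable(boolean available) { this.available = available; }
    @Override public void put(String key, byte[] data) {
        if (available) blobs.put(key, data);
    }
    @Override public Optional<byte[]> get(String key) {
        return available ? Optional.ofNullable(blobs.get(key)) : Optional.empty();
    }
}

class ReplicatedStore implements BlobStore {
    private final List<BlobStore> backends;
    ReplicatedStore(List<BlobStore> backends) { this.backends = backends; }

    @Override public void put(String key, byte[] data) {
        backends.forEach(b -> b.put(key, data));  // write to every provider
    }
    @Override public Optional<byte[]> get(String key) {
        for (BlobStore b : backends) {            // first provider that answers wins
            Optional<byte[]> hit = b.get(key);
            if (hit.isPresent()) return hit;
        }
        return Optional.empty();
    }
}
```

This is the sense in which the concern moves into the app architecture: the application talks to `ReplicatedStore`, and the SLA of any single provider stops being a single point of failure.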
Cameron McKenzie: Apprenda is both a leader and an advocate in the world of Kubernetes. What is the role that Kubernetes currently plays in the world of orchestrating Docker containers and cloud native architectures?
Sinclair Schuller: So when we look at kind of the world around containers, a couple of things became very clear. Configuration, scheduling of containers, like making sure that we can do container placement, these all became important things that people cared about, and certain projects evolved, like Docker Swarm and Kubernetes, to address them in a common way.
So when we look at something like Kubernetes as part of our architecture, the goal was let’s make sure that we’re picking a project and working against a project that we believe has the best foundational primitives for things like scheduling, for things like orchestration. And if we can do that, then we can look at the set of concerns that surround that and move up the application stack to provide additional value.
Now in our case, by adopting Kubernetes as the core scheduler for all the cloud native workloads, we then looked at that and said, “Well, what’s our mission as a company?” To bring cloud into the enterprise, right? Or to the enterprise. And what’s the gap between Kubernetes and that mission? The gap is dealing with existing applications, dealing with things like Windows because that wasn’t something that was native to Kubernetes. And we said, “Could we build IP or attach IP around Kubernetes that can solve those key concerns that exist in the world’s biggest companies as they move to cloud?”
So for us, we went down a couple very specific paths. One, we took our Windows expertise and built Windows container and Windows node support in Kubernetes and contributed that back to the community, something I would like to see getting into production sometime soon. Number two, we surrounded Kubernetes with a bunch of our IP that focuses on dealing with the monolith problem and decomposing monoliths into microservices and having them run on a cloud native platform. So for us, it was extending the Kubernetes vision beyond container orchestration, container scheduling, and placement, and tackling those very specific architectural challenges across platform support and the ability to run and support existing applications side by side with cloud native.
Cameron McKenzie: To hear more about Sinclair’s take on the current state of cloud native computing, you can follow him on Twitter @sschuller. You can also follow Apprenda and if you’re looking to find out more information on cloud native computing, you can always go over to the Cloud Native Computing Foundation’s website, cncf.io.
You can follow Cameron McKenzie on Twitter: @cameronmcnz
The ZK Team has just announced the release of ZK 8.5. The new release takes the core ZK 8 philosophy, “Stay true to your Java roots and effortlessly keep up with front-end innovations,” and continues to push the innovation envelope: a major improvement to MVVM data binding on the client side enlivens pure HTML content with minimal effort. The Fragment component, in combination with service workers, allows for caching and managing offline user data and makes building Progressive Web Apps (PWAs) easier. Other exciting features include 24 freshly baked modern themes, built-in WebSocket support, a splitlayout component, a smooth frozen component and more. Let’s take a look at some of the most interesting new features:
1. Built-in Websocket
WebSocket is a communication protocol standardized by the IETF as RFC 6455. It provides a full-duplex communication channel over a single TCP connection. Once the WebSocket connection has been established, all subsequent messages are transmitted over the socket rather than as new HTTP requests/responses. Therefore, it lowers handshake overhead and greatly reduces the number of HTTP requests when there are many small updates, compared to AJAX server push. A server can actively send data to a client without a client request, so it’s also simpler than Comet. ZK now supports not only a WebSocket-based update engine but also WebSocket-based server push.
To enable WebSocket in ZK 8.5, all you need to do is add the following <listener> to your zk.xml:
<listener>
    <listener-class>org.zkoss.zkmax.au.websocket.WebSocketWebAppInit</listener-class>
</listener>
2. 24 Freshly baked themes
ZK 8.5 comes with a new theme called Iceblue as well as another 23 brand new, modern and elegant themes. To apply the desired theme, simply set the preferred theme in zk.xml:
<library-property>
    <name>org.zkoss.theme.preferred</name>
    <value>breeze</value>
</library-property>
You can also include multiple themes and allow each end user to set his or her preferred theme in your application with cookies, like:
Themes.setTheme(Executions.getCurrent(), "custom");
Executions.sendRedirect("");
3. New Client-side Data Binding Component: Fragment
Fragment is a special component that turns a static HTML page into a dynamic one. It can bind an HTML snippet to data from a ViewModel using ZK data binding syntax. With this new component, you can create a custom HTML widget that’s not part of the standard ZK components, e.g. custom layouts or custom components, and bind it to data from a ViewModel.
Behind the scenes, Fragment is a data container and renderer. It synchronizes data between itself and the server according to the data binding syntax, and it stores the data from the server as JSON objects on the client side. Inside a Fragment, the specified data binding syntax actually binds the JSON objects, and HTML elements are rendered based on them. This also reduces the server’s tracking nodes for data binding, since data is tracked on the client side.
4. Source Maps for WPD Files
ZK 8.5 adds source map support for its WPD JavaScript files, which makes debugging ZK’s client-side code easier. To turn it on, enable it alongside debug-js in zk.xml:

<client-config>
    <debug-js>true</debug-js>
    <enable-source-map>true</enable-source-map>
</client-config>
If you are interested, you can read more about ZK 8.5 New Features here.