Coffee Talk: Java, News, Stories and Opinions

July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that in this new age of serverless computing, cloud and containers, a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyways, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company internal data center? It’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization is all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron deployments or bare metal deployments or people actually deploying to bare metal and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. For example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy; you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. We tend to see that people running Cassandra on Amazon, with or without us, will move to the latest, greatest instances Amazon offers. I think the i3 class of Amazon instances is right now the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the RAS language from Azul is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones that power bare metal machines and most Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes even 2 times or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
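The kind of loop Tene is describing can be sketched as plain Java over primitive arrays: a vectorizing JIT can map the loop body to SIMD instructions with no source changes at all. The class and method names below are illustrative, not anything from Zing or Falcon:

```java
import java.util.Arrays;

// A simple loop shape that vectorizing JIT compilers can target:
// element-wise multiply-add over primitive arrays, with no branches
// or cross-iteration dependencies. On SIMD-capable CPUs, a compiler
// that recognizes the hardware can emit vector registers and
// instructions for this body; the Java source never changes.
public class VectorizableLoop {
    public static void fma(float[] a, float[] b, float[] c) {
        for (int i = 0; i < a.length; i++) {
            c[i] = c[i] + a[i] * b[i]; // candidate for vector fused multiply-add
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {10f, 10f, 10f, 10f};
        float[] c = new float[4];
        fma(a, b, c);
        System.out.println(Arrays.toString(c)); // [10.0, 20.0, 30.0, 40.0]
    }
}
```

The same bytecode runs everywhere; whether it executes as scalar or vector instructions is entirely the JIT's decision at runtime, which is the leverage Tene describes.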

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do this stuff. And we just need to bring the code to the right form; the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As a processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code, and when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that because those were close to as perfect as we could get them.
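For readers who want to see what their own JVM's collectors are doing, the standard management API exposes per-collector counts and cumulative pause time. This is a generic JDK sketch, nothing Zing- or C4-specific:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Reads each garbage collector's collection count and cumulative
// collection time from the JVM's standard management beans. The
// collector names printed depend on which GC the JVM is running.
public class GcStats {
    public static void main(String[] args) {
        // Allocate some short-lived garbage so the collectors have work to do.
        for (int i = 0; i < 100_000; i++) {
            byte[] junk = new byte[1024];
        }
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Note that these beans report totals, not individual pause durations; for pause-by-pause detail you would turn to GC logging instead.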

Cameron McKenzie: Now, one of the complaints people that use JVMs often have is the startup time. So I was wondering if there’s anything that was new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you’re thinking about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise in speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts rather than profile, learn and only optimize after it runs. And we do it with a technique that’s very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again over the first thousand operations. And that allows us to run fast code from the first transaction, or maybe the tenth transaction, instead of only from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.

To the other part of your question, about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

September 15, 2019  9:28 PM

10 Oracle Code One 2019 sessions to check out

Cameron McKenzie

As Oracle Code One 2019 kicks off in San Francisco, I hope you’ve already logged into the Oracle Open World (OOW) schedule builder and booked yourself into all of the sessions you want to attend. Unlike smaller conferences where you can easily slide into any session that has open seats, OOW tightly enforces its registration rules. If you’re not enrolled in a given session, they won’t allow you in. Furthermore, sessions with popular speakers quickly get booked to capacity, and waiting lists are prohibitively long.

But, if you’re still actively building your schedule for Oracle Code One 2019, here are a few of the sessions I recommend:

  1. Beyond Jakarta EE 8 [DEV1391]
    If you know this site’s domain name, it should come as no surprise that a primary interest of mine is what’s going on in the world of server-side enterprise development. And who could be more informed on that topic than Mark Little, Will Lyons and Ian Robinson? If the CTO of JBoss, a Senior Director of WebLogic and WebSphere’s Chief Architect can’t speak to the state of modern server-side development, I don’t know who can.
  2. Jakarta EE Community BOF [BOF4151]
    Again, as the editor of this site, I feel no compulsion to justify my interest in the topic of Jakarta EE. But beyond it being my raison d’être, session speakers Ivar Grimstad and Reza Rahman have been sources of significant insight for TheServerSide in the past, and it’s appealing to chat with them in a Birds of a Feather format.
  3. Advances in Java Security [DEV6321]
    Smart software developers pay attention to the small details, and one of the most often overlooked details is security. Security can be a dry topic, but Jim Manico’s presentation skills more than compensate. Any Manico session where he talks about Java security is recommended.
  4. Continuous Delivery with Docker and Java: The Good, the Bad, and the Ugly [DEV3737]
    This Daniel Bryant session captures all of the popular buzz words, so you’ll likely have to put yourself on a waiting list for this one. But as the industry further embraces a DevOps mindset, the ability to know how Docker and Java fit in with continuous delivery practices is a valuable asset.
  5. Cross-Platform Development with GraalVM [DEV3907]
    The GraalVM, with its ahead-of-time compilation and ability to run multiple languages, is a real game-changer. But sadly, most software developers rarely raise their periscope above the waters of the standard JVM. It should be interesting to hear Oracle’s Tim Felgentreff talk about the state of cross-platform development with GraalVM.
  6. Everything You Ever Wanted to Know About Java and Didn’t Know Whom to Ask [DEV6268]
    I’m attending this one largely because Azul’s CTO Gil Tene is involved. The people at Azul tend to be technical leaders in the industry, and I’m sure I’ll learn something about Java and the JVM that will surprise me.
  7. Open Source Java-Based Tools: Hacking on Cool Open Source Projects [DEV6544]
    I recently wrote an epic tome of an article about Java programming tools. I’m going to sit in this session to see if there were any important points that my thesis paper on the topic missed.
  8. Preventing Errors Before They Happen: The Checker Framework [TUT3339]
    It seems as though every developer wants to talk about microservices and cloud-native development, but when attendees leave this conference and go back to their daily grind, many of their clock-cycles will go to waste on troubleshooting applications. So why not learn from my fellow CodeRanch alumnus Michael Ernst about how to deal with Java exceptions before they start to present themselves in the logs? It might even be the motivation I need to write a Checker Framework tutorial in the coming quarter.
  9. Choosing the Right Java Vendor and Strategy [DEV1969]
    Speaking of CodeRanch alumni, Jeanne Boyarsky will speak on the topic of how to choose the right Java vendor and strategy. This session is of timely interest to me, as I recently revised a popular article about how to install the JDK, only to find myself talking more about the highly confusing JVM vendor market than the actual Java install process. Expect a follow-up article on which Java vendor to choose and why.
  10. Hands-on Java 11 OCP Certification Prep – BYOL [HOL1812]
    I wrote a little Java certification guide a few years ago and have often thought about reworking it for the modern market. But it would have to be updated. Maybe Scott Selikoff can give me a good idea of what would be involved in upgrading a Java 5 quick-study guide to one that covers Java 11? I have a hunch that explaining the ins and outs of modern Java interfaces to a new developer is one of the stumbling blocks.

This is by no means an exhaustive list of the sessions I’ll attend at Oracle Code One 2019, but it is a good percentage of them. If you see me there, I encourage you to say hello.

September 13, 2019  3:42 AM

How to get the most out of Oracle Code One 2019

Cameron McKenzie

If Oracle Code One 2019 is your first time at a major software conference, it will serve you well to follow some sage advice and insight from a veteran attendee of past JavaOne and Oracle Open World conferences.

The first piece of advice, which it is far too late to act upon, is to make sure you’ve got a hotel booked. I did a quick search on Expedia, and San Francisco’s Hampton Inn is listed at over $700 a night. And the true surprise isn’t the price; it’s the fact that the hotel actually has any availability. If you’re the type of person who books hotels at the last minute, I’d say you’d be lucky to find a hotel in Oakland or San Jose for a reasonable price, let alone San Francisco.

Schedule those Oracle Code One sessions

For those who have their accommodations all set, the next sage piece of conference advice is to log on to the Oracle Code One 2019 session scheduler and reserve a seat in the sessions you wish to attend. Various sessions on Eclipse MicroProfile, microservices and reactive Java are already overbooked. The longer you wait to formulate your schedule, the fewer sessions you’ll have to choose from.

When choosing sessions, I find the speaker to be a more important criterion than the topic. Most speakers have a YouTube video or two of themselves doing a presentation. Check those out to see if the speaker is compelling. An hour can be quite a long time to sit through a boring slide show. But an exciting speaker can make an hour go by in an instant, and if you’re engaged, you’re more likely to learn something.

Skip the Oracle keynotes

One somewhat contrarian piece of advice I’m quick to espouse is for attendees to skip the Oracle keynotes, especially the morning ones. That’s not to say the keynotes are bad, but it can be a hassle to get a seat if you aren’t there early enough, and you can’t always hear everything in the auditorium. A better alternative is to stream the keynote from your hotel room, or better yet, watch the video Oracle uploads to their YouTube channel while you eat lunch.

Enjoy the party

One other big piece of advice for Oracle Code One 2019: enjoy San Francisco, especially if it’s your first time in the city. It’s the smallest alpha city in the world, but it is an alpha city. There are plenty of parties, meet-ups and events you’ll be invited to, and it’s worth taking up any offers you manage to get. That said, keep an eye on how much gas you have left in the tank at the end of the day, because you want to be able to make it to all of your morning sessions the next day.

If it’s your first time at a major conference, I assure you that you’ll have a great time at Oracle Code One 2019. San Francisco is a great city, and the greatest minds in the world of modern software development will be in attendance with you.

September 4, 2019  4:18 PM

How to deploy a JAR file to Tomcat the right way

Cameron McKenzie

The question of how to deploy a JAR file to Tomcat drives a surprising amount of traffic to TheServerSide. It’s a strange search query, because you don’t really deploy JAR files to Tomcat.

Apache Tomcat is a servlet engine that runs Java web applications, which are packaged as web application archive files, or WARs. A WAR file is the one that’s deployed to Tomcat, not a JAR file. But, despite the fact that the question of how to deploy a JAR isn’t one that’s commonly asked, it’s worth further exploration.

WAR file deployment to Tomcat

TheServerSide has a number of tutorials on how to deploy a WAR file to Tomcat, including a Maven Tomcat deploy or a WAR file deployment with Jenkins. If that’s the actual issue that needs to be resolved, I highly recommend that you view those tutorials.

However, that’s not to say there’s no relationship at all between JAR files and applications that run on Tomcat. Frameworks such as Spring and Hibernate are packaged in JAR files, and common utilities a team might put together also get packaged as JARs. These files need to be accessible to web applications hosted on the Apache Tomcat server at runtime.

I have an inkling that when developers ask how to deploy a JAR file to Tomcat, they actually mean to ask where JAR files should be placed to ensure they are part of the Apache Tomcat classpath at runtime and subsequently accessible to their deployed apps. After all, there’s nothing worse than when you deploy an application and run into a bunch of Tomcat’s ClassNotFoundExceptions.

JAR files and Tomcat

The right place to put a JAR file to make its contents available to a Java web application at runtime is in the WEB-INF\lib directory of the WAR file in which the application is packaged. With very few exceptions, the JAR files a Java web application must link to at runtime should be packaged within the WAR file itself. This approach helps reduce the number of external dependencies a web application has, and at the same time eliminates the potential for classloader conflicts.

Sometimes there are common utilities — such as a set of JDBC drivers — that are so ubiquitously required, it makes more sense to place them directly in a Tomcat sub-folder, and not actually package them inside of a WAR.

If every application hosted on your Tomcat server uses a MySQL database, it would make sense to place the MySQL database drivers in Tomcat’s \lib directory, and not in the WEB-INF\lib directory of each WAR. Furthermore, if the database is upgraded and the JDBC drivers need to be upgraded with it, updating the JAR file in that one shared location means all of the applications will start using the same set of updated Java libraries at the same time.


A web app’s WEB-INF\lib folder and the Tomcat \lib directories are the best places to deploy JAR files in Tomcat.

A common organizational mistake is to place JAR files containing frameworks like JSF or Spring Boot in Tomcat’s \lib directory. People think that since every application deployed to Tomcat in their organization is built with Spring or JSF, it makes sense to put these JAR files in Tomcat’s \lib folder.

While this may work initially, as soon as one application needs to use an updated version of the JAR, a problem arises. If the shared JAR file is updated to a new version, all applications hosted on the Tomcat server that use that JAR file must be updated as well. This obviously creates unnecessary work and unnecessary risk, as even applications that don’t need to use the updated version must be regression tested. In contrast, if the required JAR file was packaged within the WAR file itself, you can avoid a mass migration issue.
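When you are debugging one of these version conflicts, it helps to know which classloader actually served up a class, and therefore whether it came from WEB-INF\lib or a shared Tomcat location. A minimal, standalone sketch of the inspection technique (run inside a webapp, a class from WEB-INF\lib would report the webapp's classloader, while one from Tomcat's \lib would report the shared common loader):

```java
// Reports which classloader loaded a given class. A null loader
// means the class came from the JVM's bootstrap classpath; otherwise
// the loader's class name hints at where the JAR was found.
public class WhoLoadedMe {
    public static String loaderOf(Class<?> cls) {
        ClassLoader cl = cls.getClassLoader();
        return (cl == null) ? "bootstrap" : cl.getClass().getName();
    }

    public static void main(String[] args) {
        // String ships with the JDK, so it reports the bootstrap loader.
        System.out.println("java.lang.String loaded by: " + loaderOf(String.class));
        // This class was loaded from the application classpath.
        System.out.println("WhoLoadedMe loaded by: " + loaderOf(WhoLoadedMe.class));
    }
}
```

Logging this for a framework class such as a Spring or JSF entry point quickly settles arguments about which copy of a JAR an application is really using.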

JARs deployed to the Tomcat lib directory

One problem with the Tomcat \lib directory is the fact that it includes all of the standard libraries the application server needs to implement the Servlet and JSP API. Right out of the box, that folder is filled with over 30 JAR files that are required at runtime. Plus, you don’t want to mess around with any of those required JAR files, because if any of those files are deleted or disturbed, the Tomcat server will fail to start.

Some Tomcat administrators like to address this issue with a separate directory for application JAR files. Admins can do this with a simple edit to the common.loader property in Tomcat’s catalina.properties file. For example, to have Tomcat link to JAR files in a subdirectory named \applib, you can add an entry such as "${catalina.base}/applib/*.jar" to the common.loader property:

common.loader="${catalina.base}/lib","${catalina.base}/lib/*.jar","${catalina.base}/applib/*.jar"
#Note that catalina.base refers to the Tomcat installation directory

It should be noted that since Tomcat runs on a Java installation, it also has access to any JAR file placed on the classpath of the JVM. This is known as the system classpath, and while it is a viable option for JAR files that need to be linked to by Tomcat, I wouldn’t recommend using the system classpath.

This location should only be used for resources that are used to bootstrap the JVM, or are referenced directly by the JVM at runtime. Furthermore, the system classpath is typically highly restricted, so it is unlikely that software developers or any continuous integration tools would ever have the credentials required to read or write to this folder.

Tomcat JAR deployment options

In summary, I think that when you ask how to deploy a JAR file to Tomcat, you’re really wondering how to make a JAR file available to your web applications at runtime. There are three recommended options to make this happen:

  1. Package the JAR file in the WEB-INF\lib folder of the Java web application;
  2. Place the JAR file in the \lib subfolder of the Apache Tomcat installation;
  3. Configure a folder for shared JAR files by editing Tomcat’s common.loader property in the catalina.properties file.

If you follow this advice, your JAR file deployment issues with Tomcat should disappear, and ClassNotFoundExceptions in your logs will become a thing of the past.

August 29, 2019  8:38 PM

Input validation issues open Cisco firewall vulnerability

Judith Myerson

Standard security practices are the baseline for any product, and even the most junior software developers should be aware of the minimum security requirements for any project. And yet, something as simple as a lack of input validation still plagues the industry.

For example, a firewall vulnerability (CVE-2019-1841) was found in the Software Image Management element of Cisco Digital Network Architecture (DNA) Center versions prior to 1.2.5. This vulnerability stems from an insufficient validation of user-supplied input and could allow an authenticated, remote attacker to send arbitrary HTTP requests to unauthorized internal services.
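The underlying defect class is straightforward to guard against: validate user-supplied input against an allowlist before any request is ever built from it. A minimal, generic Java sketch of the idea (the host names are hypothetical placeholders, and this is illustrative code, not Cisco's):

```java
import java.net.URI;
import java.util.Set;

// Allowlist-based validation of a user-supplied URL. Rejecting
// unexpected schemes and hosts up front prevents a caller from
// steering the application into sending arbitrary HTTP requests
// to internal services.
public class RequestValidator {
    // Hypothetical set of hosts this application is permitted to call.
    private static final Set<String> ALLOWED_HOSTS =
            Set.of("images.example.com", "api.example.com");

    public static boolean isAllowed(String userSuppliedUrl) {
        try {
            URI uri = URI.create(userSuppliedUrl);
            return "https".equals(uri.getScheme())
                    && uri.getHost() != null
                    && ALLOWED_HOSTS.contains(uri.getHost());
        } catch (IllegalArgumentException e) {
            return false; // malformed input is rejected outright
        }
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("https://api.example.com/v1/items"));        // true
        System.out.println(isAllowed("http://169.254.169.254/latest/meta-data")); // false
    }
}
```

The allowlist approach matters: blocklisting known-bad hosts is easy to bypass, while an allowlist fails closed for anything unexpected.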

Check your firewall security

Based on a CVSS V3 score of 8.1, this firewall vulnerability is ranked as high in severity, and also scores high in the confidentiality and integrity categories. An attacker could read or change the firewall rules table, and/or discover which internal services are protected on a firewall host.

A hacker could bypass Cisco’s high-end next-generation firewalls (NGFWs) in an attack. These threat-focused NGFWs provide advanced threat detection and remediation along with all the functions of a traditional NGFW. The firewall uses network and endpoint event correlation to help detect evasive or suspicious network activity, and unified policies help reduce threat complexity. However, an attacker could use Cisco DNA Center to bypass the firewall and modify its table of firewall rules.

Specifically, the bypass vulnerability is a weakness caused by firewall implementation and configuration that a hacker could exploit to attack the trusted network from either outside or inside the firewall.

The extent to which this Cisco firewall vulnerability can be exploited depends on three things:

  • the overall firewall technology;
  • the firewall’s configuration; and
  • the complexity of the firewall’s implementation.

A proxy server is more vulnerable to firewall attacks compared to higher-level firewalls because it’s limited to providing content caching and preventing direct connections from the outside.

How to eliminate a firewall vulnerability

To overcome this limitation, a stateful inspection firewall was designed to allow or block traffic based on state, port and protocol. This type of firewall doesn’t detect unnecessary open ports, isn’t closed when it’s not in use and includes open ports that are hidden by an operating system by default.

Higher up in the firewall evolution is a unified threat management firewall. This type of firewall includes stateful inspection and allows the administrator to set up loose coupling with intrusion prevention, antivirus software and other services.

Next-generation firewalls do more than simple packet filtering and stateful inspection. They block advanced malware and application-layer attacks, and automate security operations to save time in an enterprise network. If your budget allows, consider a move to a next generation firewall to help avoid security failures.

For the Cisco DNA Center firewall vulnerability, service isn’t denied. It continues to run while the attacker gains access to internal services. Cisco DNA Center version 1.2.10 includes vulnerability fixes and is available.

August 28, 2019  9:03 PM

Use the HSTS header for secure communications across networks

Judith Myerson

It should always be a top priority for any developer to secure and encrypt communications across the network. Along these lines, the performance overhead of encryption and ensuring confidentiality is relatively minor. I’d go so far as to say the use of SSL on all HTTP-based traffic should be a universal requirement.

That’s where HTTP Strict Transport Security (HSTS) comes in. The HSTS header can ensure that all communications with your web server are secure.

HSTS parameters

The HSTS header is used to force the server and the browser to communicate over HTTPS. The contract HSTS sets out remains in effect based on the value of the required max-age directive, which is specified in seconds and is commonly set to one day, one month or one year. The browser refreshes the policy each time it receives the header, and the policy expires once max-age seconds have elapsed since the header was last seen.

The HSTS header can also be used to enforce HTTPS use across subdomains as well, which you can see with the following setup:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

When you use the includeSubDomains option and a user accesses the site, the browser is directed by the HSTS policy to use HTTPS for all the subdomains — inside and outside the firewalls.
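Beyond web server configuration, the header can also be set from application code on any HTTP response. Here is a minimal sketch using the JDK's built-in HttpServer; plain HTTP is used only so the example runs without certificates, whereas browsers honor HSTS only when the header arrives over HTTPS:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// Serves a trivial response that carries the HSTS header. In a real
// deployment this header would be added to HTTPS responses only.
public class HstsExample {
    // 31536000 seconds = 365 days * 24 hours * 3600 seconds (one year)
    static final String HSTS_VALUE = "max-age=31536000; includeSubDomains; preload";

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/", exchange -> {
            exchange.getResponseHeaders().add("Strict-Transport-Security", HSTS_VALUE);
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }

    public static void main(String[] args) throws Exception {
        HttpServer server = start(0); // port 0 picks a free ephemeral port
        System.out.println("Serving with HSTS header on port " + server.getAddress().getPort());
        server.stop(0);
    }
}
```

In a servlet container the equivalent is a one-line response.setHeader call in a filter, but most teams set the header at the web server or load balancer instead, as shown in the configurations below.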

Implementation configurations

Here is how HSTS can be configured for one year, with subdomain coverage and the preload directive included. All web pages must be accessible over HTTPS or they will be blocked.

In an Apache HTTP server, add to the httpd.conf file:

Header set Strict-Transport-Security
"max-age=31536000; includeSubDomains; preload"

In an Nginx server, add to nginx.conf inside the server block that handles SSL:

add_header Strict-Transport-Security
'max-age=31536000; includeSubDomains; preload';
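Once either configuration is in place, it's worth confirming that the header actually reaches clients. Here is a small sketch of a check; the helper function name and the example.com domain are my own illustrations, not standard tooling:

```shell
# Reusable filter: reads HTTP response headers on stdin and prints the
# HSTS line if the server sent one.
check_hsts() {
  grep -i '^strict-transport-security:'
}

# Typical usage, assuming curl is installed (-s silent, -I headers only):
#   curl -sI https://example.com | check_hsts
```

If the grep prints nothing, the header never made it past your server configuration.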

Given the importance users place on secure and confidential communication over the internet, organizations should explore the option of HSTS headers and enforce SSL-based communications throughout their networks.

August 23, 2019  9:28 PM

7 IT security best practices to know to prevent data breaches

JudithMyerson Profile: JudithMyerson

Imagine a hacker trying to break into a secure system. You probably envision an expert programmer who attempts a vast array of complex approaches, such as buffer overruns or distributed brute-force attacks, to breach security.

But more often than not, breaches occur because basic IT security best practices aren’t adhered to, and known exploits are left unfixed. Here are some of the most common, yet avoidable, security flaws that exist in production systems.

Overlooked IT security best practices that lead to hacks

  • Default passwords: Administrators and users don’t change default passwords immediately after a firewall OS installation. A default password can be found on a device label near where the serial number is provided or in the OS documentation.
  • Outdated firewall OS: An outdated firewall could mean that the software in use is no longer supported, the subscription for an update has expired or the device that contains or runs the outdated firewall OS hasn’t been replaced with a newer device.
  • Unencrypted HTTP connections: They are used to access the firewall and haven’t been updated to HTTPS.
  • Lack of documentation: Configuration and implementation update documentation is often unavailable or outdated. Or, a provider can fail to provide the user with the URL for an updated project during patch fixes.
  • Unpatchable firewalls: For some firewalls, patches are no longer possible, or no workarounds are available. All users must be logged off to remove unpatchable firewalls.
  • Firewall incompatibility: Some firewalls aren’t compatible with one another. Different firewall types require different configurations and implementations. Attempts to connect the firewalls may fail due to configuration and implementation incompatibility, or may result in poor performance due to unforeseen configuration issues and implementation overload.
  • Buffer issues: A program can have improper limits on how much or what type of data it accepts. When data is written past a buffer’s boundary, the program overwrites adjacent memory locations, which can result in crashes or exploitable behavior.

It’s become far too common for organizations to protect themselves against the most complex exploits, but leave themselves vulnerable to the simplest programming and infrastructure mistakes. IT security best practices can help.

Check for these issues the next time you do a security hardening of your application. No security team ever wants to be caught off guard.

August 6, 2019  8:18 PM

3 questions to ask in a microservices oriented architecture review

BobReselman BobReselman Profile: BobReselman

Microservices oriented architecture is all the rage these days. Why not? It’s hard to ignore the faster deployment rates and reduced costs that are the essential promises of microservices oriented architectures.

Yet for most companies that take the plunge, development activity is more about the transformation of an existing monolithic application into a microservices oriented architecture, which can be a source of frustration and conflict on many levels.

While greenfield microservices oriented architecture implementations can adhere to strict interpretations of current microservices design principles, the decomposition of legacy applications into a microservices oriented architecture lives with shades of gray, if for no other reason than to satisfy budgetary and time constraints.

Somewhere in the corporate food chain, there is a business executive who looks at the decomposition costs associated with these legacy apps within a microservices oriented architecture, and compares it to the value already provided by the legacy code. Once the development costs exceed the perceived benefit, the business exec might very well pull the plug and cancel the project.

This happens. A lot.

Thus, development managers are under enormous pressure to get the code out the door as soon as possible. “Good enough” becomes the desired goal of the transformation.

Now, this isn’t necessarily a bad thing. The ability to ship working code is always preferable to waiting for the dream to arrive. But “shades of gray” is hard to manage, and the problem lies in where to draw the line at “good enough.”

And so, the conflict begins. One side wants to ship things as they are and the other side wants to do more refinement.

For you, the challenge is to not let these different schools of thought create an endless fight over what are essentially belief-backed opinions. If you do, no code will get shipped at all. Conflict can be beneficial when it synthesizes the best idea from many competing ones, but it’s deadly when the discourse degrades into a never-ending argument.

I manage these sorts of situations with discussions that focus on the following three questions:

  • What is the justification for the design?
  • What are the risks?
  • What is the risk mitigation plan?

Allow me to elaborate.

What is the justification for the design?

When you evaluate the design of a microservices oriented architecture, the challenge is to move past opinion and onto rational analysis, particularly when the architecture comes largely from the decomposition of a monolithic application. Any design might be “good enough,” provided you can justify its benefit and value.

For example, one of the preferred styles in microservices oriented architecture design is to take an event-driven approach to inter-service communication. Concretely, this means you use a message broker to pass messages between microservices in an asynchronous manner. However, while asynchronous communication is more flexible and extensible in the long run, a message system is more complex to implement than a design that uses synchronous HTTP calls between the APIs that “front” the microservices. Thus, when time-to-market is a concern, it’s entirely justifiable to refactor a feature in a monolithic application as a standalone microservice that’s represented by way of an HTTP API.

async vs. sync

Synchronous microservices are usually less complex to implement than asynchronous ones.

Synchronous communication isn’t necessarily the optimal choice for the long run, but given all the other work required to get a standalone microservice extracted out of a monolithic application, synchronous is “good enough” for the first release. Hence, a valid justification.

However, this isn’t to say that a synchronous approach is without risk. In fact, there are many risks. When it comes to reviewing a microservices oriented architecture design, justification alone cannot be the only factor. Risk must be articulated too.

What are the risks?

All designs have inherent risks. In the synchronous design example described above, this approach to inter-service communication can incur risks such as type coupling between services and increased latency due to the nature of synchronous HTTP communication, among others.

The important thing is to make the risks known so they can be weighed against the justification for the intended design. If the risks are overwhelming, no amount of justification will suffice. On the other hand, some risks might be acceptable given the demands at hand. The trick is to make sure that risks are clearly communicated as part of the review process. A known risk under discussion is always preferable to a hidden risk that can strike down the road. Also, if you know the risks beforehand, you can plan how to better move forward in future versions as the microservices oriented architecture matures. This is where risk mitigation comes in.

What is the risk mitigation plan?

A sign of a wise application designer is the ability to identify design risks and then the vision to articulate a way, or ways, to mitigate those risks once identified. Risk identification without proper mitigation techniques is a sign of incomplete thinking.

If a microservices oriented architecture design has risks galore and marginal plans to address them, then the design team needs to give serious consideration to its viability. Also, if the mitigation plan is impractical — beyond the expertise and budget of the project — the design’s viability needs to be questioned too. It’s all a matter of balance.

A well-balanced microservices oriented architecture design is justifiable in terms of the conditions it’s intended to satisfy weighed against its inherent design risks and the mitigation plans intended to address them.

Put it all together

Conflict is an essential part of the creative process. Highly creative people tend to have tenacity about their ideas. So, when you put them in a room and ask them to come up with a single design for a microservices oriented architecture, tensions are bound to rise. That’s how it goes. But take heart! Conflict is good.

Fortunately, with a rational approach to reviewing a microservices oriented architecture design using the three questions described above, you can foster objective discussions that produce software to meet your needs in a timely manner. No design will ever be perfect, particularly those that are a decomposition of a monolithic application. But there is a significant benefit in the delivery of a microservices oriented architecture that’s good enough to be operationally effective in the short term and flexible enough to continuously improve in the long term.

July 18, 2019  3:21 PM

How to become a good Java programmer without a degree

JohnSelawsky Profile: JohnSelawsky

The road to mastering Java is a long and thorny one, but over my years as a coder, I’ve picked up a hint or two. How to become a good Java programmer isn’t a question with a simple answer, but this much is true: you don’t need any formal training. You don’t need to sit in a classroom and earn a diploma. And you can certainly become a good Java programmer without a degree that attests to that fact.

No, all you need is some focus, a good book or two, the willingness to take advantage of the wealth of online resources that are available and the dedication to put in enough time to learn the craft.

However, there are pitfalls that self-taught developers, those who learn on the fly without a degree or any formal training, tend to fall into. The journey to become a Java pro is a long one, but if you avoid common mistakes, the whole process becomes more productive. I’ve been teaching people Java for quite a few years now, and the same few mistakes continue to pop up over and over again.

The top mistakes beginner students make

Here are the most common mistakes I see junior developers make as they begin their journey on how to become a good Java programmer:

  1. You devour too much theory. Our fear of mistakes plays a cruel trick on us. We read and read and read. When you read, you don’t make mistakes, so you feel safe. Stop reading and try coding. I’d say the same thing about video lectures. Practice is key, and your future job title won’t be “book reader” or “YouTube watcher,” will it?
  2. You try to learn everything in one day. At the very beginning, you might get very enthusiastic. Wow! Fascinating! It works! Look, ma, I’m coding! And you forge on, trying to grasp everything at once. By the end of the day, even the thought of Java makes you sick. Don’t do that to yourself. This is a marathon, not a sprint, so take it step by step.
  3. You fret over mistakes. Remember when you were a child and learned math? Unfortunately, 2+3 didn’t equal 7 or any other random number you had in mind and you were confused and sad. Same story with Java code. Sometimes you get the wrong solution. Sometimes you get them wrong over and over again. So what? Remember what happened with your math education? You can count now and you will be able to code. Just give it time and don’t give up.
  4. You are afraid to experiment. Almost every one of us has been through this at school: there is only one right answer and only one way to get that answer. In Java programming and in life in general, this approach doesn’t work. You have to try various options and see what fits best.
  5. You burn yourself out. We all get tired from time to time. And if the progress is slow you might hear that nagging voice in the back of your head tell you to give up on learning Java. You might think you need to know math better or read up a bit more on algorithms or whatever else. Stop. Read my advice on how to avoid these mistakes.

How to become a good Java programmer without a diploma

Two of the nice things about scholarly courses are their structure and the ability to gauge your progress through regular tests and deliverables. But, that type of structure and those types of checkpoints aren’t available when you try to become a good Java programmer without a degree. If you choose to go the non-degree route, keep the following insights in mind:

  1. Schedule your learning and stay disciplined: Minimize distractions during the hours of study and devote your full attention to Java. No matter what your actual attention span is, give it all to Java.
  2. Learn by coding: Remember what I told you about “safe” book reading and video watching? Move out of your comfort zone and practice coding. Easier said than done? Just try it out and see for yourself. I list some useful tools for practicing Java below.
  3. Write code out by hand: Typing is all well and good, and I’m not against it, but there’s a muscle memory that activates when you write by hand and helps you remember things even better. Besides, during job interviews, some companies do check whether you can code on paper. The real pros can.
  4. Make your work visible: There are code repositories where you can showcase your work. It is also a good way to ask for feedback from more experienced developers. Peer-to-peer exchange of information is also a great way to learn some applicable practical things about Java. Other coders will help you out when they can, and in time, you will be able to help out beginners as well! And don’t be afraid of making mistakes. Remember, a master has failed more times than a beginner has tried.
  5. Keep on coding. Just. Keep. On. Coding. Start small, and slowly expand the scope of your projects. Solve a basic task. Then a series of tasks. Then make a simple game. Then a whole app. Just remember, when in doubt: code your way out.

Good Java programmer best practices

According to Malcolm Gladwell’s book Outliers, it takes 10,000 hours of practice to become an expert in a given field. But how does someone new to the Java language practice without enrolling in a college course or gaining on-the-job experience? Fortunately, there are many options for how to become a good Java programmer without pursuing a degree.

There are many open-access online courses that offer a lot of practical tasks. You can also join a Java community, which is full of practical knowledge. If you feel uneasy at the thought of meeting teachers in the classroom (even online ones), try to learn through games. Below are several examples of online educational projects that I recommend.

  • CodeGym is an interactive, practice-oriented Java course. This one’s my personal favorite because of its gamification. You get a virtual mentor who reviews your code, gives feedback and helps you through the learning/gaming process. The course contains 1,200 practical tasks, and you start coding in a real IDE. CodeGym has IntelliJ IDEA integration, so you can dive right into the reality of programming. In the event you’re unsure of a solution, there’s a whole Java community to help and support you.
  • CodinGame is a great training platform for programmers. Gamification is the main learning tool for this project, so it doesn’t feel like boring classroom stuff. Instead, you gradually become a Java dev hero with the superpower of saving the world with your code.
  • Codewars is another game-like educational project, but this one is challenge-based. Choose Java, unite with your team members and start learning the code by solving real tasks. You go from level to level, earn rankings, compare your code with other solutions, etc. The more tasks you complete, the better coder you become and the higher your ranking jumps.
  • GeeksforGeeks is a huge portal for computer science enthusiasts. There are courses on Java and other programming languages, question-based knowledge sharing, a community of like-minded geeks and lots more. You can go through quizzes to check your level, ask for help with your code, etc. There’s also a separate section on algorithms, which is quite handy if you have gaps in this area.

With Internet access and a good dose of self-motivation, anyone can become a good Java programmer without a degree. The Java learning path isn’t that dark and scary, but just avoid the monsters of fear and procrastination.

Regular coding practice will make you feel more and more confident. Try one of the projects I recommended. I’m sure one of them will fit your needs, maybe even all of them. Don’t forget to code by hand from time to time either. It helps you memorize Java better and will help you stand out in a job interview.

July 16, 2019  8:15 PM

Fix JAVA_HOME errors quickly | Invalid directory | Not set or defined | Points to JRE

cameronmcnz Cameron McKenzie Profile: cameronmcnz

There’s nothing worse than installing your favorite Java-based application — such as Minecraft, Maven, Jenkins or Apache Pig — only to run into a JAVA_HOME is set to an invalid directory or a JAVA_HOME is not defined correctly error as soon as you boot up the program.

Well, there’s no need to fret. Here’s how to fix the most common JAVA_HOME errors.

How to fix JAVA_HOME not found errors

It’s worth noting that there aren’t standardized JAVA_HOME error messages that people will encounter. There are many different ways that a given JAVA_HOME error might be logged.

For example, one of the most common JAVA_HOME configuration problems arises from the fact that the environment variable has never actually been set up. Such a scenario tends to trigger the following error messages:

  • Error: JAVA_HOME not found in your environment
  • Error: JAVA_HOME not set
  • Error: JAVA_HOME is not set currently
  • Error: JAVA_HOME is not set
  • Error: Java installation exists but JAVA_HOME has not been set
  • Error: JAVA_HOME cannot be determined from the registry

How do you fix the JAVA_HOME not found problem?

Well, you fix this in the Windows environment variable editor, where you can add a new system variable. If you know your way around the Windows operating system, you should be able to add the JAVA_HOME environment variable to your configuration and have it point to the installation root of your JDK within minutes. The Windows 10 setting looks like this:


Fix JAVA_HOME not found errors

As mentioned above, the JAVA_HOME variable must point to the installation root of a JDK, which means a JDK must actually be installed. If one isn’t, then you better hop to it and get that done.
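With a JDK in place, setting the variable itself is quick. The following sketch shows both platforms; the installation paths are examples, so substitute the directory your JDK actually installed into:

```shell
# Point JAVA_HOME at the root of the JDK installation (example paths).

# On Windows, from a command prompt (new shells pick up the change):
#   setx JAVA_HOME "C:\Program Files\Java\jdk-11"

# On Linux or macOS, append to ~/.bashrc or ~/.zshrc, then open a new shell:
export JAVA_HOME=/usr/lib/jvm/jdk-11
export PATH="$JAVA_HOME/bin:$PATH"
```

Putting $JAVA_HOME/bin at the front of the PATH also ensures the java and javac commands resolve to the same JDK.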

The JAVA_HOME is set to an invalid directory fix

The next most common JAVA_HOME error message is JAVA_HOME is set to an invalid directory. The error message is delightfully helpful, because it tells you in no uncertain terms the environment variable does in fact exist. And, it also tells you it’s not pointing to the right place, which is helpful as well. All you need to do to fix this error is edit the JAVA_HOME variable and point it to the correct directory.

The JAVA_HOME environment variable must point to the root of the installation folder of a JDK. It cannot point to a sub-directory of the JDK, and it cannot point to a parent directory that contains the JDK. It must point directly at the JDK installation directory itself. If you encounter the JAVA_HOME invalid directory error, make sure the name of the installation folder and the value of the variable match.

An easy way to see the actual value associated with the JAVA_HOME variable is to simply echo its value on the command line. In Windows, write:

> echo %JAVA_HOME%

On an Ubuntu, Mac or Linux machine, the command uses a dollar sign instead of percentages:

$ echo $JAVA_HOME

How to find JAVA_HOME in Mac or Ubuntu Linux computers.

Steer clear of the JDK \bin directory

One very common developer mistake that leads to the JAVA_HOME is set to an invalid directory error is pointing JAVA_HOME to the \bin sub-directory of the JDK installation. That’s the directory you use to configure the Windows PATH, but it is wrong, wrong, wrong when you set JAVA_HOME. If you point JAVA_HOME at the bin directory, you’ll need to fix that.

This misconfiguration also manifests itself with the following error messages:

  • JAVA_HOME is set to an invalid directory
  • Java installation exists but JAVA_HOME has been set incorrectly
  • JAVA_HOME is not defined correctly
  • JAVA_HOME does not point to the JDK

Other things that might trigger this error include spelling mistakes or case sensitivity errors. If the JAVA_HOME variable is set as java_home, JAVAHOME or Java_Home, a Unix, Linux or Ubuntu script will have a hard time finding it. The same thing goes for the value attached to the JAVA_HOME variable.

The JAVA_HOME does not point to the JDK error

One of the most frustrating JAVA_HOME errors is JAVA_HOME does not point to the JDK.

Here’s a little bit of background on this one.

When you download a JDK distribution, some vendors include a Java Runtime Environment (JRE) as well. And when the JAVA_HOME environment variable gets set, some people point it at the JRE installation folder and not the JDK installation folder. When this happens, we see errors such as:

  • JAVA_HOME does not point to a JDK
  • JAVA_HOME points to a JRE not a JDK
  • JAVA_HOME must point to a JDK not a JRE
  • JAVA_HOME points to a JRE

To fix this issue, see if you have both a JRE and a JDK installed locally. If you do, ensure that the JAVA_HOME variable is not pointing at the JRE.
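A reliable way to tell the two apart is that a JDK root always contains bin/javac, while a JRE does not. The following home-grown helper (my own sketch, not a standard tool) flags both the \bin mistake and a JRE-only directory:

```shell
# check_java_home DIR: exit 0 if DIR looks like a JDK root, 1 otherwise.
check_java_home() {
  dir=${1%/}                                # tolerate a trailing slash
  case "$dir" in
    */bin)
      echo "invalid: $dir is the bin sub-directory, not the JDK root"
      return 1 ;;
  esac
  if [ -x "$dir/bin/javac" ]; then
    echo "ok: $dir looks like a JDK root"
  else
    echo "invalid: no bin/javac under $dir (a JRE or the wrong directory?)"
    return 1
  fi
}

# Example: check_java_home "$JAVA_HOME"
```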

JAVA_HOME and PATH confusion

After you’ve downloaded and installed the JDK, another problem can still plague developers. If you already have programs that installed their own version of the JDK, those programs could have added a reference to that specific JDK in the Linux or Windows PATH setting. Some programs resolve Java through the PATH first and consult JAVA_HOME second. If another program has installed a JRE and put that JRE’s \bin directory on the PATH, your JAVA_HOME efforts may all be for naught.

However, you can address this issue. First, check the Ubuntu or Windows PATH variable and look to see if any other JRE or JDK directory has been added to it. You might be surprised to find out that IBM or Oracle has at some prior time performed an install without your knowledge. If that’s the case, remove the reference to it from the PATH, add your own JDK’s \bin directory in there, and restart any open command windows. Hopefully that will solve the issue.
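A quick sketch of that inspection on Linux or macOS (on Windows, the equivalents are "where java" and "echo %JAVA_HOME%\bin\java"):

```shell
# See which java binary wins on the PATH, and compare it with the one
# your JAVA_HOME should supply. If the two differ, the PATH entry wins.
command -v java || echo "no java found on the PATH"
echo "expected: $JAVA_HOME/bin/java"
```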

Of course, there is never any end to the configurations or settings that can trigger JAVA_HOME errors. If you’ve found any creative solutions not mentioned here, please add your expert insights to the comments.

July 9, 2019  7:16 PM

The future of front-end software development in a post GUI world

BobReselman BobReselman Profile: BobReselman

By the year 2025, Google predicts that the number of IoT and smart devices in operation will exceed the number of non-IoT devices. Statista predicts a similar growth pattern, in which the proliferation of IoT devices will reach three times today’s usage.

Any way you slice it, the transformation to an IoT-dominant world is going to cause a seismic shift in the way software is used, the way it’s made and the overall future of front-end software development. Soon enough, most computing activities will no longer revolve around human-machine interaction. Rather, they will be about machine-to-machine interaction. And, of the human-machine interactions that remain, most will not involve a person who swipes a screen, clicks a mouse or types on a keyboard. Human-machine interaction will be conducted in other ways, some too scary to consider.

The days of GUI-centric development are closing. Yet, few people in mainstream software development seem to notice. It’s as if they’re the brick and mortar bookstores at the beginning of the Amazon age. As long as people kept walking through the door to make purchases, life was great. But, once the customers stopped coming, few were prepared for the consequences.

The same thing will happen to the software industry if we’re not careful.

And, unlike the demise of Big Box retailers — which took decades — the decline in the use of apps based on traditional GUI interactions might very well occur within a decade or less. Other means of interaction will prevail.

The shift to voice

In the not too distant future, the primary “front end” for human-machine interaction will be voice driven. Don’t believe me? Consider this:

My wife, who I consider to be an average user, no longer uses her phone’s keyboard to “write” SMS messages. She simply talks to the device. She uses WhatsApp to talk to her friends. She “asks” Alexa to play music. She still does most of her online shopping on Amazon, but I suspect once she learns how to use Alexa to buy stuff, her time spent on e-commerce websites will diminish.

She still has manual interaction with our television, which is really a computer with a big screen. But she uses the remote’s up/down/left/right buttons in conjunction with voice commands to find and view content. There’s no keyboard involved… ever.

Her phone connects to her car via Bluetooth. She makes phone calls via voice and controls call interactions from the steering wheel. If she needs directions to a location, she talks to the Map app in the phone which then responds with voice prompts.

On the flip side, each day I have a multitude of interactions with computers. And yet, those that require the use of a keyboard and mouse are confined mostly to my professional work coding and writing. The rest involves voice and touch.

In terms of my writing work, I find that I spend an increasing amount of time using my computer as a digital stenographer. My use of the voice typing feature of Google Docs and an online transcription service is growing. I too am becoming GUI-less.

GUI-less commerce

There’s a good case to be made that, for the near future, there will still be a good deal of commercial applications that require human-GUI interaction. Yet, as the number of IoT devices expands, more activity will instead be machine-to-machine and won’t require a GUI whatsoever. All those driverless vehicles, warehouse robots, financial management applications and calls to Alexa or Siri will just push bits back and forth directly between IP addresses and ports somewhere in the cloud.

But, the good news is that the foreseeable future of creative coding is still very much in the domain of human activity. However, this too is changing.

More machines make more software than ever before, and most machine-generated code is made with existing models. Thus, the scope of creative programming by machines is limited. Nonetheless, it’s only a matter of time until AI matures to the point where it will be able to make software from scratch and the software that humans make will be about something else.

Sadly, few people in mainstream, commercial software development think about what that something else will be. Today, front end still means iOS, Android or whatever development framework is popular to make those nice GUI front ends. Few people can imagine any other type for the future of front-end software development. Even the application framework manufacturers are still focused on the GUI world.

When was the last time you heard a tech evangelist caution their constituency about the dangers ahead? That the world soon won’t need any more buttons to click or web pages to scroll?

That’s like asking horseshoe manufacturers to warn blacksmiths about the impact of that newfangled thing called an automobile. It’s just not in their best interest. But, it is in our best interest because the future of front-end software development in the post GUI world will provide amazing opportunities for those with foresight.

The amazing opportunity at hand

There’s a good deal of wisdom in the saying, “when one door shuts, another opens.” Even the most disruptive change provides immense opportunity if you pay attention. Think of it this way: Amazon is killing brick-and-mortar retailers, but it’s been a boon for FedEx and UPS.

There is always an opportunity at hand for those with the creativity and vision to see it. Fortunately, neither creativity nor vision is in short supply among software developers. We’ve made something out of nothing since the first mainframe came along nearly seventy years ago. All we need to do now is be on the lookout for the next opportunity.

The question is, what will that next opportunity be? What will the new front end in human-machine interaction look like? If I were a gambling person, I’d put my money on the stuff we might think is too scary to consider today: implants.

Let me explain: I have a dental implant where a molar used to be. Right now that implant is nothing more than a benign prosthesis in my mouth.

But think about this: given the fact that computers continue to miniaturize, how far are we from a time when that implant will be converted into a voice-sensitive computing device that interacts with another microscopic audio device injected beneath my ear? Sound farfetched? Not really.

Twenty years ago nobody could watch a movie on their cellphone. Today it’s the norm. As Moore’s Law reveals, technological progress accelerates at an exponential rate.

Regardless of whether the future of front-end software development is implants or something else, one thing is for certain: it won’t be anything like what we have today. Those who understand this and seize the opportunity will prosper. The others? Well, I’ll leave it up to you to imagine their outcome.
