Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we're supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He's a technical guy, and he's prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems' 17.3 release of Zing, which includes an LLVM-based just-in-time compiler code-named Falcon. Apparently, it's incredibly fast, like all of Azul Systems' JVMs typically are.

But before we got into discussing Azul Systems' Falcon just-in-time compiler, I thought I'd do a bit of bear-baiting with Gil and tell him that I was sorry that in this new age of serverless computing, cloud and containers, a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody's going to need to download one and install it locally. Well, anyways, Gil wasn't having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it's Amazon or Azure or the Google Cloud, we're seeing all of those happening.

But in many of those cases, that's just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they're going to need in three months' time, because they can turn it on anytime they want. They can empower teams to turn on a hundred machines on the weekend because they think it's needed, and if they were wrong, they'll turn them off. That's no longer some dramatic thing to do. Doing it in a company-internal data center is a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we're here to talk about performance. So what do you see these days in terms of bare iron or bare metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it's internal or external. So I'd say that a lot of what we see today is virtualized, but we do see a bunch of bare metal in latency-sensitive environments or in dedicated server environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging, because they don't want to take the hit for what the virtualized infrastructure might do to them if they don't.

But having said that, we're seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. For example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you're happy; you don't look back. On Amazon, that type of cookie-cutter deployment works very well. The typical pattern for Cassandra instances on Amazon, with or without us, is that people move to the latest, greatest things Amazon offers. I think the i3 class of Amazon instances is currently the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we're talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late '90s to the early 2000s, but it's been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we'd like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language is based on it. And you'll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It's a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the ones that power most bare metal and Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you could do some really nice things with them. But that's only useful if you have an optimizer that's able to make use of those instructions when it knows they're there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you're talking about factors that could be 50%, 100%, or sometimes two or three times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions; it's that LLVM does that for us without us having to work hard.

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do the stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM could run the exact same code and when you put it on a new hardware, it’s not just the better clock speed and not just slightly faster, it can actually use the instructions to literally run the code better, and you don’t have to change anything to do it.
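To make that concrete, here is a hypothetical Java fragment of my own (not Azul's code) showing the kind of plain loop a vectorizing, LLVM-based JIT can compile down to SIMD instructions on capable hardware, with no change to the source:

public class VectorLoops {
    // A simple reduction loop. A vectorizing JIT can emit AVX-style
    // SIMD instructions for this on newer Intel hardware, without any
    // change to the Java source code.
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i];
        }
        return sum;
    }
}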

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we're basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in length. That pretty much means everybody except low-latency traders simply doesn't have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We're able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it's the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there's anything new in terms of the technologies you've put into your JVM to improve JVM startup. And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that's coming with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It's something we've had for a couple of years now. But, again, with the Falcon release, we're able to do a much better job. Basically, we have a much better vertical rise right when the JVM starts, so it gets up to speed faster.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast. Whether it's when you start up a new server in the cluster and you don't want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day and you don't want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme examples in trading, where at market open you don't want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile and learn and only optimize after it runs. And we do it with a technique that's very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a run learning from the previous run's behavior rather than having to learn from scratch again for the first thousand operations. That allows us to run fast code from the first transaction or the tenth transaction, rather than from the ten-thousandth transaction. That's a feature in Zing we're very proud of.

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It's not just the Jigsaw parts; it's certainly the idea that you could perform some sort of analysis on the code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

August 6, 2019  8:18 PM

3 questions to ask in a microservices oriented architecture review

BobReselman Profile: BobReselman

Microservices oriented architecture is all the rage these days. Why not? It’s hard to ignore the faster deployment rates and reduced costs that are the essential promises of microservices oriented architectures.

Yet for most companies that take the plunge, development activity is more about the transformation of an existing monolithic application into a microservices oriented architecture, which can be a source of frustration and conflict on many levels.

While greenfield microservices oriented architecture implementations can adhere to strict interpretations of current microservices design principles, the decomposition of legacy applications into a microservices oriented architecture lives with shades of gray, if for no other reason than to satisfy budgetary and time constraints.

Somewhere in the corporate food chain, there is a business executive who looks at the decomposition costs associated with these legacy apps within a microservices oriented architecture, and compares it to the value already provided by the legacy code. Once the development costs exceed the perceived benefit, the business exec might very well pull the plug and cancel the project.

This happens. A lot.

Thus, development managers are under enormous pressure to get the code out the door as soon as possible. “Good enough” becomes the desired goal of the transformation.

Now, this isn't necessarily a bad thing. The ability to ship working code is always preferable to waiting for the dream to arrive. But "shades of gray" are hard to manage, and the problem lies in where to draw the line at "good enough."

And so, the conflict begins. One side wants to ship things as they are and the other side wants to do more refinement.

For you, the challenge is to not let these different schools of thought create an endless fight over what are essentially belief-backed opinions. If you do, you'll create a situation where no code gets shipped at all. Conflict can be beneficial when it synthesizes the best idea from many competing ones, but it can be deadly when the discourse degrades into a never-ending fight.

To avoid such conflict, I manage these sorts of situations with discussions that focus on the following three questions:

  • What is the justification for the design?
  • What are the risks?
  • What is the risk mitigation plan?

Allow me to elaborate.

What is the justification for the design?

When you evaluate the design of a microservices oriented architecture, especially one created largely through the decomposition of a monolithic application, the challenge is to move past opinion and on to rational analysis. Any design might be "good enough," provided you can justify its benefit and value.

For example, one of the preferred styles in microservices oriented architecture design is to take an event-driven approach to inter-service communication. Concretely, this means you use a message broker to pass messages between microservices in an asynchronous manner. However, while asynchronous communication is more flexible and extensible in the long run, a messaging system is more complex to implement than a design that uses synchronous HTTP calls between APIs that "front" the microservices. Thus, when time-to-market is a concern, it's entirely justifiable to refactor a feature in a monolithic application as a standalone microservice that's represented by way of an HTTP API.

[Figure: async vs. sync. Synchronous microservices are usually less complex to implement than asynchronous ones.]

Synchronous communication isn't necessarily the optimal choice for the long run, but given all the other work required to extract a standalone microservice out of a monolithic application, synchronous is "good enough" for the first release. Hence, a valid justification.
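As a minimal sketch of what that synchronous, HTTP-fronted style looks like in Java, using the JDK's built-in HTTP client (the service hostname and endpoint here are hypothetical):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OrderServiceClient {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // A blocking, synchronous call to a microservice fronted by an HTTP
    // API. Simple to build, but it couples the caller to the callee's
    // availability and latency, which is one of the risks discussed below.
    static String fetchOrder(String orderId) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://orders.internal/api/orders/" + orderId))
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}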

However, this isn't to say that a synchronous approach is without risk. In fact, there are many risks. When it comes time to review a microservices oriented architecture design, justification alone cannot be the only factor. Risk must be articulated too.

What are the risks?

All designs have inherent risks. In the synchronous design example described above, this approach to inter-service communication incurs risks such as type coupling between services and increased latency due to the nature of synchronous HTTP communication, among others.

The important thing is to make the risks known so they can be weighed against the justification for the intended design. If the risks are overwhelming, no amount of justification will suffice. On the other hand, some risks might be acceptable given the demands at hand. The trick is to make sure that risks are clearly communicated as part of the review process. A known risk under discussion is always preferable to a hidden risk that can strike down the road. Also, if you know the risks up front, you can plan how to move forward in future versions as the microservices oriented architecture matures. This is where risk mitigation comes in.

What is the risk mitigation plan?

A sign of a wise application designer is the ability to identify design risks and then have the vision to articulate ways to mitigate those risks once identified. Risk identification without proper mitigation techniques is a sign of incomplete thinking.

If a microservices oriented architecture design has risks galore and marginal plans to address them, then the design team needs to give serious consideration to its viability. Also, if the mitigation plan is impractical — beyond the expertise and budget of the project — the design’s viability needs to be questioned too. It’s all a matter of balance.

A well-balanced microservices oriented architecture design is justifiable in terms of the conditions it’s intended to satisfy weighed against its inherent design risks and the mitigation plans intended to address them.

Put it all together

Conflict is an essential part of the creative process. Highly creative people tend to have tenacity about their ideas. So, when you put them in a room and ask them to come up with a single design for a microservices oriented architecture, tensions are bound to rise. That’s how it goes. But take heart! Conflict is good.

Fortunately, with a rational approach to reviewing a microservices oriented architecture design based on the three questions described above, you can foster objective discussions that produce software to meet your needs in a timely manner. No design will ever be perfect, particularly those that are a decomposition of a monolithic application. But there is a significant benefit in the delivery of a microservices oriented architecture that's good enough to be operationally effective in the short term and flexible enough to continuously improve in the long term.


July 18, 2019  3:21 PM

How to become a good Java programmer without a degree

JohnSelawsky Profile: JohnSelawsky

The road to mastering Java is a long and thorny one, but over my years as a coder, I've picked up a hint or two. How to become a good Java programmer isn't a question with a simple answer, though. You don't need any formal training. You don't need to sit in a classroom and earn a diploma. And you can certainly become a good Java programmer without a degree that attests to that fact.

No, all you need is some focus, a good book or two, the willingness to take advantage of the wealth of online resources that are available and the dedication to put in enough time to learn the craft.

However, there are pitfalls that those who are self-taught, learning on the fly without a degree or any formal training, tend to fall into. The journey to becoming a Java pro is a long one, but if you avoid common mistakes, the whole process becomes more productive. I've been teaching people Java for quite a few years now, and the same few mistakes continue to pop up over and over again.

The top mistakes beginner students make

Here are the most common mistakes I see junior developers make as they begin their journey on how to become a good Java programmer:

  1. You devour too much theory. Our fear of mistakes plays a cruel trick on us. We read and read and read. When you read, you don't make mistakes, and as a result, you feel safe. Stop reading and try coding. I'd say the same thing about video lectures. Practice is key, and your future job title won't be "book reader" or "YouTube watcher," will it?
  2. You try to learn everything in one day. At the very beginning, you might get very enthusiastic. Wow! Fascinating! It works! Look, ma, I’m coding! And you forge on, trying to grasp everything at once. By the end of the day, even the thought of Java makes you sick. Don’t do that to yourself. This is a marathon, not a sprint, so take it step by step.
  3. You fret over mistakes. Remember when you were a child and learned math? Unfortunately, 2+3 didn't equal 7 or whatever other random number you had in mind, and you were confused and sad. It's the same story with Java code. Sometimes you get the wrong solution. Sometimes you get it wrong over and over again. So what? Remember what happened with your math education? You can count now, and you will be able to code. Just give it time and don't give up.
  4. You are afraid to experiment. Almost every one of us has been through this at school: there is only one right answer and only one way to get that answer. In Java programming and in life in general, this approach doesn’t work. You have to try various options and see what fits best.
  5. You burn yourself out. We all get tired from time to time. And if progress is slow, you might hear that nagging voice in the back of your head telling you to give up on learning Java. You might think you need to know math better or read up a bit more on algorithms or whatever else. Stop. Read my advice on how to avoid these mistakes.

How to become a good Java programmer without a diploma

Two of the nice things about scholarly courses are their structure and the ability to gauge your progress through regular tests and deliverables. But, that type of structure and those types of checkpoints aren’t available when you try to become a good Java programmer without a degree. If you choose to go the non-degree route, keep the following insights in mind:

  1. Schedule your learning and stay disciplined: Minimize distractions during the hours of study and devote your full attention to Java. No matter what your actual attention span is, give it all to Java.
  2. Learn by coding: Remember what I told you about "safe" book reading and video watching? Move out of your comfort zone and practice coding. Easier said than done? Just try it out and see for yourself. I list some useful tools for practicing Java below.
  3. Write code out by hand: Typing is all good, and I'm not against it, but there's a mechanical memory that activates when you write by hand and helps you remember things even better. Besides, during job interviews, some companies check whether you can code on paper. The real pros can.
  4. Make your work visible: There are code repositories where you can showcase your work. It is also a good way to ask for feedback from more experienced developers. Peer-to-peer exchange of information is also a great way to learn some applicable practical things about Java. Other coders will help you out when they can, and in time, you will be able to help out beginners as well! And don’t be afraid of making mistakes. Remember, a master has failed more times than a beginner has tried.
  5. Keep on coding. Just. Keep. On. Coding. Start small, and slowly expand the scope of your projects. Solve a basic task. Then a series of tasks. Then make a simple game. Then a whole app. Just remember, when in doubt: code your way out.

Good Java programmer best practices

According to Malcolm Gladwell's book Outliers, it takes 10,000 hours of practice to become an expert in a given field. But how does someone new to the Java language practice without enrolling in a college course or gaining on-the-job experience? Fortunately, there are many options for becoming a good Java programmer without pursuing a degree.

There are many open-access online courses that offer a lot of practical tasks. You can also join a Java community, which is full of practical knowledge. If you feel uneasy at the thought of meeting teachers in the classroom (even online ones), try to learn through games. Below are several examples of online educational projects that I recommend.

  • CodeGym is an interactive, practice-oriented Java course. This one's my personal favorite because of its gamification. You get a virtual mentor who reviews your code, gives feedback and helps you through the learning/gaming process. The course contains 1,200 practical tasks, and you start coding in a real IDE. CodeGym has IntelliJ IDEA integration, so you can dive right into the reality of programming. In the event you're unsure of a solution, there's a whole Java community to help and support you.
  • CodinGame is a great training platform for programmers. Gamification is the main learning tool for this project, so it doesn’t feel like boring classroom stuff. Instead, you gradually become a Java dev hero with the superpower of saving the world with your code.
  • Codewars is another game-like educational project, but this one is challenge-based. Choose Java, unite with your team members and start learning the code by solving real tasks. You go from level to level, earn rankings, compare your code with other solutions, etc. The more tasks you complete, the better coder you become and the higher your ranking jumps.
  • GeeksforGeeks is a huge portal for people adept at computer science. There are courses on Java and other programming languages, question-based knowledge sharing, a community of like-minded geeks and lots more. You can take quizzes to check your level, ask for help with your code, etc. There's also a separate section on algorithms, which is quite handy if you have gaps in this area.

With Internet access and a good dose of self-motivation, anyone can become a good Java programmer without a degree. The Java learning path isn't that dark and scary; just avoid the monsters of fear and procrastination.

Regular coding practice will make you feel more and more confident. Try one of the projects I recommended; I'm sure one of them will fit your needs. Maybe even all of them? And don't forget to code by hand from time to time. It helps you memorize Java better and helps you stand out at a job interview.


July 16, 2019  8:15 PM

Fix JAVA_HOME errors quickly | Invalid directory | Not set or defined | Points to JRE

Cameron McKenzie Profile: cameronmcnz

There’s nothing worse than installing your favorite Java-based application — such as Minecraft, Maven, Jenkins or Apache Pig — only to run into a JAVA_HOME is set to an invalid directory or a JAVA_HOME is not defined correctly error as soon as you boot up the program.

Well, there’s no need to fret. Here’s how to fix the most common JAVA_HOME errors.

How to fix JAVA_HOME not found errors

It’s worth noting that there aren’t standardized JAVA_HOME error messages that people will encounter. There are many different ways that a given JAVA_HOME error might be logged.

For example, one of the most common JAVA_HOME configuration problems arises from the fact that the environment variable has never actually been set up. Such a scenario tends to trigger the following error messages:

  • Error: JAVA_HOME not found in your environment
  • Error: JAVA_HOME not set
  • Error: JAVA_HOME is not set currently
  • Error: JAVA_HOME is not set
  • Error: Java installation exists but JAVA_HOME has not been set
  • Error: JAVA_HOME cannot be determined from the registry

How do you fix the JAVA_HOME not found problem?

Well, you fix this in the Windows environment variable editor, where you can add a new system variable. If you know your way around the Windows operating system, you should be able to add the JAVA_HOME environment variable to your configuration and have it point to the installation root of your JDK within minutes. The Windows 10 setting looks like this:

[Figure: Fix JAVA_HOME not found errors by adding the JAVA_HOME system variable in Windows 10]

As mentioned above, the JAVA_HOME variable must point to the installation root of a JDK, which means a JDK must actually be installed. If one isn’t, then you better hop to it and get that done.

The JAVA_HOME is set to an invalid directory fix

The next most common JAVA_HOME error message is JAVA_HOME is set to an invalid directory. The error message is delightfully helpful, because it tells you in no uncertain terms that the environment variable does in fact exist, and it also tells you that it's not pointing to the right place. All you need to do to fix this error is edit the JAVA_HOME variable and point it to the correct directory.

The JAVA_HOME environment variable must point to the root of the installation folder of a JDK. It cannot point to a sub-directory of the JDK, and it cannot point to a parent directory that contains the JDK. It must point directly at the JDK installation directory itself. If you encounter the JAVA_HOME invalid directory error, make sure the name of the installation folder and the value of the variable match.
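If you want to verify this programmatically, here is a small, hypothetical Java sketch (my own illustration, not part of any standard tooling) that checks whether JAVA_HOME points at a plausible JDK root by looking for the javac compiler:

import java.io.File;

public class JavaHomeCheck {
    public static void main(String[] args) {
        String home = System.getenv("JAVA_HOME");
        if (home == null) {
            System.out.println("JAVA_HOME is not set");
            return;
        }
        // A JDK root contains bin/javac (javac.exe on Windows). A JRE,
        // a \bin sub-directory or a parent directory will fail this test.
        boolean isJdkRoot = new File(home, "bin/javac").exists()
                || new File(home, "bin/javac.exe").exists();
        System.out.println(isJdkRoot
                ? "JAVA_HOME looks like a JDK root: " + home
                : "JAVA_HOME does not point to a JDK root: " + home);
    }
}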

An easy way to see the actual value associated with the JAVA_HOME variable is to simply echo its value on the command line. In Windows, type:

C:\> echo %JAVA_HOME%
C:\_JDK13.0

On an Ubuntu, Mac or Linux machine, the command uses a dollar sign instead of percent signs:

~$ echo $JAVA_HOME
/usr/lib/jvm/java-13-oracle
[Figure: How to find JAVA_HOME on Mac or Ubuntu Linux computers]

Steer clear of the JDK \bin directory

One very common developer mistake that leads to the JAVA_HOME is set to an invalid directory error is pointing JAVA_HOME to the \bin sub-directory of the JDK installation. That’s the directory you use to configure the Windows PATH, but it is wrong, wrong, wrong when you set JAVA_HOME. If you point JAVA_HOME at the bin directory, you’ll need to fix that.

This misconfiguration also manifests itself with the following error messages:

  • JAVA_HOME is set to an invalid directory
  • Java installation exists but JAVA_HOME has been set incorrectly
  • JAVA_HOME is not defined correctly
  • JAVA_HOME does not point to the JDK

Other things that might trigger this error include spelling mistakes or case sensitivity errors. If the JAVA_HOME variable is set as java_home, JAVAHOME or Java_Home, a Unix, Linux or Ubuntu script will have a hard time finding it. The same thing goes for the value attached to the JAVA_HOME variable.

The JAVA_HOME does not point to the JDK error

One of the most frustrating JAVA_HOME errors is JAVA_HOME does not point to the JDK.

Here’s a little bit of background on this one.

When you download a JDK distribution, some vendors include a Java Runtime Environment (JRE) as well. And when the JAVA_HOME environment variable gets set, some people point it at the JRE installation folder and not the JDK installation folder. When this happens, we see errors such as:

  • JAVA_HOME does not point to a JDK
  • JAVA_HOME points to a JRE not a JDK
  • JAVA_HOME must point to a JDK not a JRE
  • JAVA_HOME points to a JRE

To fix this issue, see if you have both a JRE and JDK installed locally. If you do, ensure that the JAVA_HOME variable is not pointing at the JRE.

JAVA_HOME and PATH confusion

After you've downloaded and installed the JDK, another problem can sometimes plague developers. If you already have programs that installed their own version of the JDK, those programs could have added a reference to that specific JDK in the Linux or Windows PATH setting. Some programs will resolve Java through the PATH first and JAVA_HOME second. If another program has installed a JRE and put that JRE's \bin directory on the PATH, your JAVA_HOME efforts may all be for naught.

However, you can address this issue. First, check the Ubuntu or Windows PATH variable and look to see if any other JRE or JDK directory has been added to it. You might be surprised to find out that IBM or Oracle has at some prior time performed an install without your knowledge. If that’s the case, remove the reference to it from the PATH, add your own JDK’s \bin directory in there, and restart any open command windows. Hopefully that will solve the issue.
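To see which PATH entries might shadow your JDK, a quick, hypothetical Java sketch like the following (again, my own illustration) lists every directory on the PATH that contains a java launcher:

import java.io.File;

public class PathScan {
    public static void main(String[] args) {
        // Print every PATH entry that contains a java launcher, which
        // makes a stray JRE installed by another program easy to spot.
        for (String dir : System.getenv("PATH").split(File.pathSeparator)) {
            if (new File(dir, "java").exists() || new File(dir, "java.exe").exists()) {
                System.out.println("java found on PATH at: " + dir);
            }
        }
    }
}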

Of course, there is never any end to the configurations or settings that can trigger JAVA_HOME errors. If you’ve found any creative solutions not mentioned here, please add your expert insights to the comments.


July 9, 2019  7:16 PM

The future of front-end software development in a post GUI world

BobReselman Profile: BobReselman

By the year 2025, Google predicts that the number of IoT and smart devices in operation will exceed the number of non-IoT devices. Statista predicts a similar growth pattern, in which the proliferation of IoT devices will be three times greater than today's usage.

Any way you slice it, the transformation to an IoT-dominant world is going to cause a seismic shift in the way software is used, the way it's made and the overall future of front-end software development. Soon enough, most computing activities will no longer revolve around human-machine interaction. Rather, they will be about machine-machine interaction. And of the human-machine interactions that remain, most will not involve a person who swipes a screen, clicks a mouse or types on a keyboard. Human-machine interaction will be conducted in other ways, some too scary to consider.

The days of GUI-centric development are closing. Yet, few people in mainstream software development seem to notice. It’s as if they’re the brick and mortar bookstores at the beginning of the Amazon age. As long as people kept walking through the door to make purchases, life was great. But, once the customers stopped coming, few were prepared for the consequences.

The same thing will happen to the software industry if we’re not careful.

And, unlike the demise of Big Box retailers — which took decades — the decline in the use of apps based on traditional GUI interactions might very well occur within a decade or less. Other means of interaction will prevail.

The shift to voice

In the not too distant future, the primary “front end” for human-machine interaction will be voice driven. Don’t believe me? Consider this:

My wife, who I consider to be an average user, no longer uses her phone’s keyboard to “write” SMS messages. She simply talks to the device. She uses WhatsApp to talk to her friends. She “asks” Alexa to play music. She still does most of her online shopping on Amazon, but I suspect once she learns how to use Alexa to buy stuff, her time spent on e-commerce websites will diminish.

She still has manual interaction with our television, which is really a computer with a big screen. But she uses the remote’s up/down/left/right buttons in conjunction with voice commands to find and view content. There’s no keyboard involved… ever.

Her phone connects to her car via Bluetooth. She makes phone calls via voice and controls call interactions from the steering wheel. If she needs directions to a location, she talks to the Map app in the phone which then responds with voice prompts.

On the flip side, each day I have a multitude of interactions with computers. And yet, those that require the use of a keyboard and mouse are confined mostly to my professional work coding and writing. The rest involves voice and touch.

In terms of my writing work, I find that I spend an increasing amount of time using my computer as a digital stenographer. My use of the voice typing feature of Google Docs and an online transcription service is growing. I too am becoming GUI-less.

GUI-less commerce

There's a good case to be made that, for the near future, there will still be a good deal of commercial applications that require human-GUI interaction. Yet, as the number of IoT devices expands, more activity will instead be machine-machine and won't require a GUI whatsoever. All those driverless vehicles, warehouse robots, financial management applications and calls to Alexa or Siri will just push bits back and forth directly between IP addresses and ports somewhere in the cloud.

But, the good news is that the foreseeable future of creative coding is still very much in the domain of human activity. However, this too is changing.

More machines make more software than ever before, but most machine-generated code is made with existing models. Thus, the scope of creative programming by machines is limited. Nonetheless, it's only a matter of time until AI matures to the point where it can make software from scratch, and the software that humans make will be about something else.

Sadly, few people in mainstream, commercial software development think about what that something else will be. Today, front end still means iOS, Android or whatever development framework is popular to make those nice GUI front ends. Few people can imagine any other type for the future of front-end software development. Even the application framework manufacturers are still focused on the GUI world.

When was the last time you heard a tech evangelist caution their constituency about the dangers ahead? That the world soon won’t need any more buttons to click or web pages to scroll?

That’s like asking horseshoe manufacturers to warn blacksmiths about the impact of that newfangled thing called an automobile. It’s just not in their best interest. But, it is in our best interest because the future of front-end software development in the post GUI world will provide amazing opportunities for those with foresight.

The amazing opportunity at hand

There's a good deal of wisdom in the saying, "once one door shuts, another door opens." Even the most disruptive change provides immense opportunity if you pay attention. Think of it this way: Amazon is killing brick and mortar retailers, but it's been a boon for FedEx and UPS.

There is always an opportunity at hand for those with the creativity and vision to see it. Fortunately, creativity and vision are in no short supply among software developers. We've made something out of nothing since the first mainframe came along nearly seventy years ago. All we need to do now is be on the lookout for the next opportunity.

The question is, what will that next opportunity be? What will the new front end in human-machine interaction look like? If I were a gambling person, I'd put my money on the stuff we might think is too scary to consider today: implants.

Let me explain: I have a dental implant where a molar used to be. Right now that implant is nothing more than benign prosthesis in my mouth.

But think about this: given the fact that computers continue to miniaturize, how far are we from a time when that implant will be converted into a voice sensitive computing device that interacts with another microscopic audio device injected beneath my ear? Sound farfetched? Not really.

Twenty years ago nobody could watch a movie on their cellphone. Today it’s the norm. As Moore’s Law reveals, technological progress accelerates at an exponential rate.

Regardless of whether the future of front-end software development is implants or something else, one thing is for certain: it won’t be anything like what we have today. Those who understand this and seize the opportunity will prosper. The others? Well, I’ll leave it up to you to imagine their outcome.


July 1, 2019  1:57 PM

Don’t let RabbitMQ vulnerabilities expose your CI pipelines

JudithMyerson Profile: JudithMyerson

RabbitMQ is an open source message broker that exchanges asynchronous messages between publishers and consumers. The messages can be a human-readable JSON, a simple string or a list of values that can be converted into a JSON string.

In March of 2019, the Jenkins Security Advisory reported that multiple vulnerabilities were found in the RabbitMQ Publisher plugin for Jenkins. If your continuous integration pipelines rely on RabbitMQ, you'll want to make sure your environment is hardened against the 1.1.9 version's vulnerabilities.

Plugin configurations

Before you connect the plugin to the broker, Jenkins users are required to provide values for the following fields:

  • name
  • host
  • port
  • username
  • password

The name labels the designated configuration on the build step. Default values are assigned to the host and port. You must create a unique username, and the password is masked with asterisks.

After you complete these tasks, press the “Connection Test” button to ensure the values are properly used.

After a successful test, you can configure the build step.

Message routing

The RabbitMQ Publisher doesn't publish directly to a queue on RabbitMQ. Instead, a message is published to an exchange on the RabbitMQ server, whether that server runs on Windows, Linux or Ubuntu. The exchange decides how the message should be routed to queues.

The data can contain environment variables and build parameters. Once properly checked, the data can then be converted to JSON format.
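For context, here is a minimal sketch of what publishing to an exchange looks like with the standard RabbitMQ Java client. The host, credentials, exchange name and JSON payload are all hypothetical stand-ins, not values taken from the plugin:

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class BuildEventPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");   // hypothetical broker host
        factory.setUsername("jenkins"); // hypothetical credentials
        factory.setPassword("secret");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            // Messages go to an exchange, not straight to a queue; the
            // exchange decides which bound queues receive the message.
            channel.exchangeDeclare("jenkins.builds", "topic", true);
            String json = "{\"job\":\"my-pipeline\",\"status\":\"SUCCESS\"}";
            channel.basicPublish("jenkins.builds", "build.success", null,
                    json.getBytes(StandardCharsets.UTF_8));
        }
    }
}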

Plugin vulnerabilities

The first vulnerability was that passwords weren’t encrypted. They were stored in plain text in the plugin’s global configuration file on the Jenkins Master.

An attacker with low or no privileges could access the configuration file and view the passwords, and could also use an exposed password to change the values for the default host and port. New plugin versions encrypt all passwords before they are stored in a configuration file.

The second vulnerability was that the plugin's missing permission check allowed any user to connect to RabbitMQ. For example, any user with Overall/Read access could cause Jenkins to initiate a RabbitMQ connection to an attacker-specified host and port with an attacker-specified username and password.

The form validation method also did not require POST requests, which resulted in a cross-site request forgery (CSRF) vulnerability. Fortunately, new versions include fixes. If you haven't updated your Jenkins plugins and the older publisher is still active, the time to update is now.
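To illustrate the general shape of such a fix (a hypothetical sketch of my own, not the plugin's actual code), a Jenkins form validation method can be protected by requiring POST and checking a permission before any outbound connection is attempted:

import hudson.util.FormValidation;
import jenkins.model.Jenkins;
import org.kohsuke.stapler.QueryParameter;
import org.kohsuke.stapler.interceptor.RequirePOST;

public class ConnectionTestExample {
    // @RequirePOST rejects GET requests, closing the CSRF hole, and the
    // explicit permission check stops low-privilege users from pointing
    // Jenkins at an attacker-specified host and port.
    @RequirePOST
    public FormValidation doTestConnection(@QueryParameter String host,
                                           @QueryParameter String port) {
        Jenkins.get().checkPermission(Jenkins.ADMINISTER);
        // ... attempt the RabbitMQ connection with the supplied values ...
        return FormValidation.ok("Connection successful");
    }
}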


June 25, 2019  10:08 PM

Hibernate vs JPA: What’s the difference between these database ORM APIs?

Cameron McKenzie Profile: cameronmcnz

During my recent update to TheServerSide’s JDBC definition, I researched commonly queried terms related to the database API and was surprised by just how many people are still confused about the difference between JPA and Hibernate.

According to my research, the term “What is JDBC?” is queried 2,900 times per month. The term “Hibernate vs. JPA” gets queried 3,600 times. So, there is clearly a great deal of confusion between these two very different, yet related, topics.

I’m taking this opportunity to hopefully clarify, once and for all, the similarities and differences between the Java Persistence API and Hibernate.

What is the difference between JPA and Hibernate?

The main difference between JPA and Hibernate is the fact that JPA is a specification. Hibernate is Red Hat’s implementation of the JPA spec.

There is only one JPA specification. The JPA specification is collaboratively developed through the Java Community Process (JCP), and updates are released as Java Specification Requests (JSRs). If the community agrees upon all of the changes proposed within a JSR, a new version of the API is released.

When JPA 2.0, JSR #317, was released in 2009, JCP committee members who voted for the specification's approval included Oracle, IBM, Red Hat, VMware and Sun Microsystems. As you can see, a number of big names in the software industry participate in the evolution of the API.

[Figure: The Java Persistence API's final ballot approval]

The difference between a specification and an implementation

There is only one JPA specification. But, there are many different implementations.

Various projects, including DataNucleus, TopLink, EclipseLink, OpenJPA and Hibernate, provide an implementation of the JPA specification. These projects, and the vendors behind them, compete by trying to provide implementations that are faster, more efficient, easier to deploy, integrate with more external systems and potentially have less restrictive licenses than the others. Hibernate is simply one of many implementations of the JPA specification, albeit the one with which Java developers tend to be the most familiar.

The JPA or Hibernate question

The fact of the matter is, the JPA vs. Hibernate question isn't a great one, because there really isn't any overlap between the two concepts to compare. You can't compare JPA and Hibernate in terms of performance, scalability or reliability, because the two don't compete on those axes. JPA is the spec. Hibernate is an implementation.

No organization needs to choose between Hibernate and JPA. An organization either chooses to use JPA or not. And if an organization does choose to use the Java Persistence API to interact with their relational database systems, they can choose between the various implementations, and one of the most popular ones is the JBoss Hibernate project.

Why all the Hibernate vs. JPA confusion?

Given the fact that JPA and Hibernate fulfill two very distinct and different roles, the question arises as to why there is so much confusion when it comes to these two terms. From what I can tell, the confusion traces back to when the JPA specification was originally released.

Prior to the initial release of JPA 1.0 in 2006, there were a number of vendors competing in the object-relational mapping (ORM) tools space, all of whom had very similar APIs that accomplished many of the same objectives. But none of those projects had compatible and interchangeable code. The goal of JPA was to standardize how Java applications performed ORM. With JPA 1.0, all of the competing implementations were unified, because they all now implemented a common, standard API.

However, because of Hibernate's popularity, many people continued to use the term Hibernate when they really meant JPA. Hibernate became an eponym for JPA, just as Kleenex is an eponym for facial tissue. Even today, when developers and architects talk about Hibernate, they're often really referring to the JPA spec.

[Figure: A JavaBean decorated with JPA annotations]
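For example, a minimal, hypothetical JavaBean decorated with standard JPA annotations looks like this:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "player")
public class Player {
    @Id
    @GeneratedValue
    private Long id;     // primary key generated by the persistence provider

    private String name; // mapped to a column of the same name

    // getters and setters omitted for brevity
}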

Choosing between JPA and Hibernate

I mentioned earlier that nobody has to choose between JPA and Hibernate, because all of the functionality provided by Hibernate can simply be accessed through the JPA API. However, that hasn’t always been the case. There was actually a time, during the early days of JPA releases, in which the choice between JPA and Hibernate was a legitimate decision organizations had to make.

The initial JPA release was very basic and didn't include many of the advanced features available in Hibernate at the time, including Hibernate's very powerful Criteria API. When JPA was first released, many organizations used both JPA and Hibernate together. Developers would call upon proprietary Hibernate APIs, such as the Hibernate Session, within their code, while at the same time they would decorate their JavaBeans and POJOs with JPA-based annotations to simplify the mapping between the Java code and the relational database in use. In doing so, organizations took advantage of useful features in the standard API and simultaneously had access to various Hibernate functions that weren't yet standardized.

Advantages of Hibernate vs JPA

Even today, it’s possible for there to be advanced mapping features baked into the Hibernate framework that aren’t yet available through the JPA specification. Because JPA is guided by the JCP and JSR process, it’s often a slow and methodical process to add new features. However, since the JBoss team that manages the Hibernate project isn’t bound by these sorts of restrictions, they can make features available much faster through their proprietary APIs. Some important features that were implemented by Hibernate long before the JPA specification caught up include:

  • Java 8 Date and Time support
  • SQL fragment mapping
  • Immutable entity types
  • Entity filters
  • SQL fragment matching
  • A manual flush mode
  • Second level cache queries
  • Soft deletes

But despite the fact that Hibernate is often quicker on the draw when it comes to introducing new and advanced features, the JPA 2.0 release almost closed the gap between the two. It would be difficult for a software developer to justify building applications against the proprietary API when the JPA specification almost always provides equivalent functionality.

And if one of those advanced Hibernate features is required, you can always write code that bypasses JPA and calls the Hibernate code directly, which completely eliminates the need to ever choose sides in the JPA and Hibernate debate.
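As a minimal sketch of that escape hatch, the standard JPA EntityManager can unwrap the native Hibernate Session when a Hibernate-only feature is needed (the surrounding persistence setup is assumed):

import javax.persistence.EntityManager;
import org.hibernate.Session;

public class NativeHibernateAccess {
    // Unwrap the vendor API from a standard JPA EntityManager when a
    // Hibernate-only feature, such as a manual flush mode, is required.
    static Session hibernateSession(EntityManager em) {
        return em.unwrap(Session.class);
    }
}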


June 25, 2019  6:18 PM

Perform a Kubernetes security hardening before you use Jenkins X

JudithMyerson Profile: JudithMyerson

In March 2019, the Linux Foundation created the Continuous Delivery Foundation as a vendor-neutral means for developers to track CI/CD open source projects. At the same time, the Continuous Delivery Foundation debuted Jenkins X, an open source CI/CD tool to automate Kubernetes and manage the integration and delivery of containers in cloud applications.

Kubernetes uses multiple software tools in CI/CD, including:

  • Google open source tools Tekton and Spinnaker
  • Google container tools Skaffold and Kaniko
  • Open package manager Helm

However, before a developer uses Jenkins X on Kubernetes, they should be aware of multiple security vulnerabilities that can create issues in a deployment. A developer should perform a Kubernetes security hardening before they run Jenkins X to get the most secure and up-to-date version of the platform.

Let’s take a look at some software tools and then Kubernetes vulnerabilities to be aware of with Jenkins X.

Google open source tools

Tekton is a shared set of open source components that help build CI/CD systems. It provides deployment to Kubernetes, along with multi-cloud environments, VMs, bare metal and mobile. Tekton takes advantage of Kubernetes and shifts software development there to modernize the CD control plane.

Spinnaker was originally created by Netflix, and is currently led by both Netflix and Google. It is an open source, multi-cloud continuous delivery platform that provides support for cloud providers like Google Kubernetes Engine, Azure Kubernetes Service, Amazon EC2, OpenStack and Oracle Cloud Infrastructure.

Google container tools

Skaffold is a command-line interface tool for continuous development of Kubernetes containers. A developer can iterate on source code locally and then choose to deploy it to local or remote Kubernetes clusters. Skaffold provides workflows to help DevOps teams build, push and deploy the application. To make workflow tasks easier, Skaffold supports the open package manager Helm.

Kaniko is also a Google container tool. A user starts with a standard Kubernetes cluster, via the Google Kubernetes Engine, to build and push container images. Note that you must create a Kubernetes secret to authenticate to the Google Container Registry. The tool takes these three arguments in a pod spec to build a container image:

  • a Dockerfile, which is a text file that defines a Docker image (note that the Docker daemon isn't involved in this instance);
  • a build context, which is retrieved from a Google Cloud Storage bucket; and
  • a registry, where the final image can be pushed.

Open package manager

Helm is a package manager for Kubernetes applications and is maintained by the Cloud Native Computing Foundation in collaboration with Google, Microsoft and Bitnami. Helm comes in two parts: a client (helm) that runs outside the cluster and a server (Tiller) that runs inside the cluster and manages application releases from within. Tiller also manages chart installations.

Kubernetes security hardening

Before you jump into Jenkins X, you should perform a Kubernetes security hardening so you can deploy, maintain and monitor Kubernetes clusters without issues. The National Institute of Standards and Technology's National Vulnerability Database website tracks Kubernetes vulnerabilities.

The following vulnerabilities should be corrected in your Kubernetes security hardening before you start with Jenkins X.

  • CVE-2019-9946: A firewall misconfiguration was found in the Cloud Native Computing Foundation Container Networking Interface (CNI) that’s used for Kubernetes network plugins. This vulnerability would allow an attacker, without any privileges, to access the firewall and modify CNI port rules. This vulnerability comes with a high Common Vulnerability Scoring System (CVSS) 3.0 rating.
    • A developer can fix this vulnerability by upgrading to CNI 0.7.5 and Kubernetes 1.11.9, 1.12.7, 1.13.5 or 1.14.0. The administrator should also update network policies, firewall configurations and access controls.
  • CVE-2019-1002100: A crafted patch vulnerability was discovered in the Kubernetes API server (Red Hat). This vulnerability would allow an attacker with low privileges to send a specially crafted "json-patch" that repeatedly consumes resources. When excessive consumption exhausts all resources, the API server is open to a denial of service attack. It has a medium CVSS 3.0 rating.
  • CVE-2019-5736: A root access vulnerability was found in the core runC container code that could let an attacker gain root access to the host operating system. For example, an attacker could use a new container with an attacker-controlled image or an existing container with attacker write access to execute a command as root. Also, an attacker has the ability to deny some availability to other users. It has a high CVSS 3.0 rating.
  • CVE-2018-18264: An authentication bypass vulnerability was located in earlier versions of the Kubernetes Dashboard. An attacker with no or low privileges can use the Dashboard's service account to read secrets within the cluster. No user interaction is required to exploit the vulnerability over the network, and all information in the service account is exposed. However, the vulnerability doesn't result in a denial of service attack. It has a medium CVSS 3.0 rating.

A developer should monitor security vulnerabilities at all times to ensure a secure deployment. A Kubernetes security hardening is one step toward a successful Jenkins X deployment.


June 13, 2019  3:35 PM

How to troubleshoot a JVM OutOfMemoryError problem

RamLakshmanann Profile: RamLakshmanann

There aren’t any magical tools that will fix an OutOfMemoryError for you, but there are some options available that will help automate your ability to troubleshoot and identify the root cause.

Follow these three steps to deal with this JVM memory error and get on the way to recovery:

  1. Capture a JVM heap dump
  2. Restart the application
  3. Diagnose the problem

1. Capture the heap dump

A heap dump is a snapshot of what's in your Java program's memory at a given point in time. It contains details about the objects present in memory, the actual data within those objects, the references those objects maintain to other objects and other information. A heap dump is a vital step toward fixing an OutOfMemoryError, but heap dumps do present some challenges, as their contents can be difficult to read and decipher.

In an optimal situation, you want to capture a heap dump at the moment of, or just prior to, an OutOfMemoryError to diagnose the cause, but this isn’t exactly easy. However, you can automate the heap dump process. Tell the JVM to create a heap dump by adding the following flags to the JVM’s startup parameters:

-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/crashes/my-heap-dump.hprof
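
To sanity-check these settings before a real incident, you can run a throwaway program that exhausts the heap on purpose and confirm that a .hprof file appears at the configured path. Here is a minimal sketch (the class name is made up for illustration):

import java.util.ArrayList;
import java.util.List;

// Throwaway test: allocate 1 MB blocks until the heap fills up, so the JVM
// throws an OutOfMemoryError and, with the flags above, writes a heap dump
// to /crashes/my-heap-dump.hprof.
public class OomeTest {
    public static void main(String[] args) {
        List<byte[]> hoard = new ArrayList<>();
        while (true) {
            hoard.add(new byte[1024 * 1024]);
        }
    }
}

Run it with a small heap, such as -Xmx64m, so it fails quickly.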

2. Restart the troublesome application

Most of the time, an OutOfMemoryError won’t crash the application, but it could put the application in an unstable state. A restart would be a prudent move in this situation, since requests served from an unstable application instance will inevitably lead to an erroneous result.

And, you can automate this restart process as well. Simply write a “restart-myapp.sh” script that bounces your application, then provide a command line argument to the JVM that triggers it to run the script when the exception occurs:

-XX:OnOutOfMemoryError=/scripts/restart-myapp.sh

When you pass this argument, the JVM will invoke the “/scripts/restart-myapp.sh” script whenever an OutOfMemoryError is thrown. Thus, your application will be automatically restarted right after it experiences the error.
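
The flags compose, so a single set of startup options can both capture the dump and bounce the application. A hypothetical full invocation might look like this (my-app.jar is a placeholder):

java -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/crashes/my-heap-dump.hprof -XX:OnOutOfMemoryError=/scripts/restart-myapp.sh -jar my-app.jar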

3. Diagnose the problem

Now that you have captured the heap dump (which is needed to troubleshoot the problem) and restarted the application (to reduce the outage impact), the next step is troubleshooting.

As mentioned above, understanding the contents of a heap dump can be tricky, but there are heap analyzer tools that help simplify the process. Some options include Eclipse Memory Analyzer (MAT), Oracle’s jhat or HeapHero.

These tools generate a memory analysis report that highlights the objects that consume the most memory and, hopefully, helps identify the objects that create a memory leak.
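
As a point of reference, the kind of culprit these reports tend to surface is a long-lived collection that only ever grows. Here is a hypothetical sketch of the pattern:

import java.util.HashMap;
import java.util.Map;

// A classic leak pattern that heap analyzers flag: a static map that is only
// ever added to, so every cached value stays reachable and is never garbage
// collected. In a report, the map shows up with an ever-growing retained heap.
public class ResultCache {
    private static final Map<String, byte[]> CACHE = new HashMap<>();

    public static byte[] lookup(String key) {
        // No eviction, so the cache grows for the life of the JVM.
        return CACHE.computeIfAbsent(key, k -> new byte[4096]);
    }
}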

It can be extremely frustrating when your applications encounter a runtime error. You’ll need patience, a heap dump and the proper analysis tools to fix an OutOfMemoryError and other pesky exceptions of a similar ilk.


June 10, 2019  3:31 PM

How to deal with a remote code execution vulnerability

JudithMyerson Profile: JudithMyerson
Uncategorized

Visual Studio Code is a free source code editor developed by Microsoft for Windows, macOS and Linux. On February 12, 2019, Symantec Security Center reported a serious remote code execution vulnerability (CVE-2019-0728) in Visual Studio Code. This vulnerability ties into another one from June 2018, when an untrusted search path vulnerability (CVE-2018-0597) was reported.

In April 2019, Visual Studio Code was made available as a snap that runs across more than 40 Linux distributions. The editor comes with Git built in to help developers manage version control in DevOps when the source code is ready for deployment to a production server. The source code discussed here is a server-side script that can only be compiled on the server.

Remote code execution vulnerability severity   

Both remote code execution vulnerabilities can create a total loss of confidentiality, integrity and availability. Each comes with a Common Vulnerability Scoring System 3.0 rating of 7.8 on a 0-10 scale.

The first vulnerability could allow an unauthorized attacker to execute arbitrary code in the context of the current user. A successful attack requires the user to take some action before the vulnerability can be exploited, such as installing a malicious extension. Failed exploit attempts will likely result in denial-of-service conditions.

The second vulnerability could allow the attacker to gain privileges via a Trojan DLL in an unspecified directory.

A programming-savvy attacker could target the information exposed by SecurityHeaders, a service that sends a report on HTTP security headers and server information back to a local browser. An attacker could exploit the default value that broadcasts the IIS version in use, such as:

Server:  Microsoft-IIS/8.0

where Server is the HTTP server header and Microsoft-IIS/8.0 is the default value.

The attacker could also exploit the preloading of the HTTP Strict Transport Security (HSTS) security header. When the preload directive is added to the security header, all subdomains are included for a specified period of time. The main risk is that this period could be set as long as a year, and a developer wouldn’t be able to shorten the setting to, say, 90 days to fix subdomain problems; an update may not propagate until the original max-age directive expires.
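
For reference, a Strict-Transport-Security response header that opts into preloading looks like the following; the directive values shown are typical, not a recommendation:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload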

Remote code execution vulnerability risk mitigation steps

Here are some recommendations on how to mitigate these remote code execution vulnerabilities.

Automatic downloads: Enable automatic downloads of Visual Studio Code updates as the default setting, so patches are applied as soon as they’re released.

Access rights: Grant minimal access rights to individuals and team members, such as read-only or read-and-write. Avoid granting full access rights to anyone except the lead administrator.

Network traffic: Run network intrusion detection systems (IDSes) to monitor network traffic for malicious activity that may occur after an attacker exploits the Visual Studio Code vulnerabilities. Ensure the IDSes are free of vulnerabilities as well.

Analysis report: After you implement the HTTP response headers as mentioned above, follow these three steps to receive an analysis report. First, transfer the latest version of the script from a local machine to the server. Second, enter the website’s address in a local browser so the script returns its HTTP response headers. Third, head over to SecurityHeaders or another such site to analyze the report sent back to the browser. An overall grade is included for all security headers, but note that the report discloses server information by default and doesn’t warn about the risks of using the preload list in an HTTP security header.

Server information: Avoid broadcasting default server information. IIS 8.0 allows the developer to add a new value in a configuration file such as web.config before the script deploys to the server. The report from SecurityHeaders should then show the new information like this:

Server: Hello World!

To suppress the HTTP server header from being sent to a local browser entirely, the developer should use IIS 10, which ships with Windows 10, Windows Server 2016 and other options. You only need one line of configuration to suppress the header: the removeServerHeader attribute, set to true.

<security>
  <requestFiltering removeServerHeader="true" />
</security>

The compiled language used to run the script is one of the VB or C# variants. Non-Windows platforms may not have the capability to remove or suppress the HTTP server header.

Preloading list: Exclude the preload directive from the HTTP Strict Transport Security header to avoid preloading a list of all subdomains. The max-age directive below is expressed in seconds; 31536000 seconds is one year.

<customHeaders>
  <add name="X-Xss-Protection" value="1;mode=block" />
  <add name="X-Frame-Options" value="sameorigin" />
  <add name="Strict-Transport-Security" value="max-age=31536000" />
</customHeaders>

If a preload list is used, start with a lower maximum-age expiry time, such as 30 days (max-age=2592000), to make sure all the subdomains have HTTPS support. It’s better to wait 30 days than a year for the time frame to expire when a problem needs fixing.

Alternatively, use an HTTPS front end for an HTTP-only server — which should be done before you secure the back-end server.


May 28, 2019  9:44 PM

Why is programming so hard to master?

BobReselman BobReselman Profile: BobReselman

Why is programming so hard? Because it’s no longer about programming.

Allow me to elaborate.

I wrote my first line of professional code back in 1987. It was an application written in BASIC that did lease calculations for computer rentals. (Yes, back then computers were so expensive it made sense to lease them by the month. Today, we practically give them away.) The program worked like this: you selected a computer from a list and provided the number of months for the lease term, and the program calculated the monthly payments. The program also had a feature that allowed you to print a hard copy of the results.

In terms of the work I had to do, 90% of my effort was the actual programming. The remaining 10% involved creating the executable file, copying it onto floppy disk and then installing the code on the computers of the other people in my office.

It took me about a week to write the program. Admittedly, it wasn’t exactly rocket-science programming and, when I look back at it, it wasn’t very good programming either. But it worked and I got paid. Win-win, so to speak.

Fast forward 30 years to today. Last week I wrote a program for a class I teach. The program is called WiseSayings. It’s a web app that responds upon request with a random saying from a list of wise sayings.

Connection to WiseSayings

It took me about 30 minutes to write the code, including application data retrieval and configuration. Yet, just programming the app wasn’t enough. Here’s just the beginning of why programming is so hard. Containers are very popular these days, so I had to create the Dockerfile that allows users to run WiseSayings in a Docker container.

But, there was more. Not only did I need to create the Dockerfile, but I also had to post the container image on DockerHub to make it easier for others to use. This means an image build, followed by a push after I logged into my DockerHub account.
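
In practice, that step amounts to a few CLI calls; the repository name below is made up for illustration:

docker build -t myrepo/wisesayings .
docker login
docker push myrepo/wisesayings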

So far, so good, right? Wrong!

As an ambitious coder, I imagined that millions of people would want to use my app. So, I needed to make it easy to scale and ensure that WiseSayings could run under Kubernetes. I wrote a deployment.yaml to create the Pod and ReplicaSet so my containerized WiseSayings app would run in the cloud and, at the least, a service.yaml to provide web access to the logic in the pods from outside the Kubernetes cluster.

If I want to provide security and routing, I’ll need to create a Kubernetes secret or two, a TLS certificate and an ingress.yaml to manage it all. I could go on. We haven’t even talked about web page creation to render my application’s response, nor have we talked about multiple-language support for the app. Who knows, maybe some of my anticipated millions of users will be in China.

How things have changed

My main point on why programming is so hard is this: 30 years ago, all I had to know to create a program was the programming language BASIC and how to structure code into subroutines, which is what we called functions and methods back then. Printing was a bit harder because printer drivers weren’t part of the operating system and your programs needed to know a whole lot about the printers they used. But, that was it. Most of my work revolved around how to express the specific application logic in code.

Today, to create my little WiseSayings app, not only do I need to know a programming language — in this case, JavaScript that runs under Node.js — but I also need to have a basic understanding of how the internet works, as well as how to fiddle with things such as status codes and all the other name-value pairs I can stuff into an HTTP header. Then, I need to know Docker and the basics of Kubernetes. I’d also like to add that there isn’t much in the basics of Kubernetes that’s actually basic. When you work with any Kubernetes API resource, it takes time to really master, even for something as fundamental as a pod.

Now you can really see why programming is so hard.

It still takes about half an hour to write the actual code and get it up on GitHub, but I now add hours of work to make my program available to my users. My old means of distribution involved copying the executable file onto a floppy disk, walking over to a user and copying that file from the disk onto their desktop computer. What used to take minutes for a local code distribution has transitioned into the bulk of my “programming” activity, regardless of whether the code goes to a user on the other side of the office or halfway around the world.

Now, don’t get me wrong. Under no circumstance do I want to go back to the days of BASIC and floppy disks. The programs we make today go way beyond anything I could have imagined 30 years ago when I did BASIC programming on an IBM AT running DOS 3.3. I think it’s beyond cool that we’ve made it so you can point your cellphone camera at a newspaper and have the device read the text out loud to you in real time. I like watching the Merchant of Venice any time I want on YouTube with scene summaries available on my iPad. (Yes, sometimes I find it hard to follow the language of the Bard.)

These are amazing achievements, but they come at a price. While commercial software has always required the coordinated efforts of many, these days, even the simple stuff is hard and the implications are profound.

In the old days, knowledge of a programming language and a rudimentary understanding of software design were enough to get you on the playing field. Today, you need to know networking, deployment tools, automated provisioning, testing in its variety of forms (from unit testing to performance testing on a distributed scale) and the details of a multitude of development frameworks.

To use a basketball analogy, in the past all you needed to play was a ball, a hoop and the ability to dribble, pass and shoot. Today you also need to know all of that, plus how to sell tickets and run the concession stands. It’s a lot of work.

Is it worth it? Of course. But, the added complexity makes the profession a lot harder to get into. Maybe this is a good thing. Medicine, engineering and nuclear physics have always been “hard to do” professions. Work in those fields has extraordinary benefits when done well and grave consequences when done poorly. Software development is now in that league.

Today, software runs more of the world. Soon it will run most of the world. Maybe it’s time to set a high bar and make it as hard as possible to play. Yet, it’s sad to think that when the next version of me comes along, that person will have to do a lot more than write a simple program in BASIC to get started. I was fortunate to have the opportunity to play and in doing so, software changed my life. Others might not be so lucky.

