The road to mastering Java is a long and thorny one, but over my years as a coder, I’ve picked up a hint or two. How to become a good Java programmer isn’t a question with a simple answer. You don’t need any formal training. You don’t need to sit in a classroom and earn a diploma. And you can certainly become a good Java programmer without a degree that attests to that fact.
No, all you need is some focus, a good book or two, the willingness to take advantage of the wealth of online resources that are available and the dedication to put in enough time to learn the craft.
However, there are pitfalls for those who are self-taught and try to learn on the fly without a degree or any formal training. The journey to become a Java pro is a long one, but if you avoid common mistakes, the whole process becomes more productive. I’ve been teaching people Java for quite a few years now, and the same few mistakes continue to pop up over and over again.
The top mistakes beginner students make
Here are the most common mistakes I see junior developers make as they begin their journey on how to become a good Java programmer:
- You devour too much theory. Our fear of mistakes plays a bad joke on us. We read and read and read. When you read, you don’t make mistakes. As a result, you feel safe. Stop reading and try coding. I’d say the same thing about video lectures. Practice is key, and your future job title won’t be “book reader” or “YouTube watcher,” will it?
- You try to learn everything in one day. At the very beginning, you might get very enthusiastic. Wow! Fascinating! It works! Look, ma, I’m coding! And you forge on, trying to grasp everything at once. By the end of the day, even the thought of Java makes you sick. Don’t do that to yourself. This is a marathon, not a sprint, so take it step by step.
- You fret over mistakes. Remember when you were a child and learned math? Unfortunately, 2+3 didn’t equal 7 or any other random number you had in mind, and you were confused and sad. Same story with Java code. Sometimes you get the wrong solution. Sometimes you get it wrong over and over again. So what? Remember what happened with your math education? You can count now, and you will be able to code. Just give it time and don’t give up.
- You are afraid to experiment. Almost every one of us has been through this at school: there is only one right answer and only one way to get that answer. In Java programming and in life in general, this approach doesn’t work. You have to try various options and see what fits best.
- You burn yourself out. We all get tired from time to time. And if the progress is slow you might hear that nagging voice in the back of your head tell you to give up on learning Java. You might think you need to know math better or read up a bit more on algorithms or whatever else. Stop. Read my advice on how to avoid these mistakes.
How to become a good Java programmer without a diploma
Two of the nice things about scholarly courses are their structure and the ability to gauge your progress through regular tests and deliverables. But, that type of structure and those types of checkpoints aren’t available when you try to become a good Java programmer without a degree. If you choose to go the non-degree route, keep the following insights in mind:
- Schedule your learning and stay disciplined: Minimize distractions during the hours of study and devote your full attention to Java. No matter what your actual attention span is, give it all to Java.
- Learn by coding: Remember what I told you about “safe” book reading and video watching? Move out of your comfort zone and practice coding. Easier said than done? Just try it out and see for yourself. I list some useful tools for practicing Java below.
- Write code out by hand: Typing is all well and good, and I’m not against it, but there’s a mechanical memory that activates when you write by hand and helps you remember things even better. Besides, during job interviews, some companies do check if you can code on paper. The real pros can.
- Make your work visible: There are code repositories where you can showcase your work. It is also a good way to ask for feedback from more experienced developers. Peer-to-peer exchange of information is also a great way to learn some applicable practical things about Java. Other coders will help you out when they can, and in time, you will be able to help out beginners as well! And don’t be afraid of making mistakes. Remember, a master has failed more times than a beginner has tried.
- Keep on coding. Just. Keep. On. Coding. Start small, and slowly expand the scope of your projects. Solve a basic task. Then a series of tasks. Then make a simple game. Then a whole app. Just remember, when in doubt: code your way out.
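A “basic task” can be tiny. As an illustration of the kind of beginner exercise I mean, here is a complete, self-contained Java program (the task itself is my own example) that sums the digits of a number:

```java
public class DigitSum {

    // Sum the decimal digits of a non-negative number: 1234 -> 1 + 2 + 3 + 4 = 10
    public static int sumOfDigits(int n) {
        int sum = 0;
        while (n > 0) {
            sum += n % 10; // take the last digit
            n /= 10;       // drop the last digit
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfDigits(1234)); // prints 10
    }
}
```

Write it, break it, change it, write it again by hand. That loop of small, complete programs is what “just keep on coding” looks like in practice.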
Good Java programmer best practices
According to Malcolm Gladwell’s book Outliers, it takes 10,000 hours of practice to become an expert in a given field. But how does someone new to the Java language practice without a college course or on-the-job experience? Fortunately, there are many options for how to become a good Java programmer without pursuing a degree.
There are many open-access online courses that offer a lot of practical tasks. You can also join a Java community, which is full of practical knowledge. If you feel uneasy at the thought of meeting teachers in the classroom (even online ones), try to learn through games. Below are several examples of online educational projects that I recommend.
- CodeGym is an interactive, practice-oriented Java course. This one’s my personal favorite because of its gamification. You get a virtual mentor who reviews your code, gives feedback and helps you through the learning/gaming process. The course comprises 1,200 practical tasks, and you start coding in a real IDE. CodeGym has IntelliJ IDEA integration, so you can dive right into the reality of programming. In the event you’re unsure of a solution, there’s a whole Java community to help and support you.
- CodinGame is a great training platform for programmers. Gamification is the main learning tool for this project, so it doesn’t feel like boring classroom stuff. Instead, you gradually become a Java dev hero with the superpower of saving the world with your code.
- Codewars is another game-like educational project, but this one is challenge-based. Choose Java, unite with your team members and start learning the code by solving real tasks. You go from level to level, earn rankings, compare your code with other solutions, etc. The more tasks you complete, the better coder you become and the higher your ranking jumps.
- GeeksforGeeks is a huge portal for computer science enthusiasts. There are courses on Java and other programming languages, question-based knowledge sharing, a community of like-minded geeks and lots more. You can go through quizzes to check your level, ask for help with your code, etc. There’s also a separate section on algorithms, which is quite handy if you have gaps in this area.
With Internet access and a good dose of self-motivation, anyone can become a good Java programmer without a degree. The Java learning path isn’t that dark and scary, but just avoid the monsters of fear and procrastination.
Regular coding practice will make you feel more and more confident. Try one of the projects I recommended. I’m sure one of them will perfectly fit your needs. Maybe even all of them? Don’t forget to code by hand from time to time either. It helps you memorize Java better and stands out at a job interview.
There’s nothing worse than installing your favorite Java-based application — such as Minecraft, Maven, Jenkins or Apache Pig — only to run into a JAVA_HOME is set to an invalid directory or a JAVA_HOME is not defined correctly error as soon as you boot up the program.
Well, there’s no need to fret. Here’s how to fix the most common JAVA_HOME errors.
How to fix JAVA_HOME not found errors
It’s worth noting that there aren’t standardized JAVA_HOME error messages that people will encounter. There are many different ways that a given JAVA_HOME error might be logged.
For example, one of the most common JAVA_HOME configuration problems arises from the fact that the environment variable has never actually been set up. Such a scenario tends to trigger the following error messages:
- Error: JAVA_HOME not found in your environment
- Error: JAVA_HOME not set
- Error: JAVA_HOME is not set currently
- Error: JAVA_HOME is not set
- Error: Java installation exists but JAVA_HOME has not been set
- Error: JAVA_HOME cannot be determined from the registry
How do you fix the JAVA_HOME not found problem?
You fix this in the Windows environment variable editor, where you can add a new system variable. If you know your way around the Windows operating system, you should be able to add the JAVA_HOME environment variable to your configuration and have it point to the installation root of your JDK within minutes. In Windows 10, you do this through the Environment Variables dialog in the system settings.
As mentioned above, the JAVA_HOME variable must point to the installation root of a JDK, which means a JDK must actually be installed. If one isn’t, then you better hop to it and get that done.
The JAVA_HOME is set to an invalid directory fix
The next most common JAVA_HOME error message is JAVA_HOME is set to an invalid directory. The error message is delightfully helpful, because it tells you in no uncertain terms that the environment variable does in fact exist, and that it’s not pointing to the right place. All you need to do to fix this error is edit the JAVA_HOME variable and point it to the correct directory.
The JAVA_HOME environment variable must point to the root of the installation folder of a JDK. It cannot point to a sub-directory of the JDK, and it cannot point to a parent directory that contains the JDK. It must point directly at the JDK installation directory itself. If you encounter the JAVA_HOME invalid directory error, make sure the name of the installation folder and the value of the variable match.
An easy way to see the actual value associated with the JAVA_HOME variable is to simply echo its value on the command line. In Windows, write:
> echo %JAVA_HOME%
C:\_JDK13.0
On an Ubuntu, Mac or Linux machine, the command uses a dollar sign instead of a percent sign:
$ echo $JAVA_HOME
/usr/lib/jvm/java-13-oracle
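You can also check the variable from inside Java itself. The short program below is a sketch of mine, not a standard tool; its actual output depends on what, if anything, your machine has configured:

```java
public class JavaHomeCheck {

    // Describe the state of a JAVA_HOME value; null means the variable was never set
    public static String describe(String javaHome) {
        if (javaHome == null || javaHome.trim().isEmpty()) {
            return "JAVA_HOME is not set";
        }
        return "JAVA_HOME is set to: " + javaHome;
    }

    public static void main(String[] args) {
        // System.getenv returns null when the variable is undefined
        System.out.println(describe(System.getenv("JAVA_HOME")));
    }
}
```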
Steer clear of the JDK \bin directory
One very common developer mistake that leads to the JAVA_HOME is set to an invalid directory error is pointing JAVA_HOME to the \bin sub-directory of the JDK installation. That’s the directory you use to configure the Windows PATH, but it is wrong, wrong, wrong when you set JAVA_HOME. If you point JAVA_HOME at the bin directory, you’ll need to fix that.
This misconfiguration also manifests itself with the following error messages:
- JAVA_HOME is set to an invalid directory
- Java installation exists but JAVA_HOME has been set incorrectly
- JAVA_HOME is not defined correctly
- JAVA_HOME does not point to the JDK
Other things that might trigger this error include spelling mistakes or case sensitivity errors. If the JAVA_HOME variable is set as java_home, JAVAHOME or Java_Home, a Unix, Linux or Ubuntu script will have a hard time finding it. The same thing goes for the value attached to the JAVA_HOME variable.
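A quick way to catch those near-miss names is to scan the environment for anything that looks like JAVA_HOME but isn’t spelled exactly right. The helper below is an illustrative sketch of that idea (the normalization rule is my own assumption, not an official check):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class NearMissFinder {

    // Find variable names that match JAVA_HOME when case and underscores are
    // ignored, but are not the exact, correctly spelled JAVA_HOME
    public static List<String> nearMisses(Iterable<String> names) {
        List<String> suspects = new ArrayList<>();
        for (String name : names) {
            String normalized = name.replace("_", "").toUpperCase();
            if (normalized.equals("JAVAHOME") && !name.equals("JAVA_HOME")) {
                suspects.add(name);
            }
        }
        return suspects;
    }

    public static void main(String[] args) {
        Map<String, String> env = System.getenv();
        System.out.println("Suspect variables: " + nearMisses(env.keySet()));
    }
}
```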
The JAVA_HOME does not point to the JDK error
One of the most frustrating JAVA_HOME errors is JAVA_HOME does not point to the JDK.
Here’s a little bit of background on this one.
When you download a JDK distribution, some vendors include a Java Runtime Environment (JRE) as well. And when the JAVA_HOME environment variable gets set, some people point it at the JRE installation folder and not the JDK installation folder. When this happens, we see errors such as:
- JAVA_HOME does not point to a JDK
- JAVA_HOME points to a JRE not a JDK
- JAVA_HOME must point to a JDK not a JRE
- JAVA_HOME points to a JRE
To fix this issue, see if you have both a JRE and JDK installed locally. If you do, ensure that the JAVA_HOME variable is not pointing at the JRE.
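One practical way to tell the two apart: a JDK’s bin directory ships the javac compiler, while a bare JRE’s does not. The check below sketches that heuristic (the file-name logic is my own simplification, not an official test):

```java
import java.util.Set;

public class JdkDetector {

    // Heuristic: a JDK installation contains the javac compiler in its bin
    // directory; a bare JRE does not.
    public static boolean looksLikeJdk(Set<String> binContents) {
        return binContents.contains("javac") || binContents.contains("javac.exe");
    }

    public static void main(String[] args) {
        System.out.println(looksLikeJdk(Set.of("java", "javac", "jar"))); // true
        System.out.println(looksLikeJdk(Set.of("java", "javaw.exe")));    // false
    }
}
```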
JAVA_HOME and PATH confusion
After you’ve downloaded and installed the JDK, another problem can still plague developers. If you already have programs that installed their own version of the JDK, those programs could have added a reference to that specific JDK in the Linux or Windows PATH setting. Some programs resolve Java through the PATH first and consult JAVA_HOME second. If another program has installed a JRE and put that JRE’s \bin directory on the PATH, your JAVA_HOME efforts may all be for naught.
However, you can address this issue. First, check the Ubuntu or Windows PATH variable and look to see if any other JRE or JDK directory has been added to it. You might be surprised to find out that IBM or Oracle has at some prior time performed an install without your knowledge. If that’s the case, remove the reference to it from the PATH, add your own JDK’s \bin directory in there, and restart any open command windows. Hopefully that will solve the issue.
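You can inspect the PATH programmatically too. This sketch splits the PATH on the platform’s separator and lists the entries that look Java-related; the substring match is a rough assumption of mine, not a precise rule, but the first match is the one a shell would find first:

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

public class PathInspector {

    // Return PATH entries that mention java, jdk or jre, in search order
    public static List<String> javaEntries(String path, String separator) {
        List<String> matches = new ArrayList<>();
        for (String entry : path.split(Pattern.quote(separator))) {
            String lower = entry.toLowerCase();
            if (lower.contains("java") || lower.contains("jdk") || lower.contains("jre")) {
                matches.add(entry);
            }
        }
        return matches;
    }

    public static void main(String[] args) {
        // File.pathSeparator is ";" on Windows and ":" elsewhere
        System.out.println(javaEntries(System.getenv("PATH"), File.pathSeparator));
    }
}
```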
Of course, there is never any end to the configurations or settings that can trigger JAVA_HOME errors. If you’ve found any creative solutions not mentioned here, please add your expert insights to the comments.
By the year 2025, Google predicts that the number of IoT and Smart Devices in operation will exceed that of non-IoT devices. Statista also predicts a similar growth pattern, in which the proliferation of IoT devices will be three times more than today’s usage.
Any way you slice it, the transformation to an IoT dominant world is going to cause a seismic shift in the way software is used, the way it’s made and the overall future of front-end software development. Soon enough, most computing activities will no longer revolve around the human-machine interaction. Rather, it will be about machine-machine interaction. And, of the human-machine interactions that remain, most will not involve a person that swipes a screen, clicks a mouse or types on a keyboard. Human-machine interaction will be conducted in other ways, some too scary to consider.
The days of GUI-centric development are closing. Yet, few people in mainstream software development seem to notice. It’s as if they’re the brick and mortar bookstores at the beginning of the Amazon age. As long as people kept walking through the door to make purchases, life was great. But, once the customers stopped coming, few were prepared for the consequences.
The same thing will happen to the software industry if we’re not careful.
And, unlike the demise of Big Box retailers — which took decades — the decline in the use of apps based on traditional GUI interactions might very well occur within a decade or less. Other means of interaction will prevail.
The shift to voice
In the not too distant future, the primary “front end” for human-machine interaction will be voice driven. Don’t believe me? Consider this:
My wife, who I consider to be an average user, no longer uses her phone’s keyboard to “write” SMS messages. She simply talks to the device. She uses WhatsApp to talk to her friends. She “asks” Alexa to play music. She still does most of her online shopping on Amazon, but I suspect once she learns how to use Alexa to buy stuff, her time spent on e-commerce websites will diminish.
She still has manual interaction with our television, which is really a computer with a big screen. But she uses the remote’s up/down/left/right buttons in conjunction with voice commands to find and view content. There’s no keyboard involved… ever.
Her phone connects to her car via Bluetooth. She makes phone calls via voice and controls call interactions from the steering wheel. If she needs directions to a location, she talks to the Map app in the phone which then responds with voice prompts.
On the flip side, each day I have a multitude of interactions with computers. And yet, those that require the use of a keyboard and mouse are confined mostly to my professional work coding and writing. The rest involves voice and touch.
In terms of my writing work, I find that I spend an increasing amount of time using my computer as a digital stenographer. My use of the voice typing feature of Google Docs and an online transcription service is growing. I too am becoming GUI-less.
There’s a good case to be made that for the near future, there will still be a good deal of commercial applications that require human-GUI interaction. Yet, as the number of IoT devices expand, more activity will instead be machine-machine and not require GUI whatsoever. All those driverless vehicles, warehouse robots, financial management applications and calls to Alexa or Siri will just push bits back and forth directly between IP addresses and ports somewhere in the cloud.
But, the good news is that the foreseeable future of creative coding is still very much in the domain of human activity. However, this too is changing.
More machines make more software than ever before, and most machine-generated code is made with existing models. Thus, the scope of creative programming by machines is limited. Nonetheless, it’s only a matter of time until AI matures to the point where it will be able to make software from scratch and the software that humans make will be about something else.
Sadly, few people in mainstream, commercial software development think about what that something else will be. Today, front end still means iOS, Android or whatever development framework is popular to make those nice GUI front ends. Few people can imagine any other type of front end for the future of software development. Even the application framework manufacturers are still focused on the GUI world.
When was the last time you heard a tech evangelist caution their constituency about the dangers ahead? That the world soon won’t need any more buttons to click or web pages to scroll?
That’s like asking horseshoe manufacturers to warn blacksmiths about the impact of that newfangled thing called an automobile. It’s just not in their best interest. But, it is in our best interest because the future of front-end software development in the post GUI world will provide amazing opportunities for those with foresight.
The amazing opportunity at hand
There’s a good deal of wisdom in the saying, “once one door shuts another door opens.” Even the most disruptive change provides immense opportunity if you pay attention. Think of it this way, Amazon is killing brick and mortar retailers but it’s been a boon for FedEx and UPS.
There is always an opportunity at hand for those with the creativity and vision to see it. Fortunately, neither creativity nor vision is in short supply among software developers. We’ve made something out of nothing since the first mainframe came along nearly seventy years ago. All we need to do now is be on the lookout for the next opportunity.
The question is, what will that next opportunity be? What will the new front end in human-machine interaction look like? If I were a gambling person, I’d put my money on the stuff we might think is too scary to consider today: implants.
Let me explain: I have a dental implant where a molar used to be. Right now that implant is nothing more than a benign prosthesis in my mouth.
But think about this: given the fact that computers continue to miniaturize, how far are we from a time when that implant will be converted into a voice sensitive computing device that interacts with another microscopic audio device injected beneath my ear? Sound farfetched? Not really.
Twenty years ago nobody could watch a movie on their cellphone. Today it’s the norm. As Moore’s Law reveals, technological progress accelerates at an exponential rate.
Regardless of whether the future of front-end software development is implants or something else, one thing is for certain: it won’t be anything like what we have today. Those who understand this and seize the opportunity will prosper. The others? Well, I’ll leave it up to you to imagine their outcome.
RabbitMQ is an open source message broker that exchanges asynchronous messages between publishers and consumers. The messages can be a human-readable JSON, a simple string or a list of values that can be converted into a JSON string.
In March of 2019, the Jenkins Security Advisory reported multiple vulnerabilities in the RabbitMQ Publisher plugin. If your continuous integration pipelines rely on RabbitMQ, you’ll want to make sure your environment is hardened against the 1.1.9 version’s vulnerabilities.
Before you connect the plugin to the broker, Jenkins users are required to provide values for the following fields:

- Name, which labels the designated configuration on the build step
- Host and Port, which are assigned default values
- Username, which you must create and which must be unique
- Password, which is masked with asterisks
After you complete these tasks, press the “Connection Test” button to ensure the values are properly used.
After a successful test, you can configure the build step with the following fields:
- RabbitMQ Name
- Exchange name
- Routing key
- Convert to JSON
RabbitMQ Publisher doesn’t publish messages directly to a queue. Instead, a message is published to an exchange on the RabbitMQ server — whether it runs on Windows, Linux or Ubuntu. The exchange decides how the message should be routed to queues.
Data can contain environment variables and build parameters. Once properly checked, data can then be converted to JSON format.
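The conversion step is conceptually simple: build parameters become key/value pairs in a JSON object. Here is a minimal hand-rolled sketch of that idea (the method and parameter names are my own; a real pipeline should use a proper JSON library, since this escaping only handles quotes and backslashes):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BuildParamsToJson {

    // Convert a map of build parameters into a flat JSON object string.
    // Minimal escaping only; use a JSON library for production code.
    public static String toJson(Map<String, String> params) {
        StringBuilder sb = new StringBuilder("{");
        boolean first = true;
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (!first) sb.append(",");
            sb.append("\"").append(escape(e.getKey())).append("\":\"")
              .append(escape(e.getValue())).append("\"");
            first = false;
        }
        return sb.append("}").toString();
    }

    private static String escape(String s) {
        return s.replace("\\", "\\\\").replace("\"", "\\\"");
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("BUILD_NUMBER", "42");
        params.put("BRANCH", "main");
        System.out.println(toJson(params)); // {"BUILD_NUMBER":"42","BRANCH":"main"}
    }
}
```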
The first vulnerability was that passwords weren’t encrypted. They were stored in plain text in the plugin’s global configuration file on the Jenkins Master.
An attacker with low or no privileges could access the configuration file and view the passwords, and could also use the exposed password to change the values for the default host and port. New plugin versions encrypt all passwords before storage in a configuration file.
The second vulnerability was that the plugin’s missing permission check allowed any user to connect to RabbitMQ. For example, users with Overall/Read access could cause Jenkins to initiate a RabbitMQ connection to an attacker-specified host and port with an attacker-specified username and password.
In addition, the plugin’s form validation method did not require POST requests, which resulted in a cross-site request forgery vulnerability. Fortunately, new versions include fixes. If you haven’t updated your Jenkins plugins and the older publisher is still active, the time to update is now.
During my recent update to TheServerSide’s JDBC definition, I researched commonly queried terms related to the database API and was surprised by just how many people are still confused about the difference between JPA and Hibernate.
According to my research, the term “What is JDBC?” is queried 2,900 times per month. The term “Hibernate vs. JPA” gets queried 3,600 times. So, there is clearly a great deal of confusion between these two very different, yet related, topics.
I’m taking this opportunity to hopefully clarify, once and for all, the similarities and differences between the Java Persistence API and Hibernate.
What is the difference between JPA and Hibernate?
The main difference between JPA and Hibernate is the fact that JPA is a specification. Hibernate is Red Hat’s implementation of the JPA spec.
There is only one JPA specification. The JPA specification is collaboratively developed through the Java Community Process (JCP), and updates are released as Java Specification Requests (JSRs). If the community agrees upon all of the changes proposed within a JSR, a new version of the API is released.
When JPA 2.0, JSR #317, was released in 2009, JCP committee members who voted for the specification’s approval included Oracle, IBM, Red Hat, VMware and Sun Microsystems. As you can see, a number of big names in the software industry participate in the evolution of the API.
The difference between a specification and an implementation
There is only one JPA specification. But, there are many different implementations.
Various projects, including DataNucleus, TopLink, EclipseLink, OpenJPA and Hibernate, provide an implementation of the JPA specification. These projects, and the vendors behind them, compete by trying to provide implementations that are faster, more efficient, easier to deploy, integrate with more external systems and potentially have less restrictive licenses than the others. Hibernate is simply one of many implementations of the JPA specification, albeit the one with which Java developers tend to be the most familiar.
The JPA or Hibernate question
The fact of the matter is, the JPA vs. Hibernate question isn’t a great one, because there really isn’t any intersectionality between the two concepts. You can’t really compare JPA and Hibernate in terms of performance, scalability or reliability because the two don’t really compete on those axes. JPA is the spec. Hibernate is an implementation.
No organization needs to choose between Hibernate and JPA. An organization either chooses to use JPA or not. And if an organization does choose to use the Java Persistence API to interact with their relational database systems, they can choose between the various implementations, and one of the most popular ones is the JBoss Hibernate project.
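The spec-versus-implementation relationship is the same one plain Java expresses with interfaces. This toy sketch (all names are mine, purely illustrative) mirrors it: code written against the interface works no matter which implementation is plugged in, just as code written against JPA works with Hibernate, EclipseLink or OpenJPA:

```java
public class SpecVsImpl {

    // The "specification": a contract with no behavior of its own, like JPA
    public interface PersistenceSpec {
        String save(String entity);
    }

    // One "implementation" of the contract, playing the role of Hibernate
    public static class HibernateLike implements PersistenceSpec {
        public String save(String entity) {
            return "saved by HibernateLike: " + entity;
        }
    }

    // Another implementation, playing the role of EclipseLink
    public static class EclipseLinkLike implements PersistenceSpec {
        public String save(String entity) {
            return "saved by EclipseLinkLike: " + entity;
        }
    }

    // Application code depends only on the spec, never on a vendor class
    public static String persist(PersistenceSpec spec, String entity) {
        return spec.save(entity);
    }

    public static void main(String[] args) {
        System.out.println(persist(new HibernateLike(), "Order#1"));
        System.out.println(persist(new EclipseLinkLike(), "Order#1"));
    }
}
```

Swapping the implementation changes nothing in the calling code, which is exactly why "JPA vs. Hibernate" is not a real either/or choice.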
Why all the Hibernate vs. JPA confusion?
Given the fact that JPA and Hibernate fulfill two very distinct and different roles, the question arises as to why there is so much confusion when it comes to these two terms. From what I can tell, the confusion traces back to when the JPA specification was originally released.
Prior to the initial release of JPA 1.0 in 2006, there were a number of vendors competing in the object-relational mapping (ORM) tools space, all of which had very similar APIs that accomplished many of the same objectives. But none of those projects had compatible and interchangeable code. The goal of JPA was to standardize how Java applications performed ORM. With JPA 1.0, all of the competing implementations were unified because they all now implemented a common, standard API.
However, because of Hibernate’s popularity, many people continued to use the term Hibernate when they really meant JPA. Hibernate became an eponym for JPA, just as Kleenex is an eponym for facial tissues. Even today, when developers and architects talk about Hibernate, they’re often really referring to the JPA spec.
Choosing between JPA and Hibernate
I mentioned earlier that nobody has to choose between JPA and Hibernate, because all of the functionality provided by Hibernate can simply be accessed through the JPA API. However, that hasn’t always been the case. There was actually a time, during the early days of JPA releases, in which the choice between JPA and Hibernate was a legitimate decision organizations had to make.
The initial JPA release was very basic, and didn’t include many of the advanced features available in Hibernate at the time, including Hibernate’s very powerful Criteria API. When JPA was first released, many organizations used both JPA and Hibernate together. Developers would call upon proprietary Hibernate APIs, such as the Hibernate Session, within their code, while at the same time, they would decorate their JavaBeans and POJOs with JPA-based annotations to simplify the mapping between the Java code and the relational database in use. In doing so, organizations took advantage of useful features in the standard API, and simultaneously had access to various Hibernate functions that weren’t yet standardized.
Advantages of Hibernate vs JPA
Even today, it’s possible for there to be advanced mapping features baked into the Hibernate framework that aren’t yet available through the JPA specification. Because JPA is guided by the JCP and JSR process, it’s often a slow and methodical process to add new features. However, since the JBoss team that manages the Hibernate project isn’t bound by these sorts of restrictions, they can make features available much faster through their proprietary APIs. Some important features that were implemented by Hibernate long before the JPA specification caught up include:
- Java 8 Date and Time support
- SQL fragment mapping
- Immutable entity types
- Entity filters
- A manual flush mode
- Second level cache queries
- Soft deletes
But, despite the fact that Hibernate is often quicker at the draw when it comes to introducing new and advanced features, the JPA 2.0 release almost closed the gap between the two. It would be difficult to justify developing applications against the proprietary API when the JPA specification almost always provides equivalent functionality.
And if one of those advanced Hibernate features is required, you can always write code that bypasses JPA and calls the Hibernate code directly, which completely eliminates the need to ever choose sides in the JPA and Hibernate debate.
In March 2019, the Linux Foundation created the Continuous Delivery Foundation as a vendor-neutral means for developers to track CI/CD open source projects. At the same time, the Continuous Delivery Foundation debuted Jenkins X, an open source CI/CD tool to automate Kubernetes and manage the integration and delivery of containers in cloud applications.
Kubernetes uses multiple software tools in CI/CD, including:
- Google open source tools Tekton and Spinnaker
- Google container tools Skaffold and Kaniko
- Open packaging manager Helm
However, before a developer uses Jenkins X on Kubernetes, they should be aware of multiple security vulnerabilities that can create issues in a deployment. A developer should perform a Kubernetes security hardening before they run Jenkins X to get the most secure and up-to-date version of the platform.
Let’s take a look at some software tools and then Kubernetes vulnerabilities to be aware of with Jenkins X.
Google open source tools
Tekton is a shared set of open source components that help build CI/CD systems. It provides deployment to Kubernetes, along with multi-clouds, VMs, bare metal and mobile. Tekton takes advantage of Kubernetes and moves the CD control plane there to modernize software delivery.
Spinnaker was originally created by Netflix, and is currently led by both Netflix and Google. It is an open source, multi-cloud continuous delivery platform that provides support for cloud providers like Google Kubernetes Engine, Azure Kubernetes Service, Amazon EC2, OpenStack and Oracle Cloud Infrastructure.
Google container tools
Skaffold is a command-line tool for continuous development of Kubernetes containers. A developer can locally iterate source code and then choose to deploy it to local or remote Kubernetes clusters. Skaffold provides workflows to help DevOps teams build, push and deploy the application. To make workflow tasks easier, Skaffold supports the open source package manager Helm.
Kaniko is also a Google container tool. A user starts with the standard Kubernetes cluster, via the Google Kubernetes Engine, to build and push container images. Note that you must create a Kubernetes secret to authenticate to the Google Cloud Registry. The tool provides these three parameter arguments in a pod spec to run a container image:
- a Dockerfile, which is a text file that defines a Docker image. But note that the Docker Daemon isn’t involved in this instance;
- a build context, which is retrieved from a Google Cloud Storage bucket; and
- a registry, where the final image can be pushed.
Open package manager
Helm is a package manager tool for Kubernetes applications and is maintained by the Cloud Native Computing Foundation in collaboration with Google, Microsoft and Bitnami. Helm comes in two parts: a client (helm) that runs outside the cluster and a server (tiller) that runs inside the cluster and manages application releases from within. The tiller also manages chart installations.
Kubernetes security hardening
Before you jump into Jenkins X, you should perform Kubernetes security hardening so you can deploy, maintain and monitor Kubernetes clusters without issues. The National Institute of Standards and Technology's National Vulnerability Database website tracks Kubernetes vulnerabilities.
The following vulnerabilities should be corrected as part of your Kubernetes security hardening before you start with Jenkins X.
- CVE-2019-9946: A firewall misconfiguration was found in the Cloud Native Computing Foundation Container Networking Interface (CNI) that’s used for Kubernetes network plugins. This vulnerability would allow an attacker, without any privileges, to access the firewall and modify CNI port rules. This vulnerability comes with a high Common Vulnerability Scoring System (CVSS) 3.0 rating.
- This vulnerability is fixed in CNI 0.7.5 and Kubernetes 1.11.9, 1.12.7, 1.13.5 and 1.14.0. The administrator should also update network policies, firewall configuration and access controls.
- CVE-2019-10002100: A crafted patch vulnerability was discovered in the Kubernetes API Server (Red Hat). This vulnerability would allow an attacker, with low privileges, to potentially send a specially crafted “json-patch” to repeatedly consume resources. When excessive consumption exhausts all resources, the API server is vulnerable to a denial of service attack. It has a medium CVSS 3.0 rating.
- CVE-2019-5736: A root access vulnerability was found in the core runC container code that could let an attacker gain root access to the host operating system. For example, an attacker could use a new container with an attacker-controlled image or an existing container with attacker write access to execute a command as root. Also, an attacker has the ability to deny some availability to other users. It has a high CVSS 3.0 rating.
- CVE-2018-18264: An authentication bypass vulnerability was located in earlier versions of Kubernetes Dashboard. An attacker with no or low privileges can use Dashboard's Service Account to read secrets within the cluster. No user interaction is required to exploit the vulnerability over the network, and all information in the Service Account is exposed. However, the vulnerability doesn't result in a denial of service attack. It has a medium CVSS 3.0 rating.
A developer should monitor security vulnerabilities at all times to ensure a secure deployment. Kubernetes security hardening is one step toward a successful Jenkins X deployment.
There aren’t any magical tools that will fix an OutOfMemoryError for you, but there are some options available that will help automate your ability to troubleshoot and identify the root cause.
Follow these three steps to deal with this JVM memory error and get on the way to recovery:
- Capture a JVM heap dump
- Restart the application
- Diagnose the problem
1. Capture the heap dump
A heap dump is a snapshot of what's in your Java program's memory at a given point in time. It contains details about the objects present in memory, the actual data within those objects, the references those objects maintain to other objects and other information. A heap dump is a vital step to fix an OutOfMemoryError, but heap dumps do present some challenges, as their contents can be difficult to read and decipher.
In an optimal situation, you want to capture a heap dump at the moment of, or just prior to, an OutOfMemoryError to diagnose the cause, but this isn't exactly easy. However, you can automate this heap dump process. Tell the JVM to create a heap dump by editing the JRE's startup parameters with the following options:
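On HotSpot-based JVMs, the standard options for this are -XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath. The application name and dump path below are placeholders:

```
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/dumps/myapp-heap.hprof \
     -jar myapp.jar
```

With these options set, the JVM writes the .hprof file automatically the moment an OutOfMemoryError is thrown, so the snapshot captures the heap exactly when the error occurred.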
2. Restart the troublesome application
Most of the time, an OutOfMemoryError won’t crash the application, but it could put the application in an unstable state. A restart would be a prudent move in this situation, since requests served from an unstable application instance will inevitably lead to an erroneous result.
And, you can automate this restart process as well. Simply write a "restart-myapp.sh" script that bounces your application, then provide a command-line argument to the JVM that triggers it to run the script when you encounter the exception:
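The HotSpot option for this is -XX:OnOutOfMemoryError, which runs the given command when the error is thrown. Combined with the heap dump options, the startup line might look like this (the jar name and paths are placeholders):

```
java -XX:+HeapDumpOnOutOfMemoryError \
     -XX:HeapDumpPath=/dumps/myapp-heap.hprof \
     -XX:OnOutOfMemoryError="/scripts/restart-myapp.sh" \
     -jar myapp.jar
```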
When you pass this argument, the JVM invokes the "/scripts/restart-myapp.sh" script whenever an OutOfMemoryError is thrown. Thus, your application is automatically restarted right after it experiences an OutOfMemoryError.
3. Diagnose the problem
Now that you have captured the heap dump — which is needed to troubleshoot the problem — and restarted the application — to reduce the outage impact — the next step is troubleshooting.
As mentioned above, understanding the contents of a heap dump can be tricky, but there are helpful heap analyzer tools that help simplify the process. Some options include Eclipse Memory Analyzer (MAT), Oracle JHat or HeapHero.
These tools generate a memory analysis report that highlights the objects that consume the most memory and, with luck, helps identify the objects behind a memory leak.
It can be extremely frustrating when your applications encounter a runtime error. You’ll need patience, a memory heap dump and the proper tools to analyze the problem to fix the OutOfMemoryError and other pesky exceptions of a similar ilk.
Visual Studio Code is a free source code editor developed by Microsoft for Windows, macOS and Linux. On February 12, 2019, the Symantec Security Center reported a serious remote code execution vulnerability (CVE-2019-0728) in Visual Studio Code. This vulnerability ties into an earlier one from June 2018, when an untrusted search path vulnerability (CVE-2018-0597) was reported.
In April 2019, Visual Studio Code was made available as a snap that runs across more than 40 Linux distribution variations. The editor comes with Git built in to help developers manage version control in DevOps when the source code is ready for deployment to a production server. In the examples below, the source code is a server-side script that can only be compiled on the server.
Remote code execution vulnerability severity
Both remote code execution vulnerabilities create a total loss of confidentiality, integrity and availability. They carry a Common Vulnerability Scoring System (CVSS) 3.0 rating of 7.8 on a 0-10 scale.
The first vulnerability could allow an unauthorized attacker to execute arbitrary code in the context of the current user. A successful attack would require the user to take some action before the vulnerability can be exploited, such as installing a malicious extension. Failed exploit attempts will likely result in denial of service conditions.
The second vulnerability could allow the attacker to gain privileges via a Trojan DLL in an unspecified directory.
A programming-savvy attacker could target SecurityHeaders, a service that sends a report on HTTP security headers and server information back to a local browser. An attacker could exploit the default value broadcast for the IIS version in use, as shown here:
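The broadcast header in question looks like this, with IIS 8.0's default value:

```
Server: Microsoft-IIS/8.0
```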
where Server is the HTTP server header and Microsoft-IIS/8.0 is the default value.
The attacker could also exploit the preloading of the HTTP Strict Transport Security (HSTS) header. When the preload directive is added to the security header, all subdomains are included for a specified period of time. The main risk associated with this vulnerability is that the specified period of time can be up to a year. A developer wouldn't be able to shorten this setting to 90 days to fix subdomain problems, and an update may not propagate until after the original maximum time directive expires.
Remote code execution vulnerability risk mitigation steps
Here are some recommendations on how to mitigate the latent remote code execution vulnerabilities.
Automatic downloads: Enable automatic downloads of Visual Studio Code updates as the default setting.
Access rights: Grant minimal access rights to individuals and team members — such as read only, read and write. Avoid allowing members, except the administrator leader, to have full access rights.
Network traffic: Run network intrusion detection systems (IDSes) to monitor network traffic for malicious activity that may occur after an attacker exploits the Visual Studio Code vulnerabilities. Ensure the IDSes are free of vulnerabilities as well.
Analysis report: After you implement the HTTP response headers as mentioned above, follow these three steps to receive an analysis report. First, transfer the latest version of the script from a local machine to a server. Second, enter the website address in a local browser to implement the HTTP response headers in the script. And third, head over to SecurityHeaders or a similar website to analyze the report sent back to the browser. The report includes an overall grade for all security headers, but it discloses server information by default and doesn't warn about the risks of using the preload list in an HSTS header.
Server information: Avoid broadcasting default server information. IIS 8.0 software allows the developer to add a new value to the script like Web.config before it deploys to the server. The report from SecurityHeaders should show the new information like this:
Server: Hello World!
To suppress the HTTP Server header from being sent to a local browser, the developer should use IIS 10, which ships with Windows 10, Windows Server 2016 and other options. You only need one line in the script to suppress the header: set the removeServerHeader attribute to true.
<requestFiltering removeServerHeader="true" />
The compiler language used to run the script is one of the VB or C variants. Non-Windows platforms may not have the capability to remove or suppress the HTTP Server header.
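For context, in a web.config file the requestFiltering element sits inside the security section under system.webServer. A minimal sketch, assuming IIS 10:

```xml
<configuration>
  <system.webServer>
    <security>
      <!-- Stops IIS 10 from sending the Server header to the browser -->
      <requestFiltering removeServerHeader="true" />
    </security>
  </system.webServer>
</configuration>
```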
Preloading list: Exclude the preload directive from the HTTP Strict Transport Security header to avoid preloading a list of all subdomains. The max-age directive is expressed in seconds; 31536000 seconds is one year.
<add name="X-Xss-Protection" value="1;mode=block" />
<add name="X-Frame-Options" value="sameorigin" />
<add name="Strict-Transport-Security" value="max-age=31536000" />
If a preload list is used, start with a lower maximum age expiry time, such as 30 days, to make sure all the subdomains have HTTPS support. It's better to wait 30 days than a year for the time frame to expire when you need to fix a problem.
Alternatively, use an HTTPS front end for an HTTP-only server — which should be done before you secure the back-end server.
Why is programming so hard? Because it’s no longer about programming.
Allow me to elaborate.
I wrote my first line of professional code back in 1987. It was an application written in BASIC that did lease calculations for computer rentals. (Yes, back then computers were so expensive it made sense to lease them by the month. Today, we practically give them away.) The program worked when you selected a computer from a list, provided the number of months for the lease terms and the program calculated the monthly payments. The program also had a feature that allowed you to print a hard copy of the results.
In terms of the work I had to do, 90% of my effort was the actual programming. The remaining 10% involved creating the executable file, copying it onto floppy disk and then installing the code on the computers of the other people in my office.
It took me about a week to write the program. Admittedly, it wasn't exactly rocket-science programming and, when I look back at it, it wasn't very good programming either. But it worked and I got paid — win-win, so to speak.
Fast forward 30 years to today. Last week I wrote a program for a class I teach. The program is called WiseSayings. It’s a web app that responds upon request with a random saying from a list of wise sayings.
It took me about 30 minutes to write the code, including application data retrieval and configuration. Yet just programming the app wasn't enough. Here's just the beginning of why programming is so hard: containers are very popular these days, so I had to create a Dockerfile that allows users to run WiseSayings in a Docker container.
But, there was more. Not only did I need to create the Dockerfile, but I also had to post the container image on DockerHub to make it easier for others to use. This means an image build, followed by a push after I logged into my DockerHub account.
So far, so good, right? Wrong!
As an ambitious coder, I imagined that millions of people would want to use my app. So I needed to make it easy to scale and ensure that WiseSayings could run under Kubernetes. I wrote a deployment.yaml to create the Pod and ReplicaSet so my containerized WiseSayings app would run in the cloud and, at the least, a service.yaml to provide web access to the logic in the pods from outside the Kubernetes cluster.
If I want to provide security and routing, I’ll need to create a Kubernetes secret or two, a TLS certificate and an ingress.yaml to manage it all. I could go on. We haven’t even talked about web page creation to render my application’s response, nor have we talked about multiple-language support for the app. Who knows, maybe some of my anticipated millions of users will be in China.
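To give a sense of the extra work, a minimal sketch of those two manifests might look like the following. The image name, labels and ports are placeholders, not the actual WiseSayings code:

```yaml
# deployment.yaml: the Deployment creates a ReplicaSet, which manages the Pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wisesayings
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wisesayings
  template:
    metadata:
      labels:
        app: wisesayings
    spec:
      containers:
      - name: wisesayings
        image: myaccount/wisesayings:latest   # hypothetical image on DockerHub
        ports:
        - containerPort: 8080
---
# service.yaml: exposes the logic in the pods outside the cluster
apiVersion: v1
kind: Service
metadata:
  name: wisesayings
spec:
  type: LoadBalancer
  selector:
    app: wisesayings
  ports:
  - port: 80
    targetPort: 8080
```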
How things have changed
My main point on why programming is so hard is this: 30 years ago, all I had to know to create a program was the BASIC programming language and how to structure code into subroutines — which is what we called functions and methods back then. Printing was a bit harder because printer drivers weren't part of the operating system and your programs needed to know a whole lot about the printers they used. But, that was it. Most of my work revolved around how to express the specific application logic in code.
Now you can really see why programming is so hard.
It still takes about half an hour to write the actual code and get it up on GitHub, but I now add hours of work to make my program available to my users. My old means of distribution involved copying the executable file onto a floppy disk, walking over to a user and copying that file from the disk onto the desktop computer. What used to take minutes for a local code distribution now makes up the bulk of my "programming" activity, regardless of whether the code goes to a user on the other side of the office or halfway around the world.
Now, don’t get me wrong. Under no circumstance do I want to go back to the days of BASIC and floppy disks. The programs we make today go way beyond anything I could have imagined 30 years ago when I did BASIC programming on an IBM AT running DOS 3.3. I think it’s beyond cool that we’ve made it so you can point your cellphone camera at a newspaper and have the device read the text out loud to you in real time. I like watching the Merchant of Venice any time I want on YouTube with scene summaries available on my iPad. (Yes, sometimes I find it hard to follow the language of the Bard.)
These are amazing achievements, but they come at a price. While commercial software has always required the coordinated efforts of many, these days, even the simple stuff is hard and the implications are profound.
In the old days, knowledge of a programming language and a rudimentary understanding of software design was enough to get you on the playing field. Today, you need to know networking, deployment tools, automated provisioning, testing in its many forms — from unit testing to performance testing at distributed scale — and the details of a multitude of development frameworks.
To use a basketball analogy, in the past all you needed to play was a ball, a hoop and the ability to dribble, pass and shoot. Today you also need to know all of that, plus how to sell tickets and run the concession stands. It’s a lot of work.
Is it worth it? Of course. But, the added complexity makes the profession a lot harder to get into. Maybe this is a good thing. Medicine, engineering and nuclear physics have always been “hard to do” professions. Work in those fields has extraordinary benefits when done well and grave consequences when done poorly. Software development is now in that league.
Today, software runs more of the world. Soon it will run most of the world. Maybe it’s time to set a high bar and make it as hard as possible to play. Yet, it’s sad to think that when the next version of me comes along, that person will have to do a lot more than write a simple program in BASIC to get started. I was fortunate to have the opportunity to play and in doing so, software changed my life. Others might not be so lucky.
Maven and Eclipse have always had a rocky relationship, and a common pain point between the two is how to force Maven JDK 1.8 support in new Eclipse projects. Without jumping through a few configuration hoops, the antiquated Java 1.5 version persistently remains the default.
The Eclipse IDE became popular before the Maven revolution really took hold, so the IDE's support for the build tool has always felt like an afterthought. Even with recent releases like Eclipse Photon, tasks such as importing Maven projects or creating anything more than a basic Maven project are less than graceful.
Maven JDK 1.8 use in Eclipse
A reminder of this rocky relationship is the fact that Eclipse and Maven have a habit of forcing Java 5 compliance on new applications, even if JDK 1.8 is the only JVM installed on the development machine. The Java 1.5 compliance issue means lambda expressions, the Streams API and other language features introduced after Java 5 won't compile.
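To illustrate, a snippet like the one below compiles under Java 8 compliance but fails under Java 5, because it uses a lambda and the Streams API. The class and method names are hypothetical, chosen only for this example:

```java
import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class Java8Check {

    // Lambda expressions and the Streams API require source level 1.8;
    // under Java 5 compliance this method won't compile
    public static String shout(List<String> words) {
        return words.stream()
                    .map(w -> w.toUpperCase())
                    .collect(Collectors.joining(" "));
    }

    public static void main(String[] args) {
        // Prints: FORCE JAVA 8
        System.out.println(shout(Arrays.asList("force", "java", "8")));
    }
}
```

If your project compiles this class without complaint, the 1.8 compliance settings have taken effect.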
Some developers try to change the Java compiler setting within Eclipse from Java 1.5 to Java 1.8, but that only works temporarily. As soon as a new Maven build takes place, the JDK version reverts to 1.5. The only way to make the change permanent is to edit the POM and force Eclipse and Maven to use Java 1.8.
Of course, there is a fairly simple answer to this problem. That’s why everyone loves Maven. The build tool always has a simple resolution on offer. To force Eclipse and Maven JDK 1.8 compliance on your new projects, simply add the following to your POM file:
<!-- Force Eclipse and Maven to use a Java 8 JDK compiler -->
<properties>
  <maven.compiler.target>1.8</maven.compiler.target>
  <maven.compiler.source>1.8</maven.compiler.source>
</properties>
Maven Java 1.8 plugin support
An alternate, albeit slightly more verbose approach to tell Eclipse and Maven to use Java 8 or newer compilers is to configure the Apache Maven plugin:
<!-- Build plugin to force Maven JDK 1.8 compliance -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <version>3.8.0</version>
      <!-- Force Maven to use Java 1.8 -->
      <configuration>
        <source>1.8</source>
        <target>1.8</target>
      </configuration>
    </plugin>
  </plugins>
</build>
Eclipse Java 1.8 compiler setting
Once the maven-compiler-plugin change is made to the POM, you can open Eclipse's Java compiler properties page and see that JDK compliance has changed from 1.5 to 1.8.
It’s a tedious problem but it’s also one that is easily fixed. Furthermore, you only need to edit the POM file once on a project to force Eclipse and Maven to use Java 8. It’s really not all that big an inconvenience to overcome.