Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we're supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some of the interesting questions that I want answered. He's a technical guy, and he's prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems' 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it's incredibly fast, like all of Azul Systems' JVMs typically are.

But before we got into discussing Azul Systems' Falcon just-in-time compiler, I thought I'd do a bit of bear-baiting with Gil and tell him that I was sorry that, in this new age of serverless computing, cloud and containers, where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody needs to download one and install it locally. Well, anyways, Gil wasn't having any of it.

Gil Tene: So, the way I look at it is actually we don't really care, because we have a bunch of people running Zing on Amazon. Where the hardware comes from, and whether it's a public cloud, a private cloud, a hybrid cloud or a data center, whatever you want to call it, doesn't matter: as long as people are running Java software, we've got places where we can sell our JVM. And that doesn't seem to be happening less; it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today runs as virtual instances on the public cloud. They end up looking like normal servers running Linux on x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it's Amazon or Azure or the Google Cloud, we're seeing all of those happening.

But in many of those cases, that's just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to the model, so people no longer have to plan and know how much hardware they're going to need in three months' time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it's needed, and if they were wrong, they'll turn them off. That's no longer some dramatic thing to do. Doing it in a company's internal data center is a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization is all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron deployments or bare metal deployments or people actually deploying to bare metal and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, everything from e-commerce and analytics companies that run their own stuff, to banks, obviously, that do a lot of stuff themselves. There is more and more of a move toward virtualization in some sort of cloud, whether it's internal or external. So I'd say that a lot of what we see today is virtualized, but we do see a bunch of bare metal in latency-sensitive environments or in dedicated server environments. So for example, a lot of people will run dedicated machines for databases, low-latency trading or messaging, because they don't want to take the hit for what the virtualized infrastructure might do to them if they don't.

But having said that, we're seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you're happy; you don't look back. On Amazon, that type of cookie-cutter deployment works very well. As for the typical instances people use for Cassandra on Amazon, with or without us, they tend to move to the latest and greatest things that Amazon offers. I think the i3 class of Amazon instances is the most popular for Cassandra right now.

Cameron McKenzie: Now, I believe that the reason we're talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late '90s to the early 2000s, but it's been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we'd like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is based on it. And you'll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It's a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the ones that power most Amazon servers today, whether bare metal or virtualized, have some really cool new vector optimization capabilities: new vector registers, new instructions, and you can do some really nice things with them. But that's only useful if you have an optimizer that's able to make use of those instructions when it knows they're there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you're talking about factors that could be 50%, 100%, or sometimes two or three times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions; it's that LLVM does that for us without us having to work hard.
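
To picture the kind of loop Tene is describing, here's a generic sketch, not Azul's code, of a plain Java method whose loop an auto-vectorizing JIT can compile down to SIMD vector instructions when the CPU supports them:

class VectorizableLoop {
    // Each iteration is independent, so an auto-vectorizing compiler
    // can process several array elements per instruction on CPUs with
    // wide vector registers (for example, AVX on newer Intel chips).
    static void add(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] + b[i];
        }
    }
}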

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do that stuff. We just need to bring the code to the right form, and the rest is taken care of by other people's work. So that's a concrete example of extreme leverage: as the processor hits the market, we already have the optimizations for it. It's a great demonstration of how a runtime like a JVM can run the exact same code and, when you put it on new hardware, it's not just a better clock speed or slightly faster execution; it can actually use the new instructions to literally run the code better, and you don't have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we're basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond. That pretty much means everybody except low-latency traders simply doesn't have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But everybody else stops even thinking about the question. Now, that's been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is that we get to optimize better. Because we have a lot more freedom to build new optimizations and to build them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We're able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it's the same as it was in our previous release and the one before that, because those were as close to perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there's anything new in terms of the technologies you've put into your JVM to improve JVM startup. And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It's something we've had for a couple of years now. But, again, with the Falcon release, we're able to do a much better job: basically, the JVM has a much steeper vertical rise to full speed right when it starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it's when you start up a new server in the cluster and you don't want the first 10,000 database queries to go slow before they go fast, or whether it's when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don't want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme examples in trading, where at market open you don't want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile, learn and only optimize after it runs. We do it with a technique that's very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a run learning from the previous run's behavior rather than having to learn from scratch again for the first thousand operations. That allows us to run fast code from the first or the tenth transaction, rather than only from the ten-thousandth transaction. That's a feature in Zing we're very proud of.

To the other part of your question about startup behavior, I think Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It's not just the Jigsaw parts; it's the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

November 27, 2019  3:30 PM

Find the right pull request workflow for your dev projects

George Lawton Profile: George Lawton

At the heart of all large-scale software development projects are the communications patterns required to keep everyone on the same page. The dawn of Git as a distributed version control system allowed enterprises to rethink the way they communicate about the state and updates of large application code bases. Pull requests serve as the nervous system to communicate about changes in these projects.

Over time, enterprise developers have adopted a variety of approaches for pull request workflows to facilitate better communication about code status updates without overwhelming developers. Each approach has its benefits and drawbacks, and developers may have strong opinions about what works best.

If your pull request workflow has some holes, consider some of the alternatives to see if some may be more promising for different projects or sub-projects within your organization.

Centralized workflow

Centralized workflows common to older version control systems need to be re-engineered when organizations move to a distributed repository because pull requests only work in conjunction with two separate repositories or code branches.

However, a centralized workflow can still use Git or other version control systems as a centralized repository for all project changes. In this workflow, everyone commits changes to the master. One of Git's attractions is that it allows teams to continue to use a centralized workflow while they decide how and when it's worth trying out other pull request workflows.

A big advantage of Git over a centralized version control system like Apache Subversion is that Git allows each developer to tinker with their own copy of a codebase independent of other changes, and then merge what they want when appropriate. Git also supports a branch-and-merge model, which reduces problems when multiple people make simultaneous changes to a codebase.

Pull request workflows

Some of the popular pull request workflows include the feature branch workflow, the forking workflow, Gitflow, GitHub Flow and GitLab Workflow.

The basic process for all pull request workflows consists of several steps (a command-line sketch follows the list):

  1. The developer pulls code from a shared repository.
  2. The developer adds new code in their local repository, on what is now called a code branch.
  3. The developer pushes the code branch to a shared repository service like GitHub.
  4. The developer files a pull request, which is sent to other developers interested in the change.
  5. Other developers review the code and make changes.
  6. Someone merges the new code into the official repository and closes the pull request.
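
To make those steps concrete, here's roughly how steps 1 through 3 look as Git commands; the repository URL and branch name below are hypothetical:

git clone https://github.com/example/project.git   # 1. pull code from the shared repository
cd project
git checkout -b my-feature                         # 2. add new code on a local code branch
git push -u origin my-feature                      # 3. push the branch to the shared repository
# Steps 4 through 6 (filing, reviewing and merging the pull request)
# happen in the repository service's web interface.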

Feature branch workflow

In a feature branch workflow, developers work on their local code changes, and then file a pull request to pull the code changes from their local repository or a separate branch managed by a repository service. A developer can also file a pull request early if they need guidance from other developers.

Forking workflow

In a forking workflow, changes are pushed to a separate repository. Pull requests are sent to a project maintainer, who can decide whether to merge them back to the main repository. This workflow can also be used to notify a collaborator to pull in an updated code branch into their own repository.

Gitflow workflow

In 2010, Vincent Driessen developed Gitflow, which became a popular alternative to feature branching and forking workflows. The Gitflow workflow is similar to the feature branch workflow, except it uses a staging develop branch to make it easier for developers to discuss code updates before they merge changes into the master branch.

Gitflow is more flexible than the feature branch and forking workflows. While Driessen believed that Gitflow was easy to understand and allowed for shared branching and releasing processes, other developers argue that it's more complicated than most development teams require, and they have created their own streamlined variants.

GitHub Flow

GitHub developed GitHub Flow as a streamlined alternative to Gitflow that is better suited for continuous integration and continuous deployment. GitHub co-founder Scott Chacon argued that one of the main issues with Gitflow is that it's organized around the idea of a big release. This organization can complicate things when developers push one or more code updates per day. Here is an example GitHub Flow process:

  1. In GitHub Flow, a developer creates a descriptively named branch off master.
  2. A developer commits to that branch locally and then pushes updates to the same named branch on the server.
  3. A developer sends a pull request when the branch is ready to merge or the developer needs help.
  4. After the code is signed off by someone else, it can be merged into the master branch and pushed into deployment.

Chacon said that Git is complex and a developer should avoid anything that further complicates the workflow. He noted that Gitflow makes sense for more formal releases on a longer-term interval and the occasional hotfix, but he feels that GitHub Flow is a better choice for companies that push updates daily.

GitLab Workflow

Meanwhile, GitLab defined another variant of Gitflow called the GitLab Workflow, which extends the discussion of Gitflow to include specific branches for explicit situations, such as production branches, environment branches and release branches.

In this model, the production branch is a separate branch cut from the master that represents what is currently "in production." Such a branch can be needed in cases where you can't deploy to production every time you merge a feature branch. The environment branch approach creates branches that are aligned to specific, non-production environments such as "staging" or "qa." And the release branch helps to clearly manage and track all the changes that are assembled for a specific release.
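
As a rough sketch, each of these is just another long-lived Git branch cut from master; the branch names below are illustrative:

git checkout -b production master    # production branch: tracks exactly what is deployed
git checkout -b staging master       # environment branch for a staging environment
git checkout -b release-1-2 master   # release branch: collects the changes for release 1.2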

Another important part of the workflow discussion is how to foster and encourage collaboration within the extended team. GitLab made collaboration and encouraging contribution a top priority, so they created the GitLab Merge Request to manage code discussions and code reviews before the code changes in a given branch are merged into the master.

Other approaches

Of course, every development team will want to find the pull request workflow that works best for them. Outside of the options listed above, you can also consider the following pull request workflows:

  • Gitworkflow: a dogfood workflow used on the git.git project that has a process to identify which topics can cause conflicts with other works in progress;
  • OneFlow: an anti-gitflow workflow that addresses concerns over Gitflow’s difficulties to find things in the change history of large codebases; or
  • An unnamed workflow developed by Scroll.com CTO Kushal Dave


November 26, 2019  8:33 PM

What type of developer are you: A technician or an artist?

Bob Reselman Profile: BobReselman

When I used to work for a big computer manufacturing company, I once had a boss with an interesting hiring philosophy. He divided technical talent into two groups: technicians and artists.

If he wanted someone who knew how to work a piece of technology inside and out, was able to follow instructions exactly and could be relied upon to do what needed to be done when it needed to be done, he hired a technician. If he needed someone to solve a particularly perplexing problem or develop a viable resolution within the constraints of a given budget, he hired an artist. He didn’t expect a technician to do the work of an artist and vice versa.

What type of developer are you? And, what type of developer is made for the current state of the IT industry? Let’s find out.

A two-edged sword

Over the years, I’ve found the talent divide between technician and artist to be consistently accurate. Also, I’ve noticed that as companies grow and mature, the number of technicians in the workforce tends to outweigh the number of artists. This is a two-edged sword.

In a company with a lot of technicians, an employee who can follow directions is well suited to the command and control management style under which large companies operate. Can you imagine how difficult it would be to manage a company the size of Google without a high degree of compliance, reliability, and predictability among the workforce? At some point, if the company wants to get anything done at a strategic level, somebody at the top needs to say, “Go here” and the company needs to follow those directions. Following directions has value and technicians understand this.

On the other hand, employees that consider themselves to be primarily technicians run the risk of limited employability when change comes.

“You can retire on this circuit board.”
For example, my friend Mark studied to be an electrical engineer. After graduation, he was fortunate enough to get a job in the aircraft industry. His job was to maintain a small circuit board that was essential to the operation of the aircraft. Shortly after he arrived, Mark’s boss took him aside and said, “Mark, you can retire on this circuit board.”

Mark saw his future unfold before his very eyes. His entire world would become that circuit board. Every detail, every solder point, every piece of electricity that surged through the circuit board would be his to contemplate and control. He and that circuit board would become one.

A short time later, Mark quit and went back to school to become a landscape architect. Why? Mark didn't want to wake up without a job 20 years later because a newer, better circuit board had been introduced. He recognized that he could be unable to find gainful employment because all he knew about was a circuit board that nobody wanted. Mark understood the risk at hand and acted accordingly.

The risks of a technician

I’ve seen Mark’s situation play out more than once with this type of developer.

A bright software engineer out of college who did all the right things — studied hard, passed the tests, got certified, followed all the rules — gets a job with a big tech company. They are assigned to take care of some tech that's not of their making. The engineer absorbs the technical legacy of others and might be able to improve upon that legacy. The engineer becomes reliable and does what needs to be done at the appropriate time. The rewards are ample for years, maybe even decades. They become what is called a "go-to person."

Then one day, the tech stack changes. All the old rules are no longer valid. The technician-engineer needs to learn to operate in this new world. They go to "training" to learn the new rules, only to find out that the technology is so new that a lot of it is made up as it evolves. The only rule is "You need to figure it out."

Many technicians flounder. While they’re more than capable of filling out technical prescriptions sent to them by other employees, the actual figuring stuff out element lags. On the other hand, artists don’t have this problem. They’ve been figuring things out all their lives.

If you give me a tuba…

The scenario mentioned above leads me to the famous quote from John Lennon: “I’m an artist, and if you give me a tuba, I’ll bring you something out of it.”

Artists are very good at seeing possibilities within a given set of constraints. An accomplished artist can make something out of anything. Also, they tend to be quick to adapt to new surroundings.

Whether the artist is a software developer, painter, musician or chef, they’re always on the lookout for something new. Hence, their attraction to the big picture, like getting the concepts and synthesizing the old with the new to create something unique. They don’t need to go to “training” because they train themselves continuously on how to absorb new ideas and create better ones.

Artists bring a lot to the technical landscape, but they lack many of the qualities that technicians have. They aren’t very good at following rules and they can be unpredictable or unreliable. Just ask any manager who had to suffer a missed deadline because a prima donna type of developer spun their wheels trying to create a breakthrough that may never see the light of day.

Artists can also be high maintenance, which is why they often tend to work in the safety and confinement of the R&D departments of larger organizations. What artists consider to be random inspiration can seem to those on the outside like catastrophic chaos.

Still, there are significant benefits to being an artist in technology. Artists don't share the same unemployability risk as their technician counterparts. While technicians languish when change comes, artists are often the perpetrators of said change.

In the event things go south and the business suffers a downturn, it’s the artists who will be in high demand because of their eye for innovation. Just look at Microsoft before the advent of Azure, for example. It wasn’t a pretty picture. Compliance, while safe, will only get you so far when the chips are down, and the wolf is at the door.

But, the thing to remember is that most times, the chips aren’t down. In fact, for a big company with a sizable legacy of technology, most times the chips are stacked up very nicely for everyone’s benefit. As a result, few companies want to rock the boat unless it’s sinking. Only then will the business do whatever it takes to stay afloat, such as calling in the artists to figure it out.

The best of both worlds

It might seem as though I described the technician’s life to have the benefit of compliance with the risk of unintended obsolescence. Also, it might seem as if I painted the artist’s life to be one of freedom, independence and unencumbered employment. This makes a nice story, but the reality is a bit different.

If you want to be a competent, viable professional in technology, you need to hone both sensibilities to a very fine degree. You need to combine the discipline that goes with learning the "rules" with the courage to go beyond them. You can't have one without the other. Remember that behind every inspired performance, there are hours of mundane practice.

In terms of employability, the trick is to stay engaged and current. Don’t depend on your employer to take care of you. Your employer’s essential goal is to make money. No money, no business. It’s that simple. The company’s need to support your technical proficiency only goes as far as is economically feasible. It’s not a good thing or a bad thing. It’s the way it is.

If you fall into the trap of doing only the minimum within your current technical boundaries to earn a paycheck and expect your employer to educate you when the boundaries need to change, you run the risk of unintended obsolescence.

The easiest thing in the world is to paint yourself into a corner and call yourself a technician or an artist. While taking sides is all the rage these days, the harder, probably more lucrative career path is to be a competent combination of both.


November 25, 2019  7:54 PM

UX design reviews key to every web content management strategy

Gail Mackenzie Profile: GailMackenzie

In most enterprise organizations, UX designers and content managers are separate roles filled by different people. As a person who has worked in both capacities, I’m constantly reminded of the disconnect between the graphic design and development teams. Graphic artists can create an intricate design, but developers will need to create a content management template based on that design. The result is often a difficult process for both parties.

If you don’t make a stringent UX design review an integral part of your organization’s content management strategy, new website launches are doomed to fail. To ensure the handoff between designers and developers goes as smoothly as possible, here are five design review facets that every content management strategy should include.

1. Practicality

The first step in a UX design review is to manage expectations. Although a design may look flawless, it won’t necessarily translate well into a content management template, because the HTML framework must be repeatable. Your team will also need to make decisions on text string length and how the design will support translations. In addition to these requirements, you’ll also need to consider the underlying infrastructure.

We need to ask whether the content management engine can support the features that the design suggests. Often, a design will have to be modified to suit the abilities of the system it's built upon. For this reason, it's very important that both front-end and back-end developers participate in the development of your content management strategy. All these details must be documented for sign-off before you begin development.

2. Mobile first

Take a mobile-first approach to every project. Progressive enhancement involves scaling projects from a mobile format up to desktop: you begin with a lean, simplified design and expand on it as the screen expands. An enterprise should use this approach because when we design for desktop first, we lose sight of the UX on mobile.

In addition, a UX design review will reveal that some choices that work for desktop won't necessarily work for mobile. For example, in many cases the navigation elements that work for desktop will need to be constructed very differently for mobile. In my UX strategy, we require, at minimum, both the mobile and desktop screens before a design moves into development.

3. Requirements

A design handoff is much more than just graphics that show how desktop and mobile screens will look. A developer cannot simply look at an image and know every aspect and detail that was used to create it. We follow these requirements to ensure that pixel rendering is perfect, regardless of the user platform:

  • Measurable Elements

Begin with a realistic document sizing to provide measurable elements to your developers. Spacing is a very important design element that can only be duplicated if the developer is given a proper canvas to inspect. A good guideline is to design in a 1920 x 1080 space.

  • Images

Though they may only be placeholders for now, package and provide the developer with the images they need to create the design. Keep in mind that this will not only consist of the desktop images, but the cropped versions for tablet and mobile as well.

  • Fonts

A font is a difficult thing to determine from an image alone. For this reason, you'll need to include the name and URL of the font you have chosen, to ensure that it's indeed a web font. If you purchased the font, make sure you give the developer proper access to embed and test it.

  • Design Specs

Design specs include font size, font weight, font family, line height, box shadow, padding, margins, color, etc. Many of these specs can be found in the design program.

For example, two prototyping tools that I have extracted files from are Adobe XD and InVision. These platforms make it very easy to pull design specifications.

4. Functionality

Another important thing to include in your design is some indication as to how it’s supposed to work. Is the menu a hover menu or will the user have to click? How does the form work? Does the user simply add the inputs and receive a result, or will there be a submit button? Can the user go back and edit the form if they don’t get the right result? This is where some extensive thought on usability comes into play. A good design is a well thought out design.

5. Error handling

Be sure to include screens on error handling. The developer will not only need to duplicate an image into HTML, but also develop an experience for the user. Every click will lead the user, and it’s the designer’s job to ensure the journey is mapped out to provide a seamless experience. This should be considered in the design phase and included in the hand off.

Graphic designers focus heavily on what looks good and what will appeal to users. However, the feasibility of reusable, easily handled content management templates over the long term is a common oversight. Employ these five UX design review techniques for a smooth launch of any new enterprise website.


November 8, 2019  6:34 PM

Change how you present technical training for better value

Bob Reselman Profile: BobReselman

Most HR departments simply don't get it when it comes to technical training. They set up training in a purely logistical way: first create an outline, second identify a speaker, and third find a conference room or video conference bridge to deliver the training. In some instances, HR might even get a technical manager involved in the process to make sure the topic selection is appropriate. But sometimes the technical manager knows less about how to create an effective educational experience than HR does.

As I see it, the problem with technical training stems from a lack of awareness about the fundamentals of educational psychology. If you don't know the basics of how people understand concepts and how learning takes place, you run the risk of running costly training sessions that essentially go through the motions, with little or no tangible results.

The key to delivering effective technical training is to understand that learning is a developmental process. Learners progress through distinct stages of cognition to master a given subject. Each stage has its own characteristics, and the teaching techniques and evaluation instruments used in a given stage are particular to that stage.

The trick to getting the most bang for your training buck is to understand the stages of learning and make sure you use the teaching techniques appropriate to each stage.

Allow me to elaborate.

Understand the stages of learning

The first thing you'll need to understand to get the most bang for your buck on technical training is the stages of learning. In his book The Taxonomy of Educational Objectives: The Classification of Educational Goals, educational psychologist Benjamin Bloom categorized learning into three domains: the Cognitive (knowledge-based), the Affective (emotion-based) and the Psychomotor (action-based). For our purposes, we'll focus on the Cognitive domain.

According to Bloom, learning in the Cognitive domain is divided into six hierarchical stages: knowledge, comprehension, application, analysis, synthesis and evaluation. (See the table below.)

Stage | Description | Example | Teaching technique | Evaluation instrument
Knowledge | Basic facts | "Here is how Docker works" | Seeing and listening | True and false questioning
Comprehension | Understanding terms | "What is the difference between a Docker image and a Docker container?" | Interactive training | Multiple choice and written exams
Application | Working with terms and facts together in a logical way | "Create a Dockerfile for a web application running on Python Flask" | Teaching concepts to others | Solve a simple problem using what you have learned
Analysis | Understanding how terms, facts and technologies work together | "Name 3 ways to optimize the performance of a Docker container" | Demonstration of given ideas in similar but not exact situations | Solve a related problem that only indirectly relates to what you have learned
Synthesis | Creating new components and ideas from facts you have been given | "Create a microservice architecture that leverages Docker technology" | Focused instruction that demands creativity on the part of the learner | Create new products or generate new ideas on peripheral topics based on what has been learned
Evaluation | Being able to judge and evaluate ideas critically | "Improve this Docker-based microservice architecture" | Evaluation of others | Provide intelligent feedback on the ideas synthesized by others

It's essential that anyone who devises technical training understands these stages. They also need to know that the needs and abilities at each stage, along with the proper teaching techniques to use at each stage, are different. While Bloom largely based his work on children, the underlying concepts still apply to learning at all ages.

Typical training techniques are inefficient

Now, you might not see the connection between Bloom's taxonomy and technical training, but it does exist. Let's dig deeper.

The answer has to do with the actual mechanics of how to teach a particular topic. As mentioned above, most technical training — either virtual or in-person — involves an instructor delivering content and conducting exercises around that content in real time. Typically, most of the instruction is focused on the lower stages of Bloom’s Taxonomy — Knowledge, Comprehension and Application — while limited attention is given to the higher stages — Analysis, Synthesis and Evaluation.

And, practically no consideration is given to the student's ability to retain the information. For example, the instructor talks, the students listen and maybe a few questions are asked. There might be a "quiz" activity. There will probably be an accompanying PowerPoint deck. Then the class moves on. This is the typical approach, and it's intrinsically wasteful.

Using instructor time to teach to the lower-level stages of the Cognitive domain is wildly inefficient. Essentially, we ask the instructor to be little more than a narrator of facts, terms and techniques. Even if the instructor is the most entertaining narrator in the world — which many are — there is little guarantee that the student will actually retain the content. While this has become the most prominent approach to technical training, it's not the optimal one. Fortunately, there's a better way.

How to save money and get better results

Technical training costs money, a lot of it. I would know: I'm a technical instructor. It takes me about three weeks to prepare a course that will be delivered over three days. I charge accordingly, too.

Yet, for all the planning, expertise and time I put into my work, I find that at least 50-75% of my time in class is spent on rote learning instruction. That’s right, I spend at least half my time on “Here is technology XYZ; Here’s how it works; Try it out.”

Now, let’s get to the center of the problem. What can be done? The answer is to use machines to teach the lower stages of Bloom’s taxonomy and use humans to teach the higher stages.

If you segment the educational experience into machine-based and instructor-led sessions, I believe you will save money and get better results.

For example, let’s say a company wants to offer an introductory, three-day course about Kubernetes. The ultimate educational objective is that a student will be able to create a secure, container-based, microservices oriented application (MOA) and deploy that MOA to a Kubernetes cluster with command-line tools.

Instead of three days of instructor-led training, we transform it into a four-day course. The first two days are spent in self-paced, computer-based learning to acquire the basic facts, terms and techniques that will be used in the final two days of instructor-led training. The final two days focus on the creation and deployment of the application, incorporating the knowledge learned in the previous two days.

In terms of dollars and cents, the cost of the four-day course is actually lower, because only two days of instructor time are required instead of three. There is some additional expense in subscription fees that are common to formal interactive learning environments, but such fees tend to be dramatically lower than the day rate of an instructor. If you really want to save money, you can always have a tech lead in your company create the list of YouTube videos that attendees will be required to view before they attend the instructor-led sessions. Not only can this list be shared with the instructor as an informal course outline, but it can also be reused as part of an overall reference library.

Put it all together

Continuous learning practices are essential for the modern professional, particularly in technology. Companies that want to get the best talent will go to great lengths to offer the best benefits. One such benefit is ongoing support for technical training. However, these companies often fall short, not for lack of desire, but usually because of a lack of awareness of the basics of educational psychology.

However, companies that understand the six stages of Bloom’s taxonomy will be able to provide a more effective technical educational experience at less cost. The bottom line is that you want to have students do rote learning at their own pace with computer-based instruction and provide instructor-led delivery to satisfy the higher-order educational objectives that stress enhancing analytic and creative skills. Not only is such segmentation a better approach to technical education, it’s a more efficient use of your company’s training budget.


October 3, 2019  5:19 PM

What developers need to know about an Alexa vulnerability

JudithMyerson Profile: JudithMyerson

A security vulnerability discovered in July 2019 showed that Amazon's cloud servers retained voice recordings of Alexa interactions from customers. Furthermore, improper handling of the records retained by Amazon, along with this Alexa vulnerability, could expose users' private data to a programming-savvy attacker.

All Alexa virtual assistants automatically transmit all recording data back to Amazon servers. The company saves storage space by retaining certain voice recordings and deleting others at any time. Amazon employees routinely listen to recordings to determine how well Alexa understands requests and improve the service. Recordings are linked with an account number and the user’s first name.

Amazon gives users the option to delete their interactions with Alexa, but doesn't give them the option to prevent Amazon from retaining certain voice recordings. Indefinite record retention implies a lack of a private data retention policy for Amazon's servers: the company, not the consumer, decides the dates on which records must be removed from its primary storage systems.

A similar security concern also exists in Alexa for Business. Developers use the service to build, test and deploy Jenkins code to the cloud. Just like the aforementioned Alexa vulnerability, developers can delete recordings on their end, but don’t have the option to control what records Amazon may retain.

Alexa for Business

Before you start with Alexa for Business, you'll need to set up an Amazon Echo device or another Alexa-enabled speaker. You can download the Alexa app to any smartphone or tablet with iOS 9.0 or higher, Android 6.0 or higher or Fire OS 3.0 or higher. If you want to set up from a desktop computer, you'll need a private Wi-Fi connection.

Alexa for Business allows the development team leader to control who has access to any part of a business application. Alexa devices can be shared for anyone to use in a conference room or any other common area in a workplace.

One tool included with Alexa for Business is the Alexa Skills Kit — the SDK that developers use to add skills to Alexa. The developer builds an Alexa skill with a voice user interface and a backing cloud service: the interface maps user utterances to requests, and the cloud service tells Alexa how to respond to each request. Be aware that Alexa skills aren't available on an intranet.

A custom Alexa skill can be hosted as an AWS Lambda function that is triggered by events, like when a user talks to Alexa. Developers can run the code in the cloud without having to provision and manage servers. However, AWS serverless security flaws could allow for an injection of malicious event data.
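
For a sense of what that looks like in practice, here's a minimal sketch of a Lambda-hosted skill handler using the Alexa Skills Kit SDK v2 for Java; the intent name and the response text are hypothetical:

import com.amazon.ask.dispatcher.request.handler.HandlerInput;
import com.amazon.ask.dispatcher.request.handler.RequestHandler;
import com.amazon.ask.model.Response;
import com.amazon.ask.request.Predicates;
import java.util.Optional;

// Handles the hypothetical "HelloIntent" defined in the skill's interaction model.
public class HelloIntentHandler implements RequestHandler {
    @Override
    public boolean canHandle(HandlerInput input) {
        // Claim only the requests the interaction model mapped to HelloIntent
        return input.matches(Predicates.intentName("HelloIntent"));
    }

    @Override
    public Optional<Response> handle(HandlerInput input) {
        // Tell Alexa what to say back to the user
        return input.getResponseBuilder()
                .withSpeech("Hello from Alexa for Business.")
                .build();
    }
}

Registering the handler in a SkillStreamHandler subclass exposes it as the Lambda entry point, and that Lambda-facing event boundary is exactly where malicious event data would arrive.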

It’s important that you check for any possible Alexa vulnerabilities before you install any components of Alexa for Business in the cloud. Also, data privacy laws and record retention issues are two important areas to be aware of.


October 1, 2019  1:22 PM

Forensic analysis helps close gaps in hypervisor vulnerabilities

JudithMyerson Profile: JudithMyerson

In June 2019, the National Institute of Standards and Technology (NIST) published its draft of NISTIR 8221, “A Methodology for Enabling Forensic Analysis Using Hypervisor Vulnerabilities Data.”

The report provides guidance on how to use forensic analysis to detect, reconstruct and prevent attacks based on hypervisor vulnerabilities as they occur. The report focuses on two open-source hypervisors — Xen (in the Linux kernel) and Kernel-based Virtual Machine (KVM) — to illustrate the methodology.

Xen vs. KVM

Xen is a type 1 hypervisor whereas KVM can be either a type 1 or type 2 hypervisor.

A Type 1 hypervisor is a bare-metal hypervisor that runs directly on the host’s hardware to control the hardware and manage guest operating systems.

A type 2 hypervisor runs as a process on top of a host operating system. KVM, once loaded into the kernel, effectively adds type 1 capabilities to most Linux operating systems. For example, Red Hat Enterprise Virtualization uses KVM, and Citrix uses Xen in the commercial XenServer.

NIST collected and analyzed 83 Xen and 20 KVM hypervisor vulnerabilities recorded in the NIST National Vulnerability Database in 2016 and 2017. They were classified based on the underlying hypervisor functionality involved, the attack type and the attack source. Hypervisor vulnerabilities reported after 2017 were not included in the analysis.

Forensic analysis

Two sample attacks were launched to exploit vulnerabilities in the hypervisors' functionality. Upon the conclusion of the sample attacks, NIST was able to identify the evidence gaps that must be filled to detect and reconstruct the attacks for further examination. The techniques required to gather the missing evidence were then incorporated into forensic analysis during subsequent attack runs.

The types of attacks caused by Xen and KVM hypervisor vulnerabilities include:

  1. Denial-of-service (DoS);
  2. Privilege escalation;
  3. Information leakage;
  4. Arbitrary code execution;
  5. Unauthorized file read, modify and delete; and
  6. Other, such as data corruption or canceling of administrators’ other operations.

The most common attack type was DoS, which accounted for 44% of the Xen vulnerabilities and 63% of the KVM vulnerabilities. This result indicates that an attack on the availability of cloud services could be a serious security problem. The other top attack types were privilege escalation (30% for Xen, 11% for KVM), information leakage (14% for Xen, 19% for KVM) and arbitrary code execution (7% for both Xen and KVM).

Although each of these attacks occurs less frequently than a DoS attack, they all carry potentially serious risks, such as user information leaks or compromised host or guest VMs.

The report also divided the attack sources into five categories:

  1. Administrator;
  2. Guest OS administrator;
  3. Guest OS user;
  4. Remote attacker; and
  5. Host OS user.

The most common attack source was guest OS users — 76% for Xen and 85% for KVM. NIST suggests that cloud providers closely monitor guest users' activities to reduce attack risks. The second most common attack source was guest OS administrators — 20% for Xen and 5% for KVM.

While the forensic analysis approach NIST has taken to close these gaps is encouraging, enterprise users should move beyond these results for better security. Considering other factors — such as how to overcome a hypervisor's inability to generate sufficient entropy — can tighten your data protection and reduce hypervisor vulnerabilities in your systems.


September 29, 2019  8:31 PM

How to fix the Eclipse ‘No Java virtual machine was found’ install error

Cameron McKenzie Profile: cameronmcnz

<tl;dr>

To fix the Eclipse "No Java virtual machine was found" error, simply edit the eclipse-inst.ini file and add the -vm flag, which points the installer to the java utility in the JDK's bin directory:

-vm
/opt/jdk-13/bin/java

</tl;dr>

Nothing saps a developer's enthusiasm to learn a new language feature or play around with a new Java distribution more than a disconcerting error message during environment setup. If you're attempting an Eclipse installation on Ubuntu or Windows, this sort of error message is exactly what you'll see if the installer can't find your JRE or JDK.

The Eclipse "No Java virtual machine was found" install error on Ubuntu.

A JRE or JDK must be available

The only prerequisites to install Eclipse are a modern Linux or Windows operating system and a compatible JDK installation, with preference given to Java 8 versions and above. But personally, after I performed a Java 13 JDK install and properly set the JAVA_HOME and PATH variables, I still ran into the dreaded "A JRE or JDK must be available in order to run the Eclipse Installer"1 error. That's some bad news.

But the good news? There’s a simple fix to the Eclipse “No Java virtual machine was found” error when you install it on Ubuntu.

No Java virtual machine found

The JVM-not-found problem stems from the fact that, by default, the Eclipse installer looks for a JRE or JDK installation in a folder relative to where the installer is run. To override this default behavior, simply add a -vm flag to the eclipse-inst.ini file and point it at the location of the java utility in the JDK install's bin folder. For me, the setting looked like this:

-vm
/opt/jdk-13/bin/java

Add the -vm flag to fix the Eclipse "A JRE or JDK must be available" error.

And that’s it. Make the change, save the file, and then re-run the Eclipse installer. The Eclipse “No Java virtual machine was found” error will go away, and the Eclipse IDE will be successfully installed on your desktop.

The "No Java virtual machine was found" error happens even if a JDK is installed.

No virtual machine found fix overview

In summary, the steps to fix the Eclipse “No Java Virtual machine was found” error are:

  1. Edit the eclipse-inst.ini file
  2. Add the -vm flag
  3. Point the -vm flag to the JDK's bin/java location
  4. Save the file
  5. Re-run the Eclipse installer

Citations

  1. The full text of the error: A Java Runtime Environment (JRE) or Java Development Kit (JDK) must be available in order to run Eclipse Installer. No Java virtual machine was found after searching the following locations: /eclipse-inst-linux64/eclipse-installer/jre/bin/java java in your current path.


September 15, 2019  9:28 PM

10 Oracle Code One 2019 sessions to check out

Cameron McKenzie Profile: cameronmcnz

As Oracle Code One 2019 kicks off in San Francisco, I hope you’ve already logged into the Oracle Open World (OOW) schedule builder and booked yourself into all of the sessions you want to attend. Unlike smaller conferences where you can easily slide into any session that has open seats, OOW tightly enforces its registration rules. If you’re not enrolled in a given session, they won’t allow you in. Furthermore, sessions with popular speakers quickly get booked to capacity, and waiting lists are prohibitively long.

But, if you're still actively building your schedule for Oracle Code One 2019, here are a few of the sessions I recommend:

  1. Beyond Jakarta EE 8 [DEV1391]
    If you know this site's domain name, it should come as no surprise that a primary interest of mine is what's going on in the world of server-side enterprise development. And who could be more informed on that topic than Mark Little, Will Lyons and Ian Robinson? If the CTO of JBoss, a senior director of WebLogic and WebSphere's chief architect can't speak to the state of modern server-side development, I don't know who can.
  2. Jakarta EE Community BOF [BOF4151]
    Again, as the editor of this site, I feel no compulsion to justify my interest in the topic of Jakarta EE. But beyond it being my raison d'être, session speakers Ivar Grimstad and Reza Rahman have been sources of significant insight for TheServerSide in the past, and it's appealing to chat with them in a Birds of a Feather format.
  3. Advances in Java Security [DEV6321]
    Smart software developers pay attention to the small details, and one of the most often overlooked details is security. Security can be a dry topic, but Jim Manico's presentation skills more than compensate. Any Manico session where he talks about Java security is recommended.
  4. Continuous Delivery with Docker and Java: The Good, the Bad, and the Ugly [DEV3737]
    This Daniel Bryant session captures all of the popular buzz words, so you’ll likely have to put yourself on a waiting list for this one. But as the industry further embraces a DevOps mindset, the ability to know how Docker and Java fit in with continuous delivery practices is a valuable asset.
  5. Cross-Platform Development with GraalVM [DEV3907]
    The GraalVM, with its ahead-of-time compilation and ability to run multiple languages, is a real game-changer. But sadly, most software developers rarely raise their periscope above the waters of the standard JVM. It should be interesting to hear Oracle's Tim Felgentreff talk about the state of cross-platform development with GraalVM.
  6. Everything You Ever Wanted to Know About Java and Didn’t Know Whom to Ask [DEV6268]
    I’m attending this one largely because Azul’s CTO Gil Tene is involved. The people at Azul tend to be technical leaders in the industry, and I’m sure I’ll learn something about Java and the JVM that will surprise me.
  7. Open Source Java-Based Tools: Hacking on Cool Open Source Projects [DEV6544]
    I recently wrote an epic tome of an article about Java programming tools. I’m going to sit in this session to see if there were any important points that my thesis paper on the topic missed.
  8. Preventing Errors Before They Happen: The Checker Framework [TUT3339]
    It seems as though every developer wants to talk about microservices and cloud-native development, but when attendees leave this conference and go back to their daily grind, many of their clock-cycles will go to waste on troubleshooting applications. So why not learn from my fellow CodeRanch alumnus Michael Ernst about how to deal with Java exceptions before they start to present themselves in the logs? It might even be the motivation I need to write a Checker Framework tutorial in the coming quarter.
  9. Choosing the Right Java Vendor and Strategy [DEV1969]
    Speaking of CodeRanch alumni, Jeanne Boyarsky will speak on the topic of how to choose the right Java vendor strategy. This session is of timely interest to me, as I recently revised a popular article about how to install the JDK, only to find myself talking more about the highly confusing JVM vendor market than the actual Java install process. Expect an article on which Java vendor to choose and why to follow.
  10. Hands-on Java 11 OCP Certification Prep – BYOL [HOL1812]
    I wrote a little Java Certification guide a few years ago and have often thought about reworking it for the modern market. But, it would have to be updated. Maybe Scott Selikoff can give me a good idea of what would be involved with upgrading a Java 5 quick-study guide to one that covers Java 11? I have a hunch that an explanation of the ins and outs of modern Java interfaces to a new developer is one of the stumbling blocks.

This is by no means an exhaustive list of the sessions I’ll attend at Oracle Code One 2019, but it is a good percentage of them. If you see me there, I encourage you to say hello.


September 13, 2019  3:42 AM

How to get the most out of Oracle Code One 2019

cameronmcnz Cameron McKenzie Profile: cameronmcnz

If Oracle Code One 2019 is your first time at a major software conference, it will serve you well to follow some sage advice and insight from a veteran attendee of past JavaOne and Oracle Open World conferences.

The first piece of advice, which it is far too late to act on, is to make sure you’ve got a hotel booked. I did a quick search on Expedia, and San Francisco’s Hampton Inn is listed at over $700 a night. And the true surprise isn’t the price; it’s the fact that the hotel actually has any availability. If you’re the type of person who books hotels at the last minute, I’d say you’d be lucky to find a reasonably priced hotel in Oakland or San Jose, let alone San Francisco.

Schedule those Oracle Code One sessions

For those who have their accommodations all set, the next sage piece of conference advice is to log on to the Oracle Code One 2019 session scheduler and reserve a seat in the sessions you wish to attend. Various sessions on Eclipse MicroProfile, microservices and reactive Java are already overbooked. The longer you wait to formulate your schedule, the fewer sessions you’ll have to choose from.

When choosing sessions, I find the speaker to be a more important criterion than the topic. Most speakers have a YouTube video or two of themselves doing a presentation. Check those out to see if the speaker is compelling. An hour can be quite a long time to sit through a boring slide show, but an exciting speaker can make an hour go by in an instant, and if you’re engaged, you’re more likely to learn something.

Skip the Oracle keynotes

One somewhat contrarian piece of advice I’m quick to espouse is for attendees to skip the Oracle keynotes, especially the morning ones. That’s not to say the keynotes are bad, but it can be a hassle to get a seat if you aren’t there early enough, and you can’t always hear everything in the auditorium. A better alternative is to stream the keynote from your hotel room, or better yet, watch the video Oracle uploads to their YouTube channel while you eat lunch.

Enjoy the party

One other big piece of advice for Oracle Code One 2019: enjoy San Francisco, especially if it’s your first time in the city. It’s the smallest alpha city in the world, but it is an alpha city. There are plenty of parties, meet-ups and events you’ll be invited to, and it’s worth taking up any offers you manage to get. That said, keep an eye on how much gas you have left in the tank at the end of the day, because you want to be able to make it to all of your morning sessions the next day.

If it’s your first time at a major conference, I assure you that you’ll have a great time at Oracle Code One 2019. San Francisco is a great city, and the greatest minds in the world of modern software development will be in attendance with you.


September 4, 2019  4:18 PM

How to deploy a JAR file to Tomcat the right way

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The question of how to deploy a JAR file to Tomcat drives a surprising amount of traffic to TheServerSide. It’s a strange search query, because you don’t really deploy JAR files to Tomcat.

Apache Tomcat is a servlet engine that runs Java web applications, which are packaged as web application archive files, or WARs. A WAR file is what gets deployed to Tomcat, not a JAR file. But even though a JAR file isn’t something you deploy to Tomcat directly, the question is worth further exploration.

WAR file deployment to Tomcat

TheServerSide has a number of tutorials on how to deploy a WAR file to Tomcat, including a Maven Tomcat deploy or a WAR file deployment with Jenkins. If that’s the actual issue that needs to be resolved, I highly recommend that you view those tutorials.

However, that’s not to say there’s no relationship at all between JAR files and applications that run on Tomcat. Frameworks such as Spring and Hibernate are packaged in JAR files, and common utilities a team might put together also get packaged as JARs. These files need to be accessible to web applications hosted on the Apache Tomcat server at runtime.

I have an inkling that when developers ask how to deploy a JAR file to Tomcat, they actually mean to ask where JAR files should be placed to ensure they are part of the Apache Tomcat classpath at runtime and subsequently accessible to their deployed apps. After all, there’s nothing worse than deploying an application and running into a bunch of ClassNotFoundExceptions in the Tomcat logs.
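
To see that failure mode concretely, here’s a minimal sketch: a standalone class that checks whether a driver class can be resolved at runtime. The MySQL driver class name is used purely as a hypothetical example; if the JAR that contains it isn’t on the classpath, the lookup fails with the very ClassNotFoundException described above.

public class DriverCheck {
    public static void main(String[] args) {
        try {
            // Resolves only if the JAR containing this class is on the classpath
            Class.forName("com.mysql.cj.jdbc.Driver");
            System.out.println("Driver JAR found on the classpath");
        } catch (ClassNotFoundException e) {
            System.out.println("Driver JAR missing: " + e.getMessage());
        }
    }
}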

JAR files and Tomcat

The right place to put a JAR file to make its contents available to a Java web application at runtime is in the WEB-INF\lib directory of the WAR file in which the application is packaged. With very few exceptions, the JAR files a Java web application must link to at runtime should be packaged within the WAR file itself. This approach helps reduce the number of external dependencies a web application has, and at the same time eliminates the potential for classloader conflicts.
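
As a rough sketch, a properly packaged WAR looks something like the following layout, with hypothetical application and JAR names:

mywebapp.war
 ├── index.jsp
 └── WEB-INF
     ├── web.xml
     ├── classes
     │   └── (compiled application classes)
     └── lib
         ├── hibernate-core.jar
         └── team-utils.jar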

Sometimes there are common utilities — such as a set of JDBC drivers — that are so ubiquitously required, it makes more sense to place them directly in a Tomcat sub-folder, and not actually package them inside of a WAR.

If every application hosted on your Tomcat server uses a MySQL database, it would make sense to place the MySQL database drivers in Tomcat’s \lib directory, and not in the WEB-INF\lib directory of each WAR. Furthermore, if you upgrade the database and the JDBC drivers need to be upgraded along with it, updating the JAR file in that one shared location means all of the applications start using the same set of updated Java libraries at the same time.
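
For instance, a shared JNDI DataSource defined in Tomcat’s conf/context.xml, sketched below with hypothetical connection details, only works if the MySQL driver JAR sits in Tomcat’s \lib directory, because it is the server, not the individual web application, that loads the driver class.

<!-- conf/context.xml: the driverClassName below can only be resolved
     if the MySQL driver JAR is in Tomcat's \lib directory -->
<Resource name="jdbc/AppDB" auth="Container"
          type="javax.sql.DataSource"
          driverClassName="com.mysql.cj.jdbc.Driver"
          url="jdbc:mysql://localhost:3306/appdb"
          username="appuser" password="secret"
          maxTotal="20" maxIdle="5"/>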

A web app’s WEB-INF\lib folder and the Tomcat \lib directory are the best places to deploy JAR files in Tomcat.

A common organizational mistake is to place JAR files containing frameworks like JSF or Spring Boot in Tomcat’s \lib directory. People think that since every application deployed to Tomcat in their organization is built with Spring or JSF, it makes sense to put these JAR files in Tomcat’s \lib folder.

While this may work initially, as soon as one application needs to use an updated version of the JAR, a problem arises. If the shared JAR file is updated to a new version, all applications hosted on the Tomcat server that use that JAR file must be updated as well. This obviously creates unnecessary work and unnecessary risk, as even applications that don’t need to use the updated version must be regression tested. In contrast, if the required JAR file was packaged within the WAR file itself, you can avoid a mass migration issue.

JARs deployed to the Tomcat lib directory

One problem with the Tomcat \lib directory is the fact that it includes all of the standard libraries the server needs to implement the Servlet and JSP APIs. Right out of the box, that folder is filled with over 30 JAR files that are required at runtime. You don’t want to mess around with those files, because if any of them are deleted or disturbed, the Tomcat server will fail to start.

Some Tomcat administrators like to address this issue with a separate directory for application JAR files. Admins can do this with a simple edit to the common.loader property in Tomcat’s catalina.properties file. For example, to have Tomcat link to JAR files in a subdirectory named \applib, you can make the following change to the common.loader property:

# catalina.base is the Tomcat instance directory; in a default,
# single-instance install it is the same directory as catalina.home.
# Keep the default entries so Tomcat's own JARs still load, then
# append the new applib folder.
common.loader="${catalina.base}/lib","${catalina.base}/lib/*.jar","${catalina.home}/lib","${catalina.home}/lib/*.jar","${catalina.base}/applib/*.jar"

It should be noted that since Tomcat runs on a Java installation, it also has access to any JAR file placed on the classpath of the JVM. This is known as the system classpath, and while it is a viable option for JAR files that need to be linked to by Tomcat, I wouldn’t recommend using the system classpath.

This location should only be used for resources that bootstrap the JVM or are referenced directly by the JVM at runtime. Furthermore, the system classpath is typically highly restricted, so it is unlikely that software developers or any continuous integration tools would ever have the credentials required to modify it.
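
To see what the system classpath actually contains, you can run a quick sketch like the one below. On a standard Tomcat installation, the server’s own system classpath is typically little more than bootstrap.jar and tomcat-juli.jar, which illustrates why it’s the wrong home for application JARs.

public class ClasspathDump {
    public static void main(String[] args) {
        // Prints the system classpath the current JVM was launched with
        System.out.println(System.getProperty("java.class.path"));
    }
}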

Tomcat JAR deployment options

In summary, I think that when you ask how to deploy a JAR file to Tomcat, you’re really wondering how to make a JAR file available to your web applications at runtime. There are three recommended options to make this happen:

  1. Package the JAR file in the WEB-INF\lib folder of the Java web application;
  2. Place the JAR file in the \lib subfolder of the Apache Tomcat installation;
  3. Configure a folder for shared JAR files by editing Tomcat’s common.loader property in the catalina.properties file.

If you follow this advice, your JAR file deployment issues with Tomcat should disappear, and ClassNotFoundExceptions in your logs will become a thing of the past.

