Coffee Talk: Java, News, Stories and Opinions

May 10, 2019  6:11 PM

Five ways to fix Git’s ‘fatal: repository not found’ error

cameronmcnz Cameron McKenzie Profile: cameronmcnz

There’s nothing worse than joining a new development team and eagerly cloning the existing source code repo only to run head first into Git’s ‘fatal: repository not found’ error. For those who struggle with that problem, here are five potential fixes to the frustrating repository not found error message.

1. You did not authenticate

If you attempt to connect to a private GitHub or BitBucket repository and fail to authenticate, you will receive the repository not found error. To ensure you are indeed authenticating, connect to the repository and include your username and password in the Git URL (the account and repository names below are placeholders):

git clone https://username:password@github.com/your-account/your-repo.git

2. Your password has changed

Have you changed your password lately? If you connect from a Microsoft-based workstation, the Windows Credentials Manager may transparently submit an old password every time you clone, pull or fetch. Make sure your computer doesn’t cache any old, out of date passwords and cause your connection to be rejected.

3. You are not a collaborator

You may authenticate successfully against GitHub or BitBucket, but if you haven’t been made a collaborator on the project, you won’t be able to see the repository and will again trigger the fatal: repository not found error. If you’re not a collaborator on the project, contact one of the repository administrators and have them add you to that role.

4. Incorrect case or a word misspelled

If your source code management tool is hosted on a Linux distribution, the repository name may be case sensitive. Also watch out for creative repository spellings, such as a zero instead of the letter O, or a one in place of the letter L. If you can copy and paste the git clone command from provided documentation, do that.

5. The git repository has been deleted

If the repository was deleted or renamed, you’ll obviously hit a Git repository not found error when you attempt to clone or fetch from it. If all else fails, check with the team lead to ensure that the remote repository does indeed still exist. One way to fix that problem is to log into your DVCS tool as an administrator and actually create the Git repository.

If you have any more insights on why developers might run into Git’s ‘fatal: repository not found‘ error, please add your thoughts to the comments.

May 9, 2019  8:42 PM

What I learned from the Google I/O 2019 keynote address

BarryBurd Profile: BarryBurd

Before the start of the Google I/O 2019 keynote address, I wondered what I’d learn in my role as an application developer. But when the keynote begins, I find myself thinking more like a consumer than a developer. Instead of thinking, “What new tools will help me create better apps,” I’m thinking about the new features that will make my life easier as a user of all these apps.

The keynote starts with a demo of a restaurant menu app. You point your phone at a menu and the Lens app shows you an augmented reality version of it. The app indicates the most popular items on the menu and, if you’re willing to share some data, the app highlights items that you might particularly like. For a strict junketarian like me — someone who eats junk food — the app would highlight cheeseburgers and chocolate desserts.

If you tap on a menu item, the app shows you pictures of it along with pricing and dietary information. When you’ve finished your meal, the app calculates the tip and can even split the bill among friends.

Lens has always been a fascinating app, but now it offers translation capabilities. A user can point a phone at text in a foreign language, and the app will overlay the text with a translated version in your language that mimics the size, color and font of the original text. In addition to the visual translation, Lens can also read the text out loud in your native language.

Last year, the keynote speakers stunned attendees with a demonstration of the Duplex app. A computer-generated voice booked an appointment time with a live person at a hair salon. I was skeptical, and I said so in my report for TheServerSide. But this year, they’ve announced the upcoming deployment of Duplex on Pixel phones in 44 states.

Do I trust an app to schedule my appointment for a root canal? The app asks for my approval before it finalizes an appointment, but I’m not sure that I want to remind the app to avoid early morning appointments with a friendly reminder of “Hey Google, let me sleep as long as I want.”

More from Google I/O 2019

One of the overriding developments from the Google I/O 2019 keynote is the ability to perform speech recognition locally on a user’s phone. Google engineers have reduced the size of the voice model from 100 GB to half a gigabyte. In the near future, you’ll be able to get help from Google Assistant without sending any data to the cloud, and you’ll be able to talk to the Assistant even with your WiFi and cellular data connections turned off.

Users will see a noticeably faster response time from Google Assistant. In a demo, the presenter used the Assistant to open the Photos app, select a particular photo, and share it in a text message. This happened so quickly that the conversation with Google Assistant seemed to be effortless. Best of all, there was no need to repeat the wake phrase “Hey, Google” for each new command.

Google Maps is becoming smarter too. When Maps is in walking mode, the app will replace its street map image with an augmented reality view of the scene. If you hold your phone in front of you, the app shows you what the rear-facing camera sees, and adds labels to help you decide where to go next. In the near future, maybe you’ll see people on the streets with their phones right in front of their faces. Who knows, maybe it’s time to revive Google Glass.

In the keynote and the breakout sessions, I’m surprised to hear so much about foldable phones. With Samsung’s Galaxy Fold troubles, I got the impression that folding phones were more than a year away. But the Google I/O speakers talked about Android’s imminent folding phone display features, and discussed models that will be available in the next few months.

Google I/O 2019 is a feast for the consumer side of my identity. I came to the conference as a cool-headed developer, but I’m participating as an excited, wide-eyed user.

May 5, 2019  9:52 PM

How to install Tomcat as your Java application server

cameronmcnz Cameron McKenzie Profile: cameronmcnz

If you’re interested in Java-based web development, you’ll more than likely need to install Tomcat. This Tomcat installation tutorial will take you through the prerequisites, show you where to download Tomcat, help you configure the requisite Tomcat environment variables, and finally kick off the Tomcat server and run a couple of example servlets and JSPs to prove a successful installation.

Tomcat prerequisites

There are minimal prerequisites to install Tomcat. All you need is a version 1.8 or newer installation of the JDK with the JAVA_HOME environment variable set up, and optionally the JDK’s bin folder added to the Windows PATH. Here is a Java installation tutorial if that prerequisite is yet to be met.

If you are unsure as to whether the JDK is installed — or what version it is — simply open up a command prompt and type java -version. If the JDK is installed, this command will display version and build details.

C:\example\tomcat-install\bin>java -version
java version "1.8.0"
Java(TM) SE Runtime Environment (build pwa6480sr3fp20-20161019_02(SR3 FP20))
IBM J9 VM (build 2.8, JRE 1.8.0 Windows 10 amd64-64 Compressed References 20161013_322271 (JIT enabled, AOT enabled)
J9VM - R28_Java8_SR3_20161013_1635_B322271
GC - R28_Java8_SR3_20161013_1635_B322271_CMPRSS
J9CL - 20161013_322271)
JCL - 20161018_01 based on Oracle jdk8u111-b14

Download Tomcat

You can obtain Tomcat from the project’s download page. Find the zip file that matches your computer’s architecture. This example of how to install Tomcat is on a 64-bit Windows Xeon machine, so I have chosen the 64-bit option.

Unzip the file and rename the folder to tomcat-9. Then copy the tomcat-9 folder out of the \downloads directory and into a more suitable place on your file system. In this Tomcat tutorial, I’ve moved the tomcat-9 folder into the C:\_tools directory.

Tomcat Home

Tomcat installation home directory

Tomcat environment variables

Applications that use Tomcat seek out the application server’s location by inspecting the CATALINA_HOME environment variable value. So, create a new environment variable named CATALINA_HOME and have it point to C:\_tools\tomcat-9.

To make Tomcat utilities such as startup.bat and shutdown.bat universally available to command prompts and Bash shells, you can put Tomcat’s \bin directory on the Windows PATH, but this isn’t required.
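As a quick sanity check, a few lines of Java can confirm that the variable is visible to programs on the machine. This is just an illustrative sketch; the CatalinaHomeCheck class is mine, not part of Tomcat:

```java
public class CatalinaHomeCheck {

    // Build the message a caller would print; a warning is returned
    // when the variable is missing, so the condition is easy to test
    static String describe(String catalinaHome) {
        if (catalinaHome == null || catalinaHome.isEmpty()) {
            return "CATALINA_HOME is not set";
        }
        return "CATALINA_HOME is set to: " + catalinaHome;
    }

    public static void main(String[] args) {
        // Read CATALINA_HOME the same way Tomcat-aware tooling locates the server
        System.out.println(describe(System.getenv("CATALINA_HOME")));
    }
}
```

If the variable was set correctly, the program prints the C:\_tools\tomcat-9 path; otherwise it warns that the variable is missing.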


Set CATALINA_HOME for Tomcat installation

How to start the Tomcat server

At this point, it is time to start Tomcat. Simply open a Command Prompt in Tomcat’s \bin directory and run the startup.bat command. This will start Tomcat and make it accessible through http://localhost:8080.

example@tutorial MINGW64 /c/example/tomcat-9/bin
$ ./startup.bat
Using CATALINA_BASE: "C:\_tools\tomcat-9"
Using CATALINA_HOME: "C:\_tools\tomcat-9"
Using CATALINA_TMPDIR: "C:\_tools\tomcat-9\temp"
Using JRE_HOME: "C:\IBM\WebSphere\AppServer\java\8.0"
Using CLASSPATH: "C:\_tools\tomcat-9\bin\bootstrap.jar;C:\_tools\tomcat-9\bin\tomcat-juli.jar"

After you verify that the Apache Tomcat landing page appears at localhost:8080, navigate to http://localhost:8080/examples/jsp/ and look for the option to execute the Snoop servlet. This Tomcat example servlet will print out various details about the browser and your HTTP request. Some values may come back as null, but that is okay. So long as the page appears, you have verified that the Tomcat installation was a success.

verify tomcat install

Tomcat installation verification

And that’s it. That’s all you need to do to install Tomcat on a Windows machine.

May 1, 2019  4:45 PM

How not to write a Git commit message

cameronmcnz Cameron McKenzie Profile: cameronmcnz

I’m working on an article that outlines how to write a good Git commit message, along with a variety of Git commit message conventions and rules that developers should follow. But, as I write about the best practices developers should follow, I constantly find myself in an internal discussion of what developers should not do.

I want the original article to contain a list of best practices, not a list of things not to do. So, I’ve trimmed the Git commit worst practices parts out of it and decided to list them here.

Git commit anti-patterns

What makes a bad Git commit message? What are some things developers shouldn’t do? Here’s my top 10 list:

  1. Don’t go beyond 50 characters in the subject line. It should be easy to succinctly describe any Git commit.


    They ain’t gonna read your long Git commit message.

  2. Don’t use passive voice or past tense when you annotate commits. Always use the active voice.
  3. Don’t add unnecessary capitalization to the subject line. Standard rules for capitalization aside, only capitalize the first letter of the first word in the subject line. Definitely don’t shout in all caps, snake_case or, worst of all, SCREAMING_SNAKE_CASE. Also, don’t put a period at the end of the subject line.
  4. Don’t try to format your commit message with superfluous asterisks, ampersands and hash marks.
  5. Don’t forget that someone troubleshooting might use a Unix utility that doesn’t automatically perform text wrapping. Instead, add a carriage return at or around the 70-character mark in the body.
  6. Don’t describe the low-level code you wrote. If someone wants to see the code you wrote, they’ll do a git diff. Commit messages should describe context and purpose, not implementation.
  7. Don’t forget to separate the subject line from the commit body with a blank line.
  8. Don’t simply reference a JIRA ticket in your commit. People shouldn’t have to open a bug tracking tool to know why you made a change to the codebase.
  9. Don’t say something nasty about another member of the team, even if you don’t push the branch. Local commits have a tendency to unexpectedly make it into the central code base. A subject line of ‘My idiot team lead made me do this‘ likely won’t go over well in your annual review.
  10. Don’t go on ad nauseam in the body of the commit. No matter how brilliant your prose, the TAGRI rule always applies in the software development world.

What gets your Git goat?

This is a list of the top 10 Git commit mistakes I see fellow developers make, but it is by no means a complete compendium. As a developer, what do you see developers do with Git that really gets your goat? I’d be interested to hear what other developers see in the field, so please share your Git commit horror stories in the comments.

May 1, 2019  1:45 PM

An example of UnaryOperator in functional Lambda expressions

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The implementation of Java 8 lambda expressions introduced a number of new interfaces with esoteric names that can be somewhat intimidating to developers without any experience in functional programming. One such area is the functional UnaryOperator interface.

It may be academically named, but it is incredibly simple in terms of its purpose and implementation.

The function of the UnaryOperator

The function of the UnaryOperator interface is to take an object, do something with it and then return an object of the same type. That’s the unary nature of the function. One object type goes in, and the exact same type goes out.

For a more technical discussion, you can see from the UnaryOperator JavaDoc that the component extends the Function interface and defines a single method named apply.

public interface UnaryOperator<T> extends Function<T,T>

T apply(T t)
Applies this function to the given argument.

Type Parameters:
T - the input given to the function
T - the result of running the function

For example, perhaps you wanted to strip out all of the non-numeric characters from a String. In that case, a String that contains a bunch of digits and letters would go into the UnaryOperator, and a String with nothing but numbers would be returned. A String goes in and a String comes out. That’s a UnaryOperator in action.
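That description can be sketched in a few lines of code. The NumericFilter class name is mine, not from the article, but the operator does exactly what the paragraph above describes:

```java
import java.util.function.UnaryOperator;

public class NumericFilter {

    // A UnaryOperator that strips every non-numeric character from a String:
    // a String goes in, and a String comes out
    public static final UnaryOperator<String> DIGITS_ONLY =
        s -> s.replaceAll("[^0-9]", "");

    public static void main(String[] args) {
        System.out.println(DIGITS_ONLY.apply("abc123xyz456"));  // prints 123456
    }
}
```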

Definition of the term unary.


Implementation of the UnaryOperator example

To show you an old school, pre-Java 8 UnaryOperator example, we will create a single class named UnaryOperatorExample and provide the required apply method. The apply method is the single method required by all classes that implement the UnaryOperator interface.

We will use generics in the class declaration, <String>, to indicate this UnaryOperator’s apply method works exclusively on String objects, but the UnaryOperator interface is certainly not limited to text-based data. You can genericize the interface with any valid Java class.

package com.mcnz.lambda;

import java.util.function.UnaryOperator;

// Create a class that implements the UnaryOperator interface
public class UnaryOperatorExample implements UnaryOperator<String> {
  public String apply(String text) {
    return text + ".txt";
  }
}

class UnaryOperatorTest {
  public static void main(String args[]) {
    UnaryOperatorExample uoe = new UnaryOperatorExample();

    String text = "lambda-tutorial";
    String newText = uoe.apply(text);
    System.out.println(newText);
  }
}

When the class is executed, the result is the text string lambda-tutorial.txt written to the console.

Example UnaryOperator Lambda expression

If you implement the UnaryOperator interface with a complete Java class, the code is perfectly valid, but it defeats the purpose of working with a functional interface. The whole idea of functional programming is to write very sparse and concise lambda expressions. With a lambda expression, we can completely eliminate the need for the UnaryOperatorExample class and rewrite the entire application as follows:

package com.mcnz.lambda;

import java.util.function.UnaryOperator;

// A UnaryOperator lambda expression example
class UnaryOperatorTest {
  public static void main(String args[]) {
    UnaryOperator<String> extensionAdder = (String text) -> { return text + ".txt"; };
    String newText = extensionAdder.apply("example-function");
    System.out.println(newText);
  }
}

One of the goals of the lambda expression framework is to simplify the Java language and eliminate as much ceremony from the code as possible. As such, it should come as no surprise to discover that we can simplify our lambda expression further by rewriting the line of code that defines the lambda expression:

UnaryOperator<String> extensionAdder = (text) -> text + ".txt" ;

Java API use of the UnaryOperator function

With functions now tightly embedded throughout the Java API, interfaces such as Consumer and UnaryOperator tend to pop up everywhere. One of the most notable uses of UnaryOperator is as an argument to the iterate method of the Stream class.

static <T> Stream<T> iterate(T seed, UnaryOperator<T> f)

For the uninitiated, a method signature like this can be intimidating, but as this UnaryOperator example has demonstrated, the implementation of a lambda expression that simply takes and returns an object of the same data type really couldn’t be easier. And that’s the whole idea behind the lambda project — that in the end, Java programs will be both easier to read and easier to write.
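As a short sketch of that signature in use — assuming nothing beyond the standard library, and with the IterateExample class name being mine — a UnaryOperator can drive Stream.iterate to generate a sequence:

```java
import java.util.List;
import java.util.function.UnaryOperator;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class IterateExample {

    // Build the first five powers of two: the UnaryOperator is the
    // second argument to Stream.iterate, applied to each prior element
    public static List<Integer> firstFive() {
        UnaryOperator<Integer> doubler = n -> n * 2;
        return Stream.iterate(1, doubler)
                     .limit(5)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(firstFive());  // prints [1, 2, 4, 8, 16]
    }
}
```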

April 25, 2019  2:34 PM

How to write a screen scraper application with HtmlUnit

cameronmcnz Cameron McKenzie Profile: cameronmcnz

I recently published an article on screen scraping with Java, and a few Twitter followers pondered why I used JSoup instead of the popular, browser-less web testing framework HtmlUnit. I didn’t have a specific reason, so I decided to reproduce the exact same screen scraper application tutorial with HtmlUnit instead of JSoup.

The original tutorial simply pulled a few pieces of information from the GitHub interview questions article I wrote. It pulled the page title, the author name and a list of all the links on the page. This tutorial will do the exact same thing, just differently.

HtmlUnit Maven POM entries

The first step to use HtmlUnit is to create a Maven-based project and add the appropriate GAV to the dependencies section of the POM file. Here is the HtmlUnit dependency to include in the POM file’s dependencies section (use the latest available version):

<dependency>
  <groupId>net.sourceforge.htmlunit</groupId>
  <artifactId>htmlunit</artifactId>
  <version>2.34.0</version>
</dependency>


The HtmlUnit screen scraper code

The next step in the HtmlUnit screen scraper application creation process is to produce a Java class with a main method, and then create an instance of the HtmlUnit WebClient with the URL of the site you want HtmlUnit to scrape.

package com.mcnz.screen.scraper;

import com.gargoylesoftware.htmlunit.*;
import com.gargoylesoftware.htmlunit.html.*;

public class HtmlUnitScraper {
  public static void main(String args[]) throws Exception {
    // The URL of the page to scrape goes here
    String url = "";
    WebClient webClient = new WebClient();
  }
}

The HtmlUnit API

The getPage(URL) method of the WebClient class will parse the provided URL and return a HtmlPage object that represents the web page. However, CSS, JavaScript and a lack of a properly configured SSL keystore can cause the getPage(URL) method to fail. It’s prudent when you prototype to turn these three features off before you obtain the HtmlPage object.

HtmlPage htmlPage = webClient.getPage(url);
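One way to switch those three features off before the getPage() call is through the WebClient’s getOptions() object. This is a sketch that assumes HtmlUnit 2.x on the classpath; the ScraperSetup class and fetch method names are illustrative:

```java
import com.gargoylesoftware.htmlunit.WebClient;
import com.gargoylesoftware.htmlunit.html.HtmlPage;

public class ScraperSetup {

    public static HtmlPage fetch(String url) throws Exception {
        WebClient webClient = new WebClient();
        // Turn off the three features that most often cause getPage to fail
        webClient.getOptions().setCssEnabled(false);        // skip CSS processing
        webClient.getOptions().setJavaScriptEnabled(false); // skip JavaScript execution
        webClient.getOptions().setUseInsecureSSL(true);     // tolerate an unconfigured SSL keystore
        return webClient.getPage(url);
    }
}
```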

In the previous example, we tested our Java screen scraper capabilities by capturing the title of the web page being parsed. To do that with the HtmlUnit screen scraper, we simply invoke the getTitleText() method on the htmlPage instance:

System.out.println(htmlPage.getTitleText());

Run the Java screen scraper application

You can compile the code and run the application at this point and it will output the title of the page:

Tough sample GitHub interview questions and answers for job candidates

The CSS selector for the segment of the page that displays the author’s name is #author > div > a. If you insert this CSS selector into the querySelector(String) method, it will return a DomNode instance which can be used to inspect the result of the CSS selection. Simply calling the domNode’s asText() method will return the name of the article’s author:

DomNode domNode = htmlPage.querySelector("#author > div > a");
System.out.println(domNode.asText());

The last significant achievement of the original article was to print out the text of every anchor link on the page. To achieve this with the HtmlUnit Java screen scraper, call the getAnchors method of the HtmlPage instance. This returns a list of HtmlAnchor instances. We can then iterate through the list and output the URLs associated with the links by calling the getAttribute method:

List<HtmlAnchor> anchors = htmlPage.getAnchors();
for (HtmlAnchor anchor : anchors) {
  System.out.println(anchor.getAttribute("href"));
}

When the class runs, the following is the output:

Java screen scraper with HtmlUnit

Tough sample GitHub interview questions and answers for job candidates.
Cameron McKenzie.


JSoup vs HtmlUnit as a screen scraper

So what do I think about the two separate approaches? Well, if I were to write a Java screen scraper of my own, I’d likely choose HtmlUnit. There are a number of utility methods built into the API, such as the getAnchors() method of the HtmlPage class, that make common tasks easier. The API is updated regularly by its maintainers, and many developers already know how to use it because it’s commonly used as a unit testing framework for Java web apps. Finally, HtmlUnit has some advanced features for processing CSS and JavaScript, which allow for a variety of peripheral applications of the technology.

Overall, both APIs are excellent choices to implement a Java screen scraper. You really can’t go wrong with either one.

You can find the complete code for the HtmlUnit screen scraper application on GitHub.

HtmlUnit screen scraper

HtmlUnit screen scraper application

April 23, 2019  8:34 PM

Top 5 software development best practices you need to know

DmitryReshetchenko Profile: DmitryReshetchenko

Software is everywhere, but the process to create a new software product can be complicated and challenging. That’s why software development best practices are important and can help reduce costs and speed up processes.

Without goals, a software project doesn’t have direction. Projects should start with a clear definition of the planned software’s goals, a discussion of those goals with stakeholders and an evaluation of expectations and risks. Simultaneously, you should be ready for various challenges that can come up, and implement strategies to keep the development process on course.

Best practices aren’t always a revelation of thought. Sometimes they are obvious. But as obvious as they might be, they are often overlooked, and developers need to be reminded of them. These software development best practices are obligatory for all software development projects.

Top five software development best practices

  1. Simplicity

Any software should be created in the most efficient way, without unnecessary complexity. Simpler answers are usually more correct, and this idea applies perfectly to the development process. Simplicity coincides with minor coding principles such as Don’t Repeat Yourself (DRY) and You Aren’t Gonna Need It (YAGNI).

  2. Coherence

Teamwork is vital for big projects, and it’s impossible without a high level of consistency. Code coherence stands for the creation of, and adherence to, a common writing style for all employees who develop the software, so that managers or other coders can’t tell who the author of a given fragment is. When the whole codebase has the same style, it’s coherent.

Consistency helps a lot because colleagues will be able to test, edit or continue each other’s work. Conversely, inconsistent projects can confuse your team and slow down the development process. Here are some tools that will help you enforce a single style:

  • EditorConfig: A system for the unification of code written with different IDEs
  • ESLint: A highly customizable linter based on Node.js
  • JSCS: A linter and formatting tool for JavaScript
  • HTML Tidy: Another linter, for HTML, which also finds errors
  • Stylelint: A linter for CSS with various plugins
  3. Testing

Testing is essential for any product at any stage. From the very first test run to the final evaluations, you should always test the product.

Thanks to modern approaches and the rise of machine learning, engineers have access to powerful tools, such as automated algorithms that run millions of tests each second. Strategic thinking helps when you have to choose a testing type: functional, performance, integration or unit. If you choose the tools and testing types carefully, you can find a host of bugs and other issues and ideally fix them before you deploy your product. But don’t focus solely on test-driven development; remember the users and their needs.

  4. Maintenance

Unlike physical entities, software has the potential to be immortal. Nevertheless, this is only possible with good maintenance, including regular updates, more tests and analysis. You’ve probably seen a warning before about an application that isn’t compatible with your device. Elaborate maintenance can get rid of these alerts and keep apps compatible with any hardware.

This principle is a bit controversial, as not all teams or developers want to spend time on product compatibility with everything. However, you should focus on maintaining fresh code to allow your software to work on new devices. Thus, your product will meet the needs of more customers and help old applications remain useful.

  5. Analysis

Apart from the pre-launch evaluation conducted by QA engineers and dedicated software developers, let me suggest you focus on performance analysis post-launch. Even the most elaborate code that results in a seemingly perfect match with your client’s needs isn’t guaranteed to work properly. There are a number of factors that can affect these results. Ideally, you’d have an in-house analytics department to evaluate your numbers, but outsourced specialists will also work.

Methodologies and best practices

Apart from the aforementioned approaches, there are some other software development best practices to consider. Minor principles such as these can play a role in a successful deployment:

  • Agile: This approach can help optimize your work. It is based on several development iterations that involve constant testing and result evaluation
  • Repositories: Platforms such as Git are helpful to track versions, move back to previous iterations, synchronize work and merge changes
  • Accuracy over speed: Focus on correct code instead of fast code. It is easier to speed up correct code later than to rewrite everything
  • Experience sharing: Consider exchanging ideas and results with other developers to get external reviews, if your project isn’t confidential

Finally, let me propose a somewhat paradoxical statement: you don’t have to blindly follow best practices all the time. Time-proven ideas work fine for traditional processes when developers want to create common software without unique features.

But game-changing apps and innovative projects require fresh thinking. Surely, these software development best practices are fairly obvious and cover only the basics, but it’s better to find or build a software development team with the right balance between proven market approaches and new ideas.

March 26, 2019  8:38 PM

How to learn new technology in a corporate environment

BobReselman BobReselman Profile: BobReselman

Here’s how it usually goes when it comes to technical training in a corporate environment. A company decides to implement a new technology. The powers-that-be look around to determine if the IT staff has the knowledge and skills necessary to adopt the technology in question. If the determination is found wanting, management will usually decide to hire a training company to deliver an intensive training session on the technology. The length of the session typically runs three days to a week, but never longer.

The company sends the employees to the training. The employees get trained. The technology gets implemented.



And, it’s wrong in so many ways. Allow me to elaborate on how to learn new technology in the corporate world.

Training vs. education

The first and foremost wrongness about the situation described above is that the thing that’s being called technical training isn’t really about training at all. Training is the process of instilling behavior in a subject in response to an event or expectation. Taking the Pavlovian approach, you can train a dog to salivate upon hearing a bell ring. Those of us with kids have gone through the whole process of potty training: getting the child to notify you when the urge strikes.

More advanced training, such as a teenager learning to drive or a pilot landing on an aircraft carrier, requires more skill and attention, but the end goal remains the same.

Training might be all well and good when it comes to driving or landing a plane, where the goal, and the process to achieve it, are both well known. But when it comes to how to learn new technology, the notion of training doesn’t fully apply.

You can’t train a deployment engineer to create and maintain an efficient Kubernetes cluster any more than you can train a chef to create a dish worthy of a 3-star Michelin rating. The process that gets this to happen is something different altogether. It’s called education.

Most tech requires education

Education takes place on a much broader cognitive landscape than training. Most training is confined to the lower end of the cognitive hierarchy. Education goes deeper, and targets advanced thinking and abstraction. Education wants you to acquire the skills and knowledge necessary to create new ideas and adapt to unusual circumstances.

You cannot train your way to innovation. Training, by nature, is not that concerned with creativity or cleverness. Education, on the other hand, is. Thus, when you take a look at what really matters in IT (creativity, innovation and efficiency), education becomes paramount.

Activities and tasks where IT staff can be trained are just candidates for the next round of automation. If you want your staff to be viable in modern IT, especially when you consider how quickly tech moves forward, then a proper education really counts.

Think about it. The term is life-long learning, not life-long training.

Retention is key

Let’s say we accept the fact that for a company to effectively adopt a new piece of technology, its employees must be educated about it, not trained. Then, one might ask, what’s so bad about employees attending week-long intensive sessions? The problem is that it’s very difficult for learners to absorb and retain new information presented in these crash courses over a short period of time. Learning the information is one thing; retaining it is another.

You can be subjected to an intensive class that covers all aspects of a new technology and might even quickly get the hang of the tech. But to retain what you’ve learned, you need to use it every day or it will all slip away. For the process to be effective, it must be continuous.

Sadly, many companies don’t plan properly. Employees will be sent out for training over the course of a week, with no scheduled follow-ups to monitor progress. Some employees might be assigned to immediately use the new technology. Others have to wait months to get a shot at it. By that time, everything they learned will be lost.

Is there a better way for companies to get the most bang for the “training” bucks? Yes, there is.

Save money, but keep your employees educated

The week-long, intensive training session has been a conventional standard in the corporate education playbook for years. But does it work? Without some hard data in front of me, it’s hard to say. Nonetheless, maybe it’s time for companies to reconsider its effectiveness.

Now, please know that I say this with some hesitation, because I make a portion of my living from these intensive classes. I will say in my defense, though, that I’ve cut back on this work since I came to the realization that there’s a better way.

So, what is this better way to learn new technology?

You first need to realize that most IT employees worth their salt are pretty good at learning new technologies on their own. They know how they learn, what books to read and which YouTube videos to watch, and they have a structured mindset.

You’ll also need to realize that all good things take time. This is important, so let me say it again: all good things take time.

There are a limited number of people that can quickly acquire a long-term understanding of a new technology through various means. Most of us require a good deal of ongoing daily exposure and practice with the tech to get good at it.

As a result, I find that the better way to conduct technical education in a corporate environment is to provide employees with the time they need to get competent with the tech at hand.

If an employee is an accomplished self-learner, give him or her the time to dabble. If an employee needs a structured learning experience, run a one-day intensive basics class followed by once-a-week sessions that take place over an extended timespan, say three months. These sessions can be led by a third-party expert or by someone in-house. The most important thing is that employees get to work with the tech in a consistent, continuous manner over a long period of time so that they properly retain the information.

The choice is yours. You can continue to send employees to one-week intensive classes with the expectation they will learn everything they need to know for their positions.

Or, you can go with a cost-effective approach that gives employees the time to get a firm grasp on the tech at hand.

Me? I’ll go with the wise spend every time.

March 18, 2019  3:31 PM

How Instacart works around buggy Elasticsearch queries

George Lawton Profile: George Lawton

Enterprises that use Elasticsearch to find dynamic information in other apps are struggling to identify errant code that stalls enterprise apps. In theory, application performance monitoring tools should help. But, it wasn’t enough for Instacart to identify the queries that consistently created problems for their consumers and shoppers, said John Meagher, senior software engineer, search infrastructure at Instacart.

Simply scaling up their Elasticsearch instance didn’t solve the problem, so Meagher set out to determine what was responsible for the performance issues. As it turned out, a small number of poorly coded Elasticsearch queries were responsible for most of the trouble. Once they found a better way to monitor queries, those errors were reduced by 90% and a lot of other problems went away too.

Building a digital grocer

A key element of Instacart’s business was the creation of the world’s largest, constantly updated digital catalog of grocery items. Consumers can access the catalog through mobile and web apps when they order food from one or more stores. It’s also used to guide the shoppers who purchase food on behalf of consumers through store aisles. The app needs to present a different view of the information to customers and shoppers.

Elasticsearch sits at the core of this whole process. It makes it easy to surface a dynamic view of the available food and presents different options for consumers and shoppers. Instacart standardized its catalog management on top of Elasticsearch because it’s highly scalable in a way that makes it easy to update items and their associated information. For example, they wanted a platform that allows one team to update the nutritional information and description of a product, while also allowing a store to update its inventory.

Elasticsearch makes it easy for developers to code logic that dynamically aggregates and generates information in response to complex queries on the fly. However, when Elasticsearch goes down, everyone else’s services do too. The catalog features almost 600 million items that are updated about 750 times per second. Elasticsearch’s distributed architecture makes it easier to spread queries across clusters. As a result, each cluster only has to handle about 500 queries per second, while the entire infrastructure handles about 15,000 queries per second.

The pain of buggy Elasticsearch queries

Instacart’s main application includes some outdated code from its founding along with code from new developers. As a result, it’s hard to find the buggy code when a problem emerges. “It feels like we are trying to find a needle in the haystack when looking for what is causing a problem in a cluster, only it’s worse. It’s more like trying to find one needle in a pile of other needles,” Meagher said.

In early 2018, Instacart would see tens of thousands of timeout errors per day. Many components of the Instacart app wouldn’t wait for Elasticsearch queries to come back, and they’d time out early. Some particularly bad queries would see as low as a 10% success rate, and a few had a 0% success rate. The site would often go down on weekends during peak shopping periods and cause major issues for the app.

The biggest problem with these code issues was a lack of visibility. Instacart developers could see bulk aggregated errors and latency, but couldn’t get proper visibility into the code that caused the problem. In most cases, Instacart staff would just get reports that Elasticsearch was slow, but they couldn’t tell whether their Elasticsearch infrastructure was actually at fault. It was also challenging to see whether specific queries or APIs were behind the problems.

Create a bigger picture

Instacart had a variety of tools that each provided part of the big picture, but not the whole thing. They used Kibana to visualize cluster performance, New Relic and other APM tools to track app performance, and error reporting tools to look at raw logs. For example, when a bad query hit, it would jam up the queue and all the other queries would slow down. It was hard to find the one at the root of the problem.

Meagher led the development of a new type of Elasticsearch monitoring tool, called ESHero, to make it easier to diagnose which queries would create bottlenecks. The tool’s main insight was to aggregate information across the server applications that were clients of the Elasticsearch cluster.

They used a collection of Ruby applications that ran on each application server, pulled the data into a central repository and then used machine learning to make sense of it. The tool provided a way to instrument all the calls to the Elasticsearch cluster, and the collected data could be further explored via Elasticsearch queries.

An important element of ESHero was finding a way to identify particular queries. The challenge was that each query’s payload was slightly different. Meagher’s team found a way to strip out the dynamic information and replace it with a query ID associated with a specific application call. They also added other data, such as collection time and where in the code a query was called from.
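Instacart has not open-sourced ESHero, so the details of its query-identification step aren’t public. Still, the general technique of normalizing a query payload into a stable ID can be sketched. The class below is a hypothetical illustration, not Instacart’s code: it uses simple regular expressions to replace quoted values and numbers with placeholders, so that structurally identical queries normalize to the same template and therefore the same ID. A production version would need a JSON-aware pass rather than regexes.

```java
import java.util.regex.Pattern;

public class QueryFingerprint {

    // Illustrative patterns: quoted strings and bare numbers are the
    // "dynamic information" we strip out of each query payload.
    private static final Pattern STRING_VALUE = Pattern.compile("\"[^\"]*\"");
    private static final Pattern NUMBER = Pattern.compile("\\b\\d+(\\.\\d+)?\\b");

    // Collapse dynamic values so structurally identical queries
    // produce the same template string.
    public static String normalize(String queryJson) {
        String template = STRING_VALUE.matcher(queryJson).replaceAll("\"?\"");
        return NUMBER.matcher(template).replaceAll("?");
    }

    // A stable ID derived from the normalized template.
    public static int queryId(String queryJson) {
        return normalize(queryJson).hashCode();
    }

    public static void main(String[] args) {
        String q1 = "{\"match\": {\"name\": \"milk\"}, \"size\": 10}";
        String q2 = "{\"match\": {\"name\": \"eggs\"}, \"size\": 25}";
        // Both queries share the same structure, so they get the same ID.
        System.out.println(queryId(q1) == queryId(q2)); // prints "true"
    }
}
```

With IDs like this attached to each instrumented call, slow or failing requests can be grouped by query shape instead of being treated as thousands of unrelated payloads.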

Once they finished the first iteration, Meagher was surprised to find that the Elasticsearch clusters were basically healthy. The problems were mostly caused by the spillover impact of poorly coded queries.

These insights gave them a way to prioritize development on the worst-performing queries, and to think about ways to retry good ones. For example, a small number of queries dominate shopping patterns, and when these stall, so does the user experience. So the team decided to focus on aggressively retrying the stalled queries, but they found that an arbitrary number of retries is extremely dangerous. If a query experiences a 10% success rate, further retries can create more problems.
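One common way to keep retries from multiplying load on an already-struggling cluster is to cap the number of attempts and back off between them. The helper below is a generic sketch of that idea, not Instacart’s implementation; the method names, attempt counts and delays are all illustrative.

```java
import java.util.function.Supplier;

public class BoundedRetry {

    // Retry a query at most maxAttempts times, backing off between tries.
    // Capping attempts avoids retry storms, where retrying an
    // already-failing query multiplies the load on the cluster.
    public static <T> T withRetries(Supplier<T> query, int maxAttempts, long baseDelayMillis) {
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return query.get();
            } catch (RuntimeException e) {
                last = e;
                if (attempt < maxAttempts) {
                    try {
                        // Exponential backoff: base, 2x base, 4x base, ...
                        Thread.sleep(baseDelayMillis * (1L << (attempt - 1)));
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        break;
                    }
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // A fake query that fails twice and then succeeds.
        String result = withRetries(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "ok";
        }, 5, 10);
        System.out.println(result + " after " + calls[0] + " attempts"); // prints "ok after 3 attempts"
    }
}
```

The key design point is the hard cap: a query with a 10% success rate will usually exhaust its attempts and fail fast, instead of hammering the cluster indefinitely.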

After Meagher’s team identified and fixed the worst code, Instacart went from about 60,000 timeouts per day to about 2,000.

Before they started this work, the site went down almost every weekend. “Now when my partner went on paternity leave for three months, I was happy to be on call,” said Meagher. Instacart hasn’t open-sourced ESHero yet, but Meagher said he would be happy to work with others interested in deploying similar tools in their own organizations.

February 28, 2019  12:17 AM

A simple Java Supplier interface example for those new to functional programming

cameronmcnz Cameron McKenzie Profile: cameronmcnz

There are only half a dozen classes you really need to master to become competent in the world of functional programming. The java.util.function package contains well over 40 different components, but if you can garner a good understanding of consumers, predicates, functions, unary types and suppliers, knowledge of the rest of the API simply falls into place. In this functional programming tutorial, we will work through a Java Supplier interface example.

Consumer vs Supplier interfaces

Java’s functional Supplier interface can be used any time a function needs to generate a result without any data passed into it. Contrast that with Java’s functional Consumer interface, which does the opposite: the Consumer interface accepts data but returns no result to the calling program.

Java’s Consumer and Supplier interfaces are functional complements to one another. If a function needs to both accept data and return a result, use the Function interface, which combines the capabilities of both the Consumer and the Supplier.
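To make the contrast concrete, here is a minimal sketch that puts the three interfaces side by side. The class and variable names are just for illustration:

```java
import java.util.function.Consumer;
import java.util.function.Function;
import java.util.function.Supplier;

public class FunctionalContrast {
    public static void main(String[] args) {
        // Supplier: takes nothing, returns a value
        Supplier<String> greetingSupplier = () -> "Hello";

        // Consumer: takes a value, returns nothing
        Consumer<String> printer = s -> System.out.println(s);

        // Function: takes a value and returns a result
        Function<String, Integer> lengthOf = s -> s.length();

        printer.accept(greetingSupplier.get());      // prints "Hello"
        System.out.println(lengthOf.apply("Hello")); // prints 5
    }
}
```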

Before we jump into a Java Supplier interface example, it’s a good idea to first reference the JavaDoc in order to see exactly how the API designers describe how the component should be used:

Supplier JavaDoc


Type Parameters: T – the type of results supplied by this supplier

public interface Supplier<T>
This represents a supplier of results. There is no requirement that a new or distinct result be returned when the supplier is invoked.

This functional interface has a single method named get().

As you can see from the JavaDoc, a complete Supplier interface example can be coded simply by implementing the component and overriding the get() method. The only caveat is that the get() method takes no arguments and must return a valid object as a result. For this Supplier interface tutorial, we will demonstrate how the functional component works by creating a class named RandomDigitSupplier which does exactly what the name implies; it generates a random digit and returns it to the calling program. The range of the randomly generated numbers will be between zero (inclusive) and 10 (exclusive).

A simple Java Supplier interface example and test class, without Lambda expressions.

Java Supplier interface tutorial

As you can see, the code for the class that implements Java’s Supplier interface is fairly simple. The only requirements are the class declaration and the implementation of the get() method. Within the get() method, the Random class from Java’s util package is used to generate the random digit, but that’s about as complicated as this Java Supplier interface example gets.

package com.mcnz.supplier.example;

import java.util.function.Supplier;
import java.util.Random;

/* Java's Functional Supplier interface example */
class RandomDigitSupplier implements Supplier<Integer> {

  @Override
  public Integer get() {
    return new Random().nextInt(10);
  }
}
To test the Supplier interface, we simply code a for loop that creates a new RandomDigitSupplier on each iteration and prints out the random value.

/* Test class for the Java Supplier interface example */
public class SupplierExampleRunner {

  public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
      RandomDigitSupplier rds = new RandomDigitSupplier();
      int randomDigit = rds.get();
      System.out.print(randomDigit + " :: ");
    }
  }
}
A test run of the Supplier interface example generated the following results:

5 :: 4 :: 0 :: 0 :: 9 :: 0 :: 4 :: 2 :: 4 :: 5 ::

Lambda and Supplier interface example

Of course, since Supplier is a functional interface, you can eliminate much of the overhead involved with the creation of a separate class that implements the interface and codes a concrete get() method. Instead, simply provide an implementation through a Lambda expression. This approach allows all of the code above to be condensed into the following:

package com.mcnz.supplier.example;

import java.util.function.Supplier;
import java.util.Random;

/* Test class for the Java Supplier interface example */
public class SupplierExampleRunner {

  public static void main(String[] args) {
    for (int i = 0; i < 10; i++) {
      //RandomDigitSupplier rds = new RandomDigitSupplier();
      Supplier<Integer> rds = () -> new Random().nextInt(10);
      int randomDigit = rds.get();
      System.out.print(randomDigit + " :: ");
    }
  }
}
As you can see, the use of a Lambda expression greatly reduces the ceremony required to write functional code.

When to use the Java Supplier interface

At first glance, developers without any functional programming experience may wonder why a component that seems so simple and straightforward is useful. The first answer to that question lies within Java’s functional package, which defines primitive specializations such as LongSupplier, IntSupplier, DoubleSupplier and BooleanSupplier.
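These specializations work the same way as Supplier but avoid boxing by returning a primitive directly. For example, IntSupplier declares a getAsInt() method in place of get(). The dice-roll example below is just an illustration:

```java
import java.util.Random;
import java.util.function.IntSupplier;

public class PrimitiveSupplierExample {
    public static void main(String[] args) {
        // IntSupplier returns a primitive int, avoiding the
        // autoboxing overhead of a Supplier<Integer>.
        IntSupplier diceRoll = () -> new Random().nextInt(6) + 1;
        System.out.println(diceRoll.getAsInt()); // prints a value from 1 to 6
    }
}
```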

The second answer is Java’s stream package, which represents the heart and soul of functional programming in Java. For example, the of method of the Collector interface takes a supplier as an argument, as do the collect and generate methods of the Stream class. As developers get deeper into the world of functional programming in Java, they discover that the Supplier interface, along with the various classes that implement it, is peppered throughout the Java API.
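The Stream.generate method is a good illustration of a Supplier in action: it takes a Supplier and pulls values from it on demand. Reusing the random-digit idea from earlier, the sketch below builds a short list of random digits:

```java
import java.util.List;
import java.util.Random;
import java.util.function.Supplier;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class StreamSupplierExample {
    public static void main(String[] args) {
        Supplier<Integer> randomDigits = () -> new Random().nextInt(10);

        // Stream.generate pulls values from the Supplier on demand;
        // limit(5) keeps the otherwise infinite stream finite.
        List<Integer> digits = Stream.generate(randomDigits)
                                     .limit(5)
                                     .collect(Collectors.toList());
        System.out.println(digits);
    }
}
```

Without the limit(5) call, the stream would keep asking the Supplier for values forever, which is exactly why generate pairs a Supplier with a short-circuiting operation.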

Functional programming in Java is a new concept to those entrenched in traditional, server-side development. But, if developers can familiarize themselves with fundamental components such as the Supplier interface, they can begin to understand more advanced concepts and implement them into their code in a relatively easy fashion.
