Coffee Talk: Java, News, Stories and Opinions

July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer the interesting questions I actually want answered. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that, in this new age of serverless computing, cloud, containers, and a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody needs to download one and install it locally. Well, anyway, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is run on Amazon today runs as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company internal data center? It’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare metal deployments? Are people actually deploying to bare iron, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated server environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy, you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. The typical pattern for Cassandra instances on Amazon, with or without us, is that people move to the latest, greatest things Amazon offers. I think the i3 class of Amazon instances right now is the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the ones that power most servers today, whether they run bare metal or on Amazon, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here you’re talking about factors that could be 50%, 100%, or sometimes even 2 or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
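To make the idea concrete, here is the kind of plain Java loop a vectorizing JIT can compile to SIMD instructions on newer chips with no source changes. The class and method names below are just an illustrative sketch, not Azul’s code:

```java
import java.util.Arrays;

// A simple, dependency-free loop of the kind an auto-vectorizing JIT
// can compile down to SIMD instructions (e.g. AVX on newer Intel CPUs).
public class VectorizableLoop {

    // Element-wise multiply-add: out[i] = a[i] * b[i] + c[i].
    // Each iteration is independent, which is what makes the loop
    // a candidate for vectorization.
    static void multiplyAdd(float[] a, float[] b, float[] c, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] * b[i] + c[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {5f, 6f, 7f, 8f};
        float[] c = {1f, 1f, 1f, 1f};
        float[] out = new float[4];
        multiplyAdd(a, b, c, out);
        System.out.println(Arrays.toString(out)); // [6.0, 13.0, 22.0, 33.0]
    }
}
```

The source never mentions vector registers; whether the loop becomes SIMD code is entirely the JIT compiler’s decision at runtime, which is the leverage Tene is describing.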

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do the stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code and, when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collections that are above, say, half a millisecond in size. That pretty much means that everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you’re thinking about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise right when the JVM starts, ramping up to full speed.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or whether it’s when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme examples of trading where, at market open conditions, you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts rather than profile and learn and only optimize after it runs. And we do it with a very simple-to-explain technique (it’s not that simple to implement): we basically save previous run profiles and we start a run assuming, or learning from, the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run basically fast code, either from the first transaction or the tenth, rather than only from the ten-thousandth. That’s a feature in Zing we’re very proud of.
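The technique Tene describes can be caricatured in a few lines. The following is only a toy illustration of the idea of profile persistence, not Zing’s actual ReadyNow! implementation: record which methods ran hot, save that list to disk, and treat those methods as hot from the very first call of the next run instead of re-learning from thousands of slow, profiled executions:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.*;
import java.util.stream.Collectors;

// Toy sketch of profile persistence. All names and the threshold value
// are invented for illustration.
public class ProfilePersistence {
    private static final int HOT_THRESHOLD = 10_000;

    private final Map<String, Integer> callCounts = new HashMap<>();
    private final Set<String> hotFromLastRun = new HashSet<>();

    // Load the previous run's hot-method list, if one was saved.
    void loadProfile(Path profile) throws IOException {
        if (Files.exists(profile)) {
            hotFromLastRun.addAll(Files.readAllLines(profile));
        }
    }

    // A method counts as hot immediately if the last run said it was hot;
    // otherwise it must earn hotness the slow way, by call count.
    boolean isHot(String method) {
        return hotFromLastRun.contains(method)
            || callCounts.getOrDefault(method, 0) >= HOT_THRESHOLD;
    }

    void recordCall(String method) {
        callCounts.merge(method, 1, Integer::sum);
    }

    // Persist this run's hot methods for the next startup.
    void saveProfile(Path profile) throws IOException {
        List<String> hot = callCounts.entrySet().stream()
            .filter(e -> e.getValue() >= HOT_THRESHOLD)
            .map(Map.Entry::getKey)
            .collect(Collectors.toList());
        Files.write(profile, hot);
    }
}
```

A real JVM persists far richer data (branch profiles, type profiles, inlining decisions), but the shape of the idea is the same: the second run starts where the first run left off.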

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; there’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

February 13, 2019  5:37 PM

Don’t struggle to learn new programming languages

George Lawton Profile: George Lawton

Modern application developers are often tasked with learning new programming languages and patterns to improve their skills. The classic do-it-yourself approach with books or tutorial videos is great, but it still requires the developer to set up a programming environment to put that newfound knowledge to work in a more practical setting.

Fahim Ul Haq and Naeem Ul Haq created a new interactive platform called Educative that makes it easier to learn new programming language skills inside of a pre-built development environment. I caught up with Fahim Ul Haq to find out what they have discovered about how developers try to learn new languages faster.

How did you come up with the idea for Educative?

Fahim Ul Haq: The idea for Educative evolved in a couple of phases. We’re obviously developers ourselves, so we really felt the struggle of trying to update our skills with the currently existing tools.

We first dabbled with interactive learning for developers when we launched a mobile app to teach developers as a side project. The app became popular and we would sometimes receive requests from developers to create more content like this. But with our day jobs at Facebook and Microsoft, that wasn’t possible.

Then in 2014, one of the largest publishers in America approached us to write a book for software engineers, building on the app we’d developed. We wanted to create a free companion website with interactive learnings, but the publisher wasn’t interested in that. Even though they rejected the idea, it gave us the inspiration to create a platform where developers could learn interactively.

Once we started exploring the idea and talking to potential authors, we got one unanimous piece of feedback: authors liked the idea of creating interactive training for developers, but it seemed like a lot of work, compared to making a video tutorial.

So we came up with Educative: a platform that provides interactive learning for software developers, powered by an authoring platform that makes it extremely easy to create content.

How does Educative build on the work of other interactive training programs or approaches to provision fully configured training environments, like Codenvy or what Sensei has done with security training?

Ul Haq: I think all these different solutions come out of a simultaneous recognition of the same need: a developer learning resource that tracks with all the advances in technology we’re seeing today. That’s really the underlying theme here. Educative and the two tools you mentioned are applying similar approaches — but to different niches. That makes it somewhat difficult for us to build directly on their work, but we’re keeping a close eye on them and would love to see what we can learn from each other.

What are your thoughts on how to measure the results of this sort of training method to quantify the speed of learning compared to other approaches?

Ul Haq: This is something we want to do in the very near future. I think it’s just a case of the current metrics we have on demand for our product and the anecdotal evidence for its effectiveness being so strong, that we haven’t yet felt a need to objectively study its benefits. It will become more urgent as we scale up.

What have you learned about how to organize software training programs that improve the process of learning new programming languages?

Ul Haq: There are no one-size-fits-all solutions for how anyone learns anything, particularly for how developers learn to program. This applies on two levels. The first one is more obvious: trying to learn to code through videos is just frustrating for so many people. There’s an assumption that people progress more or less at the same linear pace — so going back and forth in a video, re-watching parts or skimming through parts, is just so cumbersome. This is where our platform really helps.

However, the second level is that even on our platform there’s a need for different levels of difficulty on the programming problems. For example, some people will inevitably learn quicker, and just find our practice problems too easy. That’s why we’re planning to launch adaptive learning in the future — to put such people on personalized accelerated tracks. As they answer practice problems, we’re able to get data on how they’re performing and adjust the level of problems they’re served up accordingly.

What are the biggest stumbling blocks that developers face when they learn new programming languages?

Ul Haq: I would say they’re largely the same ones anyone faces when trying to learn a new skill. It’s time-consuming and puts you out of your comfort zone. It’s so much easier to just stick with what you know and not pursue further learning. This effect is magnified in the developer world, where the resources for learning new skills are sometimes highly technical and unfriendly. Those are roadblocks that we are working to overcome with Educative.

What advice might you give prospective course authors?

Ul Haq: Just keep it simple. Teach like you talk normally. Sometimes when someone knows a subject backwards and forwards, it’s very easy to forget what it felt like to be a new learner and just start speaking in this abstract jargon. High-level programming languages get very abstract, very quickly. We encourage authors to use real-world examples to put those abstract concepts in a more easy-to-understand context. Ideally, you want your learners to not just know something, but know how to apply it.

How do you expect the kinds of interactive training tools for developers to evolve, not just for Educative, but for software development in general?

Ul Haq: I expect that interactive tools like Educative will become the new norm, not just for customers, but for corporations as well. Too many smart people are investing too much time and money for outdated methods to survive forever. I think that in the future people will be taught by machines that know exactly what kind of content it takes to keep them engaged, and serve up personalized, interactive material to optimize for their growth. It sounds scary, but it just makes too much sense to not do.

February 4, 2019  7:29 PM

A quick look at inferred types and the Java var keyword

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The biggest language change packaged with the Java 10 release, aka JDK 18.3, was the introduction of the inferred type. This addition, along with the ability to use the new ‘var’ reserved type name in your code, will have a significant impact on how programs are both read and written.

The case for the Java var keyword

Java has always had a weird syntax to declare variables. A manifest type declaration on the left side must polymorphically match up with the object type provided on the right hand side of the equation. This creates a somewhat verbose and, dare I say it, clunky syntax for what is an exceptionally common task.

Java variable declaration without the var keyword
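In place of the screenshot, a snippet like this (the variable names are illustrative) shows the traditional, pre-Java 10 form, where the type is spelled out on the left of every declaration:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class WithoutVar {
    public static void main(String[] args) {
        // The type must be written on the left, even though the
        // right-hand side already states it.
        List<String> names = new ArrayList<String>();
        Map<String, Integer> scores = new HashMap<String, Integer>();
        StringBuilder builder = new StringBuilder("Hello");

        names.add("Cameron");
        scores.put("Cameron", 1);
        System.out.println(builder.append(", Java")); // Hello, Java
    }
}
```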

A Java var keyword example

As you can see from that simple code snippet, traditionally developed Java code lends itself to verbosity. But with the use of the var reserved word and type inference, the code can be cleaned up quite a bit.

The use of Java inferred types with the var keyword.
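The same declarations rewritten with var (Java 10 or later; the names are again illustrative):

```java
import java.util.ArrayList;
import java.util.HashMap;

public class WithVar {
    public static void main(String[] args) {
        // The compiler infers each type from the right-hand side.
        var names = new ArrayList<String>();         // ArrayList<String>
        var scores = new HashMap<String, Integer>(); // HashMap<String, Integer>
        var builder = new StringBuilder("Hello");    // StringBuilder

        names.add("Cameron");
        scores.put("Cameron", 1);
        System.out.println(builder.append(", Java")); // Hello, Java
    }
}
```

Note that var works only for local variables with initializers; it cannot be used for fields, method parameters or return types.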

With this new syntax, the object type does not need to be explicitly declared on the left hand side of the initialization. Instead, the object type can simply be inferred if you look at the right hand side of the equation, thus the term inferred type. Of course, the right hand side of the equation always has the final say on what type of object is created, so this Java 10 feature doesn’t really change how the Java language works, nor will it have any impact on how code will be interpreted.

In the end, the language change simply makes Java, a language often criticized for being far too verbose, more readable.

January 31, 2019  7:49 PM

Compare, contrast your image recognition tool options

YanaYel1na Profile: YanaYel1na

If you’ve been presented with an opportunity to work with machine learning tools that offer advanced image recognition functionality, you’d be wise not to pass it up, even if you’re new to the technology. A number of high-profile tech giants have developed image recognition tools for developer use, none of which require you to build a neural network from scratch.

Here’s an overview of three mature image recognition and detection tools from those tech giants to help you choose the one that best meets your development needs.

Google Cloud Vision

With Google’s visual recognition API, you can easily add advanced computer vision functionality to your application:

  • Face, landmark and logo detection helps recognize multiple faces and related attributes such as emotions or headwear (note that facial recognition is not supported here), natural and man-made structures, as well as product logos within one picture. A user can perform image analysis on a file located in Google Cloud Storage or on the web.
  • Optical character recognition (OCR) can be used to spot and extract text within a file of various formats, from PDF and TIFF to PNG and GIF. The tool also automatically identifies a vast array of languages and can detect handwriting.
  • Label detection and content moderation allows a user to establish categories and also spot explicit material — such as adult or violent content — within an image.
  • Object localizer and image attribute functionality helps identify the exact place and type of object in an image as well as detect its general attributes such as dominant colors or cropping vertices.

After you enable the Cloud Vision API for your project, you can start to implement it in a variety of programming languages via the client libraries. The image recognition tool also offers AutoML Vision, which lets you train high-quality custom machine learning models without any prior experience.


Clarifai

Clarifai’s API is another image recognition tool that doesn’t require any machine learning knowledge prior to implementation. It can recognize images and also perform thorough video analysis.

A user can start to make image or video predictions with the Clarifai API after they specify a model. For example, if you specify the “color” model, the system will provide predictions about the dominant colors in an image. You can either use Clarifai’s pre-built models or train your own.

Clarifai video analysis processes one video frame per second, and provides a list of predicted concepts for every second of video. The user will need to input the parameter to begin, and split a video into different components if it exceeds maximum size limits.

Clarifai also offers additional tools for further experimentation and analysis. Explorer is a web application where you can introduce additional inputs, preview your applications and also create and train new models with your own images and concepts. The Model Evaluation tool can provide relevant performance metrics on custom-built models.

Amazon Rekognition

Amazon Rekognition is another image recognition tool to consider. Rekognition provides similar functionality to its counterparts, and also adds facial comparison and celebrity recognition across a variety of pre-built categories, such as entertainment, business, sports and politics.

With Rekognition Image, the service can measure the likelihood of a face appearing in multiple pictures, and also verify a user against a reference photo in near real time.

Apart from image recognition, Amazon also offers near real-time analysis of streaming video. Amazon Rekognition Video automatically extracts rich metadata and outputs it to a Kinesis data stream, which you can use to detect objects and faces, create a searchable video library and carry out content moderation.

Which tool should you choose?

Each tool provides its own set of features that can potentially meet your image recognition demands. Here is a chart that compares Cloud Vision, Clarifai and Rekognition on several important parameters.

  • Face analysis: Cloud Vision ✓, Clarifai ✓, Rekognition ✓
  • Facial recognition: Cloud Vision ✗, Clarifai ✓, Rekognition ✓
  • Object and label detection: Cloud Vision ✓, Clarifai ✓, Rekognition ✓
  • Explicit content identification: Cloud Vision ✓, Clarifai ✓, Rekognition ✓
  • Video analysis and scene recognition: Cloud Vision ✗, Clarifai ✓, Rekognition ✓
  • Activity detection: Cloud Vision ✗, Clarifai ✓, Rekognition ✓
  • Image attributes: Cloud Vision ✓, Clarifai ✓, Rekognition ✓
  • Client libraries: Cloud Vision (Python, Ruby, PHP, C#, Java, Go, Node.js, Objective-C, Swift); Clarifai (Python, Ruby, PHP, C#, Java, JavaScript, Objective-C, Haskell, R); Rekognition (Python, Ruby, PHP, Java, JavaScript, Node.js, .NET)
  • OS support: Cloud Vision (Linux, macOS, Windows, iOS, Android); Clarifai (Linux, macOS, Windows, iOS, Android, IoT); Rekognition (Linux, macOS, Windows, iOS, Android, IoT)

The image recognition tool space is crowded with tools that can potentially enhance your product. Weigh all of your options and compare their different features before you make a decision. If one of these tools doesn’t fit, consider some alternatives such as Watson Visual Recognition from IBM or Ditto Labs.

January 10, 2019  5:48 PM

Why developers don’t stay in management for IT career change

BobReselman BobReselman Profile: BobReselman

There’s a saying in the life insurance industry that goes like this:

“The minute you become successful your first inclination is to stop doing all the things that made you successful. You stop making the phone calls, you stop scheduling the sit downs.”

This dynamic isn’t confined to life insurance. It happens in IT all the time. The moment an engineer gets anywhere close to good at developing code, the next step is an IT career change to management. In fact, for many developers, it’s an aspiration. Sadly, most don’t have the skill or experience to make the leap. If I had a hundred dollars for every gifted engineer who went on to be a crappy manager, I’d be moderately well off.

Believe me, I know. I was one of those people. Why? For me it was mostly a matter of money and prestige. I succumbed to the perception of being “just a developer.” I wanted to be more than just anything. My need for status needed to be satisfied. And, I wanted the bucks. Maybe if I had better self-esteem, things would have turned out differently.

So I ended up in management, for a while anyway. What I came to realize is that most activities in middle-management are akin to sitting at the information kiosk at Grand Central Station and directing people to the right train track. But, in addition to providing the platform number and departure time, I had the added responsibility of making sure that everyone got on the train when they were supposed to.

Test the management waters

I had moderate success with my IT career change and foray into management. I actually managed to get some stuff out the door, no pun intended. But after a while I came to realize that I was happier learning new technology and making things with what I learned. That’s the good news.

The bad news is that it took me a long time to come to this point of self-awareness. The lure of the money and prestige that goes with management was hard to resist.

If I had ambitions to work in a bigger software company, maybe things would have turned out differently. Bigger companies understand that to stay viable in the long term, their culture needs to give the same status and compensation to creative talent as it does to management. For example, Microsoft has parallel career tracks for individual contributors and management. A Distinguished Engineer has the same rank and status as a Vice President. The compensation is the same too, and ranges anywhere from $900K to $1.25M a year. Google, Facebook, Apple and other large tech companies all have similar structures. It’s good work, if you can get it.

But, life at Microsoft or one of the FANGs isn’t always a true representation of life in IT. Jobs in technology are mostly about ensuring that the phones work, inventory is maintained, orders are filled and that employees and bills get paid. These companies worry more about an operational digital infrastructure than talent that ascends to the heights of Distinguished Engineer. As a result, the career path for many in IT is to end up in management.

Then one day some of these folks who made the leap to management wake up with a sense of emptiness that can’t be assuaged by directing people about. They started out with a desire to make stuff that made a difference, only to end up with HR on their backs about employee reviews. Fortunately, despite episodes of existential angst, many management skills are transferable.

Tech is different. The details count a lot. Some of those in tech that stay true to the creative imperative and continue to make stuff come to a different conclusion in their mid-life. They battle to keep up with new technologies that are, for the most part, reinventions of something that was created a decade ago and are thus unknown to the growing workforce of the twenty-somethings that fill the room. While the younger employee can pull the all-nighter, by the time you hit 45, it’s not that you can’t do the all-nighter, it’s more that you know that there will always be another one, so why set the precedent and cooperate now?

Aging managers might question the value of their work. Aging developers wonder if they made the right decision to stay with hands-on work. At the age of 45-50, many wonder if it would be better to go into management and leave the creative work to the younger generation. They fear the prospect of being old in tech. But being old in tech isn’t something to be ashamed of. It should be worn like a badge of honor. There’s something respectable about continuing to be creative and to write code while many in management have long since retired.

January 9, 2019  7:59 PM

How your team will benefit when you hire a full stack developer

charlesdearing Profile: charlesdearing

Full stack developers are some of the best developers on the market. They understand every layer of software from the back-end to the front-end. Full stack developers have a hand in every development stage, from database implementation to the finishing touches on front-end layout.

Great full stack developers can be hard to find. Developers and engineers with such broad and intimate knowledge of software processes and structures, and the vast array of skills that goes with it, are exceedingly rare.

Since full stack developers are in such high demand, they’re quite expensive to keep on your payroll. Competitive compensation packages can be difficult for small startups to offer. However, even the smallest startups should consider hiring full stack developers. They add tremendous value to the team and can provide invaluable insight into the development process.

The jack-of-all-trades

Top full stack developers understand software engineering from top to bottom and have a working knowledge of every layer. This level of understanding makes them extremely valuable to teams composed of diverse specialists. They can act as liaisons for software development teams and help teams collaborate and communicate more openly.

Full stack developers can translate areas of concern from one specialist to another, and serve as team leaders or translators for various software issues that can occur across the stack.

These IT pros have the Swiss-army knife of skill sets. They are active in every step, and that makes their input about design and implementation invaluable to the team.

Full stack developers boost productivity

The top full stack developers can help round out a team of specialists and help everyone communicate more efficiently and effectively. Team cohesiveness increases substantially, and morale gets a boost, which raises motivation, determination, and passion for the project.

Top full stack developers help the entire team become more efficient. They facilitate communication and foster collaboration, which greatly increases a team’s productivity as a whole.

It’s important to remember that great full stack developers are not only expert engineers; they are also superb team players with exceptional social skills. They speak concisely and openly, and they are eager to help other team members.

As leaders, they can help the team remember overarching goals and meet software requirements. While all full stack developers have an incredible technical ability, it only serves them if they have the soft skills to match.

Full stack developer benefits

These well-rounded professionals can be an incredible addition to any team. Startups can benefit from experienced full stack developers because of the insight, perspective, and expertise they can bring to the table.

Dedicated full stack developers can also enhance the efficiency of a software development team. Since they have a comprehensive understanding of the development process, full stack developers can help address potential blind spots and bottlenecks.

Software creation is more of an art than a science. Specialists can only write so much source code and abstract so many actions into algorithms. Team members can only communicate so much since they operate on different levels of the stack. Full stack developers can help bridge these knowledge gaps.

Software engineering is never an easy feat. However, it is one that can be enhanced by reducing waste and creating greater paths to efficiency. Full stack developers can help guide the rest of your team on this quest.

However, these people are exceedingly rare. They have an incredible skill set that allows them to understand every layer of software.

Since full stack developers possess a deep understanding of software creation, they’re in short supply and often quite expensive. Young startups low on cash might want to begin their search for an experienced full stack developer who can help make their software development teams more productive.

Companies, especially nascent startups, should consider full stack developers as well. They can help with any number of software issues, and work with front-end engineers or back-end developers to open up more direct lines of communication to boost efficiency.

When you hire full stack developers, team morale improves, productivity increases, and inefficiencies decrease. While such high salaries can seem overwhelming or risky, hiring a full stack developer is one of the best investments you can make for your company.

January 6, 2019  8:27 PM

How Atomist’s Rod Johnson works with pull requests

George Lawton Profile: George Lawton

Pull requests play an important role in any large software development project. They facilitate efficient code review, reduce bugs, track progress, and help coordinate a shared understanding of large code bases. Some type of pull request mechanism is built into most modern version control platforms, and pull requests can be integrated into a wide variety of notification systems, including email, chat, issue trackers, and project management systems. However, there is not a lot of information on best practices for making pull requests work smoothly and efficiently as part of different styles of application development workflows.

I recently caught up with Rod Johnson to learn more about best practices for getting the most out of Git and GitHub pull request workflows. Johnson created the Spring Framework, and more recently Atomist, a framework for software delivery infrastructure.

What are some of the different ways you use pull requests in your app dev workflow? i.e. what tools do you integrate to consume pull requests and facilitate better communications among your team?

We use the GitHub Review mechanism, with Atomist to facilitate communications via GitHub and Slack. You’ve forced me into a product plug here, but since we started using Atomist ourselves, we haven’t looked back!

When we raise a PR, we typically choose reviewers using the GitHub UI. We have Atomist running on our GitHub org and community Slack team, so reviewers will get direct messages in Slack to let them know. We all tend to live in Slack, so that works a lot better for us than email alerts. Atomist attaches labels to the PR. Of course, you can do that with other tools, too. I think it’s a good practice for automations to add information that helps speed review.

Atomist is based around handling events, and a PR is an important event that we can add custom code to handle at team level rather than individual repo level. This enables consistent policy.

In our teams, Atomist auto-merges PRs when reviews are complete and automatically updates the change log on merged PRs. It automatically runs code quality checks and linting on all pushes, even before a PR is raised.

I don’t think integrating a host of tools into your software configuration management makes sense. It’s better to plug in one portable thing that gives you a rich model for handling events. The same approach works on GitHub, BitBucket and GitLab. Instead of managing a number of integrations, you just integrate Atomist and use its programming model. For example, our “autofixes” are doing more and more things over time and saving us a ton of work. Initially we just did linting. Now we add missing license headers, format imports etc. And it’s easy to add further things like this without changing any GitHub setup. Our customers use this to integrate tools like SonarQube.

You can see the output of this in our community Slack.

Do you find that one style of pull request workflow works best across a single organization or are different workflows better for different types of development?

We use PRs and try to keep branches short-lived. Any branch that lives longer than a day or two is questionable IMO. The default branch should always be fairly up to date.

What would you consider to be best practices for improving the use of pull requests to speed development, improve quality, and keep teams on the same page?

I think it’s good to review PRs. It can help with quality, but it also ensures that knowledge is clearly communicated in the team and makes it easy for people outside to follow. In open source, the latter is important as you’d love to inspire and empower people to contribute. I think tone is important in reviewing PRs. I don’t believe in accepting all PRs, but it’s crucial to be clear and respectful in comments. The PR mechanism can be an amazing way of working toward a better solution than any individual envisaged when setting out.

What kind of challenges have you encountered in making pull requests work smoothly?

Every challenge we’ve had involved long-lasting PRs, where you get into rebasing hell: you need to continually rebase a long-lived branch from master, potentially resolving conflicts each time.

What do you think of useful ways of thinking about pull requests as more than just a communications feature baked in to Git, GitHub, etc. and as part of streamlining the app dev to production lifecycle?

I like it when the default branch is deployed to production and you can use PRs to drive promotion. Of course, that isn’t appropriate for all organizations–some need a separate formal promotion mechanism–but it’s great if it is possible.

January 1, 2019  3:16 PM

How to use Java’s functional Consumer interface example

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Quite often a Java Stream or other component needs an object passed to it in order to perform some type of calculation or process, but when the process is complete, nothing gets returned from the method. This is where Java’s functional Consumer interface comes in handy.

According to the JavaDoc, the Consumer interface accepts any type of object as input. The java.util.function.Consumer interface has one non-default method, named accept, which takes a single object as its argument and has a void return type.

Consumer function type parameters:
T - the type of object passed to the Consumer's accept method

Consumer function methods:
void accept(T t)  Performs an operation on the single object passed in as an argument.
default Consumer<T> andThen(Consumer<? super T> after)  Returns a composed Consumer
that can be daisy-chained in sequence.

The Consumer's non-default accept method takes a single argument and does not return a result.
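The andThen method makes it possible to run two Consumers in sequence over the same input. Here is a minimal sketch of how that chaining works; the class and variable names are my own, not from the JavaDoc:

```java
import java.util.function.Consumer;

public class AndThenExample {
  // Collects output so the order of execution is visible
  static final StringBuilder log = new StringBuilder();

  public static void main(String[] args) {
    // First consumer records the square of the number
    Consumer<Long> square = t -> log.append("Square: ").append(t * t).append("\n");
    // Second consumer records the number doubled
    Consumer<Long> twice = t -> log.append("Twice: ").append(t * 2).append("\n");

    // andThen chains the two consumers: square runs first, then twice,
    // and each one receives the same argument
    square.andThen(twice).accept(5L);

    System.out.print(log); // Square: 25, then Twice: 10
  }
}
```

Note that andThen does not pass the first Consumer's result to the second; both consumers see the original argument.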

Functional programming with a Java Consumer

Sometimes programmers new to lambdas and streams get intimidated by the components defined in the java.util.function package, so I always like to remind developers that all of the interfaces defined in this package follow the standard, pre-Java 8 rules for implementing interfaces. As such, you can incorporate the functional Consumer interface into your code simply by creating a class that implements java.util.function.Consumer, or by coding an inner class.

Consumer example tutorial

Here is the Java code used in this Consumer function example tutorial.

Implement a Consumer with a Java class

Here is the Java Consumer function implemented using a Java class instead of a lambda expression:

class SimpleConsumerExample implements Consumer<Long> {
  public void accept(Long t) {
    System.out.println(t*t);
  }
}

Inside of a main method or any other piece of Java code, the SimpleConsumerExample class can be instantiated according to traditional Java syntax rules:

/* Java Consumer example using a class */
SimpleConsumerExample sce = new SimpleConsumerExample();
sce.accept(new Long(2));

Similarly, an inner class can also be used:

/* Functional Consumer example using inner class */
Consumer<Long> innerConsumer = new Consumer<Long>() {
  public void accept(Long t) {
    System.out.println(t*t);
  }
};

innerConsumer.accept(new Long(4));

Lambda and Consumer interface example

As you can see, there is nothing special about the interfaces defined in the java.util.function package. They are regular Java interfaces that comply with all of the traditional rules of syntax. However, they also work with lambda expressions, which is where functional interfaces really shine. Here is the functional Consumer interface example implemented using a somewhat verbose lambda expression:

Consumer<Long> lambdaConsumer = (Long t) -> System.out.println(t*t);
lambdaConsumer.accept(new Long(5));

I like to use a verbose lambda syntax when demonstrating how they work, but one of the reasons for using lambda expressions is to make Java less verbose. So the lambda expression above can be written in a much more concise manner:

Consumer<Long> conciseLambda = t -> System.out.println(t*t);
conciseLambda.accept(new Long(10));

Sample Consumer interface use cases

The functional Consumer interface is used extensively across the Java API, with a number of interfaces in the java.util.function package, such as ObjIntConsumer, BiConsumer and IntConsumer, providing extended support to the basic interface.

Furthermore, a variety of methods in the Java Stream API take the functional Consumer interface as an argument, including methods such as forEach, forEachOrdered and peek.
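To make that concrete, here is a small sketch showing a Consumer handed to both peek and forEach in a stream pipeline; the class and method names are my own:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.stream.Collectors;

public class StreamConsumerExample {
  // Records every element the peek Consumer observes
  static final StringBuilder seen = new StringBuilder();

  static List<Integer> squares() {
    // peek hands each element to a Consumer without changing the stream
    Consumer<Integer> observer = n -> seen.append(n).append(' ');
    return List.of(2, 3, 4).stream()
        .peek(observer)               // observer sees 2, 3, 4
        .map(n -> n * n)
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // forEach is a terminal operation that also takes a Consumer
    squares().forEach(n -> System.out.println(n));
  }
}
```

The same accept logic applies in both spots: the stream calls your Consumer once per element and ignores any return value.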

There are only a few key interfaces you need to master in order to become a competent functional programmer. If you understand the concepts laid out in this functional Consumer interface example, you’re well on your way to mastering the updated Java APIs.

Consumer tutorial code

Here is the code used in this tutorial on how to use the Consumer.

package com.mcnz.lambda;
import java.util.function.*;

public class JavaConsumerExample {

  public static void main (String args[]) {

    /* Java Consumer example using a class */
    SimpleConsumerExample sce = new SimpleConsumerExample();
    sce.accept(new Long(2));

    /* Functional Consumer example using inner class */
    Consumer<Long> innerConsumer = new Consumer<Long>() {
      public void accept(Long t) {
        System.out.println(t*t);
      }
    };
    innerConsumer.accept(new Long(4));

    /* Implemented Consumer function with verbose lambda expression */
    Consumer<Long> lambdaConsumer = (Long t) -> System.out.println(t*t);
    lambdaConsumer.accept(new Long(5));

    /* Concise lambda and Consumer function example */
    Consumer<Long> conciseLambda = t -> System.out.println(t*t);
    conciseLambda.accept(new Long(10));
  }
}

/* Class implementing functional Consumer example */
class SimpleConsumerExample implements Consumer<Long> {
  public void accept(Long t) {
    System.out.println(t*t);
  }
}



December 13, 2018  1:30 PM

Learn Java lambda syntax quickly with these examples

cameronmcnz Cameron McKenzie Profile: cameronmcnz

For those who are new to functional programming, basic Java lambda syntax can be a bit intimidating at first. Once you break lambda expressions down into their component parts, though, the syntax quickly makes sense and becomes quite natural.

The goal of a lambda expression in Java is to implement a single method. All Java methods have an argument list and a body, so it should come as no surprise that these two elements are an important part of Java lambda syntax. Furthermore, the Java lambda syntax separates these two elements with an arrow. So to learn Java lambda syntax, you need to be familiar with its three component parts:

  1. The argument list
  2. The arrow
  3. The method body

To apply these concepts, we first need a functional interface. A functional interface is an interface that defines exactly one abstract method that must be implemented. Here is the functional interface we will use for this example:

interface SingleArgument {
   public void foo(String s);
}

An implementation of this method requires a String to be passed in and a body that performs some logic on the String. We will break it down into its constituent elements in a moment, but for now, here’s a very basic example in which a lambda provides an implementation to the SingleArgument interface, along with a couple of invocations of the interface’s foo method:

SingleArgument sa1 = n -> System.out.print(n);
sa1.foo("Let us all");
sa1.foo(" learn lambda syntax");

The following is a complete class implementing this logic:

package com.mcnz.lambda;

public class LearnJavaLambdaSyntax {
   public static void main(String args[]) {
      SingleArgument sa1 = n -> System.out.print(n);
      sa1.foo("Let us all");
      sa1.foo(" learn lambda syntax");
   }
}

interface SingleArgument {
   public void foo(String s);
}

Concise and verbose Java lambda syntax

The implementation demonstrated here is highly abbreviated. This can sometimes make it a bit difficult for newcomers to learn Java lambda syntax. It is sometimes helpful, then, to add a bit more ceremony to the code. One enhancement that can make it easier to learn Java lambda syntax is to put round brackets around the method signature and include type declarations on the left-hand side:

SingleArgument sa2 =  (String n) -> System.out.print(n) ;

Furthermore, you can put curly braces around the content on the right-hand side and end each statement with a semi-colon.

SingleArgument sa3 =  (String n) -> { System.out.print(n); } ;

Compare these different approaches to learn Java lambda syntax.

Multi-line lambda expression syntax

In fact, if your method implementation has more than a single statement, semi-colons and curly braces become a requirement. For example, if we wanted to use a regular expression to strip out all of the whitespace before printing a given piece of text, our Java lambda syntax would look like this:

(String n) -> {
    n = n.replaceAll("\\s","");
    System.out.print(n);
}

Multi-argument lambda functions

In this example, the method in the functional interface has only one argument, but multiple arguments are completely valid, so long as the number of arguments in the lambda expression matches the number in the method of the functional interface. And since Java is a strongly typed language, the object types must be a polymorphic match as well.

Take the following functional interface as an example:

interface MultipleArguments {
   public void bar(String s, int i);
}

The highly ceremonial Java lambda syntax for implementing this functional interface is as follows:

MultipleArguments ma1 = (String p, int x) -> {
   System.out.printf("%s wants %s slices of pie.\n", p, x);
};

As you can see, this lambda expression leverages multiple arguments, not just one.

I described this example as being highly ceremonial because we can significantly reduce its verbosity. We can remove the type declarations on the left, and we can remove the curly braces and the semi-colon on the right, since there is only one instruction in the method implementation. A more concise use of Java lambda syntax is as follows:

( p, x ) -> System.out.printf ( "%s wants %s slices.\n", p, x )

As you can see, Java lambda syntax is quite a bit different from anything traditional JDK developers are used to, but at the same time, when you break it down, it’s easy to see how all the pieces fit together. With a bit of practice, developers quickly learn to love Java lambda syntax.

Here is the full listing of code used in this example:

package com.mcnz.lambda;

public class LearnJavaLambdaSyntax {
  public static void main(String args[]) {
    SingleArgument sa1 = n -> System.out.print(n);
    sa1.foo("Let us all ");
    sa1.foo("learn Java lambda syntax.\n");

    SingleArgument sa2 = (String n) -> System.out.print(n);
    sa2.foo("Java lambda syntax ");
    sa2.foo("isn't hard.\n");

    SingleArgument sa3 = (String n) -> { System.out.print(n); };
    sa3.foo("You just need a few ");
    sa3.foo("good Java lambda examples.\n");

    SingleArgument sa4 = (String n) -> {
      n = n.replaceAll("\\s","");
      System.out.print(n);
    };
    sa4.foo("This Java lambda example ");
    sa4.foo("will not print with whitespace.\n");

    MultipleArguments ma1 = (String p, int x) -> {
      System.out.printf("%s1 wants %s2 slices of pie.\n", p, x);
    };
    ma1.bar("Cameron ", 3);
    ma1.bar("Callie", 4);

    MultipleArguments ma2 =
      ( p, x ) -> System.out.printf ( "%s1 wants %s2 slices.\n", p, x );
    ma2.bar("Brandyn", 1);
    ma2.bar("Carter", 2);
  }
}

interface SingleArgument {
  public void foo(String s);
}

interface MultipleArguments {
  public void bar(String s, int i);
}

When this Java lambda syntax example runs, the full printout is:

Let us all learn Java lambda syntax.
Java lambda syntax isn't hard.
You just need a few good Java lambda examples.
ThisJavalambdaexamplewillnotprintwithwhitespace.Cameron 1 wants 32 slices of pie.
Callie1 wants 42 slices of pie.
Brandyn1 wants 12 slices.
Carter1 wants 22 slices.

You can find the source code used in this tutorial on GitHub.

December 1, 2018  11:33 PM

What is a lambda expression and from where did the term ‘lambda’ elute?

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Due to various language constraints, lambda expressions had, until recently, never made it into the Java language. The concept had long been baked into other languages, such as Groovy and Ruby. That all changed with Java 8. As organizations slowly move to Java 8 and Java 11 platforms, more and more developers are getting the opportunity to use lambda expressions — or lambda functions, as they’re also called — in their code. This has generated a great deal of excitement but also confusion. Many developers have questions. So, why is this new language feature called a lambda function?

Why are they called ‘lambda functions?’

The term lambda function actually has its roots in the lambda calculus, a formal system of mathematical logic developed by Alonzo Church in the 1930s. In mathematics, a lambda function is one in which a single set of assigned variables is mapped to a calculation. Here are a few algebraic lambda functions. Anyone who took high school math should recognize them.

(x) = x²
(x, y) = x + y
(x, y, z) = x³ - y² + z

For the first equation, if x is 3, the function would evaluate to 9. If x and y are both 2 for the second function, the result is 4. If x, y and z are 1, 2 and 3 in the third function, the calculated result is zero.

As you can see, a single set of variables is mapped onto a function, which generates a result. The parallel in computer science is to take a set of variables and map those variables to a single function. Let’s place extra emphasis on the word single. Lambdas work when there is only a single function to implement. The concept of a lambda completely falls apart in computer science when multiple methods get thrown into the mix.

The anonymous nature of lambda functions

A second point worth mentioning is that lambda functions are anonymous and unnamed. That’s not an obvious point when dealing with mathematical functions, but if you look at the first function listed earlier, the following was sufficient to explain everything that was going on:

(x) = x²

There was no need to give the function a name, such as:

basicParabola (x) = x²

In this sense, lambda functions are unnamed and anonymous.
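The same anonymity carries over to code: a lambda can be handed directly to a method without ever being assigned a name. Here is a quick Java sketch of the idea; the class, method, and list contents are my own invention for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class AnonymousLambdaExample {
  // Builds an upper-cased copy of the list using an unnamed lambda
  static List<String> shout(List<String> words) {
    List<String> out = new ArrayList<>();
    // The lambda handed to forEach is never given a name --
    // it is anonymous, just like the unnamed function (x) = x²
    words.forEach(w -> out.add(w.toUpperCase()));
    return out;
  }

  public static void main(String[] args) {
    System.out.println(shout(List.of("alpha", "beta")));
  }
}
```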

Lambda functions in Java

This discussion on the etymology of lambda functions is interesting, but the real question is how these mathematical concepts translate into Java.


An example of a lambda function in a Java program.

In Java, there are many, many places in which a piece of code needs a single method implementation, and there are many interfaces in the Java API where only a single method needs to be implemented. Also known as functional interfaces, commonly used single-method interfaces include Flushable, Runnable, Callable, Comparator, ActionListener, FileFilter, XAConnection and RowSetWriter. Using any of these interfaces in Java can be somewhat cumbersome. For example, Comparator is a functional interface that allows you to rank objects for easy sorting. Code to sort an array prior to Java 8 would look something like this:

Integer[] numbers = {5, 12, 11, 7};
Arrays.sort(numbers, new Comparator<Integer>() {
   public int compare(Integer a, Integer b) {
      return b - a;
   }
});
When you use a lambda function, the verbosity goes away, and the result is this:

Integer[] numbers = {5, 12, 11, 7};
Arrays.sort(numbers, (a, b) -> b-a);

An implementation of the Comparator interface with and without a lambda function.

When you first learn to use lambda expressions, sometimes it’s easier to assign the lambda expression to the interface it’s implementing in a separate step. The prior example might read a bit clearer if coded with an additional line:

Integer[] numbers = {5, 12, 11, 7};
Comparator<Integer> theComparator = (a, b) -> b - a;
Arrays.sort(numbers, theComparator); 
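One caveat worth knowing: the b - a trick can overflow for extreme int values, and the JDK ships a built-in comparator that sorts in descending order without that risk. A small sketch, with a class and method name of my own choosing:

```java
import java.util.Arrays;
import java.util.Comparator;

public class ReverseSortExample {
  // Sorts descending with the JDK's built-in reverse-order comparator,
  // avoiding the integer-overflow risk of (a, b) -> b - a
  static Integer[] sortDescending(Integer[] numbers) {
    Arrays.sort(numbers, Comparator.reverseOrder());
    return numbers;
  }

  public static void main(String[] args) {
    Integer[] numbers = {5, 12, 11, 7};
    System.out.println(Arrays.toString(sortDescending(numbers)));
  }
}
```

Both versions produce the same ordering for small values; the built-in comparator is simply the safer general-purpose choice.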

As you can see, lambda expressions make code much more concise. They make code easier to write and maintain and have relatively few drawbacks. The main drawback is that the syntax may seem somewhat cryptic to new users. After a little bit of time with lambdas, though, it becomes natural, and developers wonder how they ever managed to write code without them.

The code for these Lambda expressions in Java example can be found on GitHub.

November 30, 2018  5:21 PM

DeepCode and AI tools poised to revolutionize static code analysis

George Lawton Profile: George Lawton

Developers use static analysis tools to identify problems in their code sooner in the development lifecycle. However, the overall architecture of these tools has only changed incrementally with the addition of new rules crafted by experts. Researchers are now, though, starting to use AI to automatically generate much more elaborate rule sets for parsing code. This can help identify more problems earlier in the lifecycle and provide better feedback.

Some companies, like the game maker Ubisoft, are already working on these kinds of tools internally. A team of researchers at ETH Zurich is now making a similar AI tool available for mainstream adoption, called DeepCode. It analyzes Java, JavaScript and Python code using about 250,000 rules compared to about 4,000 for traditional static analyzer tools. We caught up with Boris Paskalev, CEO at DeepCode, to find out how this works and what’s next.

What experiences and related work informed your decision in using AI to improve software development?

Boris Paskalev: The idea for using AI to improve software came from longer-term research done at the Secure, Reliable, and Intelligent Systems Lab in the Department of Computer Science at ETH Zurich. Over a period of several years, we explored a number of concepts, built several research systems based on them (some of which are widely used), and received various awards. We observed the enormous impact our technology could have on software construction. As a result, we started DeepCode with the vision of pushing the limits of these AI techniques and bringing those benefits to every software developer worldwide.

How does DeepCode compare with other approaches like static or dynamic analysis in terms of usage, performance, or the kinds of problems it can identify?

Paskalev: DeepCode relies on a creative and non-trivial combination of static analysis and custom machine learning algorithms. Unlike traditional static analysis, it does not rely on manually hardcoded rules, but learns these automatically from data and uses them to analyze your program. This concept of never-ending learning enables the system to constantly improve with more data, without supervision.

DeepCode also enables analysis with zero configuration, which means one can simply point a repository at DeepCode and get results several seconds later, without the need to compile the program or locate all external code. These features are especially desirable in an enterprise setting, where running the code via dynamic analysis or performing standard static analysis can be very time-consuming and difficult.

How does DeepCode fit into the developer workflow, and how does this contrast with other approaches for finding similar bugs, such as identifying a problem in QA or after code is released?

Paskalev: Currently, we optimized DeepCode to report issues at code review time, as this is a serious pain point in the software creation lifecycle. However, it is possible to integrate DeepCode at any step of the lifecycle.

How does DeepCode compare, contrast, and complement JSNice, Nice2Predict, and DeGuard?

Paskalev: JSNice and DeGuard are systems we created which target the specific problem of code layout deobfuscation. DeepCode is a more general system which aims to automatically find a wide range of issues in code. This makes DeepCode applicable not only when trying to understand someone else’s code (e.g. to audit it for security), but also when writing and committing new code.

What other research on using AI to explore bugs have you come across, and how does DeepCode compare and contrast with these?

Paskalev: The field of using AI for code is fairly new but growing. However, we are currently not aware of any system with the capabilities of DeepCode. Unlike other systems that try to use AI methods directly over code, DeepCode is based on AI that is actually able to learn interpretable rules. This means the rules can be examined by a human and easily integrated into an analyzer.

Can you say more about the process of parsing code with the AI tools and building up the rule-set? What kinds of AI or other analytics techniques are used?

Paskalev: DeepCode is based on custom AI and semantic analysis techniques specifically designed to learn rules and other information from code, as opposed to other data (e.g., images, videos) which are less effective when dealing with code.

How do you go about classifying code as a mistake?

Paskalev: Our AI engine learns rules based on patterns that others have fixed in the past and understands what problem each fix addressed based on the commit messages and bug databases. Then, it uses the learned rules to analyze your code; any rules that trigger are reported to the developer.

What have you learned about making recommendations for fixing bugs?

Paskalev: We learned that simply localizing the bug is not enough. The real challenge is to explain the issue and provide actionable feedback on what the problem actually is. DeepCode connects the report to how others have fixed a similar issue, which is an important step toward that goal.

What languages does it support now, and what is involved in adding support for new ones?

Paskalev: Currently, DeepCode supports Java, JavaScript, and Python. Adding a language requires adding a parser and extending our semantic code analyzer to handle special features of the language. Because of the particular way DeepCode is architected, we can add a language every few months.

How does DeepCode differ from traditional static analysis tools?

Paskalev: Static analysis tools available out there often come with a set of hardcoded rules that aim to capture what is considered “bad” in code. Then, they detect violations of these rules in your code. Over the last decade, many companies have created such tools, e.g. Coverity, Grammatech, JetBrains, SonarSource, and others. That type of approach typically yields one to a few thousand rules across tens of programming languages.

250,000 rules seems like a lot compared to 4,000. Is it the case that DeepCode can identify more types of problems, or that it can provide greater granularity in identifying how to rectify an issue, or perhaps a little bit of both?

Paskalev: We identify many more types of issues than existing hardcoded-rule analyzers cover. We also provide a more detailed explanation of what the issue is and how others have fixed a similar problem. This enables users to figure out more quickly which fix they should apply.

What categories of problems does it identify now – is it just different categories of bugs or can it find opportunities for performance improvement?

Paskalev: DeepCode finds bugs, security issues, possible performance improvements and also code style issues. We learn these from commits in open source code and we use natural language processing to understand the issue that the commits fix.

Can DeepCode be used for code or architecture refactoring? Is that something you are looking at doing in the future?

Paskalev: Some of our suggestions do indeed propose refactoring of code, but not yet at a project-wide architectural level. Our platform’s utility is to enable any service that requires a deep understanding of your code to be quickly and easily created. We are already scoping the launch of several exciting services that some of our early adopters have asked for.

How do you expect the use and technology of DeepCode to evolve and the use of AI as part of improving developer workflow in general?

Paskalev: Our platform is constantly getting better. This will enable developers to work on much larger projects and scopes with the same or smaller effort, while minimizing the risk of defects and costly production problems.
