Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But  with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that, in this new age of serverless computing, cloud and containers, a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyways, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company internal data center? It’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron or bare metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon. So for example, Cassandra is one of the workloads that fits very well with Zing and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy, you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. For the typical instances that people use for Cassandra on Amazon, with or without us, we tend to see that they’ll move to the latest, greatest things that Amazon offers. I think the i3 class of Amazon instances right now is the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, the RAS language from Azul is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones that run, whether on bare metal or, as is mostly the case, on Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes even 2 or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
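The loops in question are ordinary counted loops over primitive arrays. A minimal sketch of the shape of code a vectorizing JIT can target (the class and method names here are my own, purely illustrative):

```java
public class VectorizableLoop {
    // An ordinary counted loop over primitive arrays: exactly the shape a
    // JIT compiler with vector support can turn into SIMD instructions.
    static void scaleAndAdd(float[] dst, float[] a, float[] b, float k) {
        for (int i = 0; i < dst.length; i++) {
            dst[i] = a[i] * k + b[i]; // no branches, no data-dependent control flow
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {10f, 10f, 10f, 10f};
        float[] dst = new float[4];
        scaleAndAdd(dst, a, b, 2f);
        System.out.println(java.util.Arrays.toString(dst)); // [12.0, 14.0, 16.0, 18.0]
    }
}
```

Nothing in the source changes between processor generations; a vectorizing compiler simply emits wider SIMD instructions for the loop body when the hardware supports them.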

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do this stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code and, when you put it on new hardware, it’s not just a better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise in speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted-Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile and learn and only optimize after it runs. And we do it with a technique that is very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run fast code from the first transaction, or the tenth transaction, but not the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.
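Zing’s actual profile persistence is internal to the JVM, but the idea of seeding a run with the previous run’s learning can be sketched in plain Java. Everything here, the file format and the ProfileCache class, is my own invention for illustration, not Zing’s implementation:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class ProfileCache {
    // Illustrative only: persist the names of "hot" methods observed in one
    // run so the next run can treat them as hot immediately, instead of
    // re-learning the profile from scratch.
    static void save(Path file, Set<String> hotMethods) {
        try {
            Files.write(file, hotMethods);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    static Set<String> load(Path file) {
        if (!Files.exists(file)) {
            return new HashSet<>(); // first run ever: nothing to seed from
        }
        try {
            return new HashSet<>(Files.readAllLines(file));
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path profile = Paths.get("readynow-sketch.txt");
        save(profile, new HashSet<>(Arrays.asList("Trade.execute", "Quote.price")));
        // A "later run" starts with the previous run's knowledge already loaded.
        System.out.println(load(profile).contains("Trade.execute")); // true
    }
}
```

The point of the sketch is only the warm-start shape: learn once, persist, and begin the next run from the saved state rather than from zero.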

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could over time affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

June 13, 2018  9:20 PM

JPA and Hibernate enum mapping with annotations and the hbm.xml file

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Just how hard is it to perform a JPA or Hibernate enum mapping with either annotations or an hbm.xml file? It’s actually not that hard at all. In fact, you don’t necessarily have to perform either of the aforementioned options.

JPA and Hibernate enum mapping

The Java enum, introduced in Java 5, will map to the underlying database without any intervention. The JPA framework, be it Hibernate, TopLink or DataNucleus, will recognize the Java enum and subsequently read and write the enum’s state to and from the database. So it doesn’t require any additional annotations or coding in an hbm.xml file.

Whenever I need to prove out a concept, I always like to code up a little rock-paper-scissors program. In this use case, we can represent a competitor’s chosen gesture in the game as a Java enum:

package com.mcnz.rps;

public enum Gesture {
	ROCK, PAPER, SCISSORS;
}

Working with persistent entities

So long as any persistent object in the problem domain is decorated with the requisite @Entity and @Id annotations, the Java enum database mapping will proceed without error. The following is a JPA annotated entity that uses the Java enum named Gesture:

/* JPA & Hibernate enum mapping example */
package com.mcnz.rps;
import javax.persistence.*;

@Entity
public class GameSummary {
	@Id
	@GeneratedValue
	private Long id;
	private Gesture clientGesture;
	private Gesture serverGesture;
	private String result;
	private java.util.Date date = new java.util.Date();

	public GameSummary(){}

	public GameSummary(Gesture clientGesture, Gesture serverGesture) {
		this.clientGesture = clientGesture;
		this.serverGesture = serverGesture;
	}

	public void setResult(String result) { this.result = result; }
}
/* End of Hibernate and JPA enum mapping example */

JPA and Java enum persistence

Now that we’ve created the Java enum and coded the JPA or Hibernate entity, all we need to do is give the JPA EntityManager or the Hibernate Session some attention and database persistence should be a lead-pipe cinch.

/* Entity that has a JPA/Hibernate mapped enum */
GameSummary gs = new GameSummary(Gesture.PAPER, Gesture.ROCK);
gs.setResult("win");

/* Persisting the JPA/Hibernate mapped enum */
EntityManagerFactory emf = Persistence.createEntityManagerFactory("jpa-enum-mapping");
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(gs);
em.getTransaction().commit();
em.close();

Java enums and the EnumType annotation

There is, however, a last-minute surprise with this Hibernate enum mapping approach. The ordinal number of the Java enum is written to the database, as opposed to an actual String. So the ROCK gesture is persisted as 0 and the PAPER gesture is persisted as 1.
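Plain Java shows exactly where those numbers come from. Assuming the Gesture constants are declared in the order ROCK, PAPER, SCISSORS, the ordinal is simply the declaration position:

```java
public class EnumOrdinalDemo {
    // Mirrors the tutorial's Gesture enum: ordinals follow declaration order.
    enum Gesture { ROCK, PAPER, SCISSORS }

    public static void main(String[] args) {
        // EnumType.ORDINAL persists these integers:
        System.out.println(Gesture.ROCK.ordinal());  // 0
        System.out.println(Gesture.PAPER.ordinal()); // 1
        // EnumType.STRING persists this text instead:
        System.out.println(Gesture.PAPER.name());    // PAPER
    }
}
```

Persisting ordinals is fragile: inserting a new constant before PAPER silently renumbers every row already in the database, which is the usual argument for preferring the String form shown next.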

A default JPA or Hibernate enum mapping writes ordinal values to the database.

To get the JPA or Hibernate enum mapping process to write text to the database instead of an ordinal number, the trick is to use the @Enumerated annotation with a value of EnumType.STRING.

@Enumerated(EnumType.STRING)
private Gesture clientGesture;

@Enumerated(EnumType.STRING)
private Gesture serverGesture;
Using EnumType.STRING persists text instead of Java enum ordinal numbers.

Mapping with the @Enumerated annotation and EnumType

With the EnumType set to STRING as opposed to ORDINAL, the JPA and Hibernate enum mapping represents the Java enum with specific text instead of the enum’s ordinal value.

Of course, all of these mappings have used JPA annotations. The concepts all map directly to Hibernate, although the syntax is slightly different, especially if you use an hbm.xml file for enum mapping. The XML would look as follows:

<property name="gesture" column="GESTURE">
  <type name="org.hibernate.type.EnumType">
    <param name="enumClass">com.mcnz.rps.Gesture</param>
    <param name="useNamed">true</param>
  </type>
</property>

When the useNamed parameter is set to true, the String representation of the enum is persisted to the database; without the useNamed parameter, or if it is set to false, an ordinal value representing the enum is used instead.

And that’s it. Those are the ins and outs of using Hibernate and JPA enum mapping facilities.


May 23, 2018  4:41 PM

How to ‘git cherry-pick’ from another branch to your own

cameronmcnz Cameron McKenzie Profile: cameronmcnz

In a previous tutorial, we took a look at how to cherry-pick a commit on the current branch, but one of the ancillary questions that commonly arises is how to perform a git cherry-pick from another branch. The topic of how to git cherry-pick from another branch, along with what the result of such an operation would be, is the focus of this tutorial.

As with all git tutorials, this one will start off with a clean repository and an empty working directory, which means the first step is to create a new folder, which I will name git cherry-pick example. The next step is to issue a git init call from within that folder.

/c/ git cherry-pick example (master)
$ git init
Initialized empty Git repository in C:/_git-cherry-pick-example/.git/

Preparing a branch for a git cherry-pick

With the repository initialized, the next step is to create three new files, adding a commit after each individual file is created. Since the repo was just initialized, all of this will occur on the master branch.

/c/ git cherry-pick example (master)
$ echo 'abba' > abba.html
$ git add . && git commit -m '1st commit: 1 file'
$ echo 'bowie' > bowie.html
$ git add . && git commit -m '2nd commit: 2 files'
$ echo 'chilliwack' > chilliwack.html
$ git add . && git commit -m '3rd commit: 3 files'

We are about to git cherry-pick from another branch, and specifically, we will be pulling in the second commit, but before we do we will delete all of these files and perform a commit to put the master branch back into an empty state.

/c/ git cherry-pick example (master)
$ rm *.html
$ git add . && git commit -m '4th commit: 0 files'

[master d6a8ce2] 4th commit: 0 files
3 files changed, 3 deletions(-)
delete mode 100644 abba.html
delete mode 100644 bowie.html
delete mode 100644 chilliwack.html

Inspecting the commit history

Issuing a git reflog command will show the rich commit history of the master branch. Note the hexadecimal id of the second commit, 63162ea, as this is the one we will use when we git cherry-pick from another branch.

/c/ git cherry-pick example (master)
$ git reflog
d6a8ce2 (HEAD -> master) HEAD@{0}: commit: 4th commit: 0 files
bc0f7d1 HEAD@{1}: commit: 3rd commit: 3 files
63162ea HEAD@{2}: commit: 2nd commit: 2 files
6adc6ff HEAD@{3}: commit (initial): 1st commit: 1 file

Switching to a feature branch

We will now create and move development onto a new branch named feature.

/c/ git cherry-pick example (master)
$ git branch feature
$ git checkout feature
Switched to branch 'feature'
/c/ git cherry-pick example (feature)

We will then create one file named zip.html and commit this file in order to create a small history of development on the feature branch.

/c/ git cherry-pick example (feature)
$ echo 'zip' > zip.html
$ git add . && git commit -m '1st feature branch commit: 1 file'

The next step is to git cherry-pick from another branch to this new one, but before we do, think about what the expected result is. We will cherry-pick the second commit from the master branch, namely the commit where the file named bowie.html was created. On the master branch, the bowie.html file sits alongside the abba.html file, which was created prior. What will the cherry-pick bring back? Will it bring back both the abba.html and bowie.html files? Will it resurrect just the bowie.html file? Or will the command fail as we try to git cherry-pick across branches? Let’s see what happens.

How to git cherry-pick across branches

The id of the bowie.html commit was 63162ea, so the command to git cherry-pick is:

/c/ git cherry-pick example (feature)
$ git cherry-pick 63162ea
[feature d1c9693] 2nd commit: 2 files
Date: Thu May 17 17:02:12 2018 -0400
1 file changed, 1 insertion(+)
create mode 100644 bowie.html
$ ls
bowie.html zip.html

The output of the command to git cherry-pick from another branch is a single file being added to the current working tree, namely the bowie.html file. The directory listing command issued above shows two files, the zip.html file and the bowie.html file, indicating that the only change to the working tree was the addition of the second file.

How git cherry-pick works

As you can see from this example, when you cherry-pick, what is returned is not the entire state of the branch at the time the commit happened, but instead, only the delta between the commit that happened and the state of the git repository prior to the cherry-picked commit.
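The behavior is easier to internalize with a toy model: treat the working tree as a map of file names to contents, and a commit as the delta it introduced. This is only an analogy written in Java, not how git actually stores anything:

```java
import java.util.Collections;
import java.util.Map;
import java.util.TreeMap;

public class CherryPickModel {
    // A cherry-pick applies a commit's delta to the current tree;
    // it does not restore the full snapshot the commit belonged to.
    static Map<String, String> cherryPick(Map<String, String> workingTree,
                                          Map<String, String> delta) {
        Map<String, String> result = new TreeMap<>(workingTree);
        result.putAll(delta);
        return result;
    }

    public static void main(String[] args) {
        // The snapshot of master at the 2nd commit held two files,
        // but the commit's delta is only the one file it added:
        Map<String, String> delta = Collections.singletonMap("bowie.html", "bowie");

        // Current working tree on the feature branch:
        Map<String, String> feature = Collections.singletonMap("zip.html", "zip");

        // Only bowie.html is applied; abba.html never comes back.
        System.out.println(cherryPick(feature, delta).keySet()); // [bowie.html, zip.html]
    }
}
```

The model makes the earlier result obvious: abba.html was not part of the second commit’s delta, so the cherry-pick never touches it.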

It should also be noted that any time you git cherry-pick from another branch, a new commit gets registered in the branch history, which a subsequent git reflog command will confirm.

Needing to git cherry-pick from another branch is a common occurrence during software development cycles. As you can see from this example, so long as the hexadecimal id of the commit is known, performing a git cherry-pick from another branch is a safe and rather simple operation, especially if the branch doing the cherry-pick can merge in the change without any clashes or conflicts.

May 17, 2018  9:41 PM

Need to ‘git cherry-pick’ a commit? Here’s a simple example showing how

cameronmcnz Cameron McKenzie Profile: cameronmcnz

One of the most commonly misunderstood version control commands is git cherry-pick, and that’s a real shame because the ability to git cherry-pick a commit is one of the most useful skills a developer can employ when trying to isolate a software bug or fix a broken build.

What is git cherry-pick?

According to the official git documentation, the goal of a cherry-pick is to “apply the changes introduced by some existing commit.” Essentially, a cherry-pick will look at a previous commit in the repository’s history and apply the changes that were part of that earlier commit to the current working tree. The definition is seemingly straightforward, yet in practice there is a great deal of confusion over what exactly happens when someone tries to git cherry-pick a commit, or even cherry-pick from another branch. This git cherry-pick example will eliminate that confusion.

A git cherry-pick example

We will begin this git cherry-pick example with a completely clean repository, which means first creating a new folder, which we will name git cherry-pick tutorial, and then issuing a git init command from within it.

/c/ git cherry-pick tutorial
$ git init
Initialized empty Git repository in C:/git cherry-pick tutorial/.git/

With the git repository initialized, create five html files, and each time you create a file, perform a commit. The commit number and the number of files in the working tree will be included as part of each commit message. In a subsequent step, we will cherry-pick one of these commits.

Here are the commands to create the five, alphabetically ordered .html files along with the git commands required to add each file independently to the git index and subsequently commit those files to the repository:

/c/ git cherry-pick tutorial
$ echo 'alpha' > alpha.html
$ git add . && git commit -m "1st commit: 1 file"

$ echo 'beta' > beta.html
$ git add . && git commit -m "2nd commit: 2 files"

$ echo 'charlie' > charlie.html
$ git add . && git commit -m "3rd commit: 3 files"

$ echo 'whip it' > devo.html
$ git add . && git commit -m "4th commit: 4 files"

$ echo 'Big Lebowski' > eagles.html
$ git add . && git commit -m "5th commit: 5 files"

A note on the echo command

The echo command will be used as a file creation shortcut. For those unfamiliar with this shortcut, echoing text and then specifying a file name after the greater-than sign will create a new file with that name, containing the text that was echoed. So the following command will create a new file named agenda.html with the word devops contained within it:

$ echo 'devops' > agenda.html

Delete and commit before the cherry-pick

We will now delete all five recently created files to clear out the working directory. A subsequent commit will be needed here as well.

/c/ git cherry-pick tutorial
$ rm *.html
$ git add . && git commit -m "all deleted: 0 files"

To recap: five files were created, and each file created has a corresponding commit. Then all the files were deleted and a sixth commit was issued. A chronicle of this commit history can be concisely viewed in the reflog. Take special note of the hexadecimal identifier on the third commit, which we will cherry-pick in a moment:

/c/ git cherry-pick tutorial
$ git reflog
189aa32 HEAD@{0}: commit: all deleted: 0 files
e6f1ac7 HEAD@{1}: commit: 5th commit: 5 files
2792e62 HEAD@{2}: commit: 4th commit: 4 files
60699ba HEAD@{3}: commit: 3rd commit: 3 files
4ece4c7 HEAD@{4}: commit: 2nd commit: 2 files
cc6b274 HEAD@{5}: commit (initial): 1st commit: 1 file

What happens when we git cherry-pick a commit?

This is where the git cherry-pick example starts to get interesting. We need to git cherry-pick a commit, so let’s choose the third commit where the file named charlie.html was created. But before we do, ask yourself what you believe will happen after the command is issued. When we git cherry-pick a commit, will we get all of the files associated with that commit, which would mean alpha.html, beta.html and charlie.html will come back? Or will we get just one file back? Or will the attempt to git cherry-pick a commit fail since all of the files associated with the commit have been deleted from our workspace?

The git cherry-pick command

Here is the command to git cherry-pick commit number 60699ba:

/c/ git cherry-pick tutorial (master)
$ git cherry-pick 60699ba

[master eba7975] 3rd commit: 3 files
1 file changed, 1 insertion(+)
create mode 100644 charlie.html

As you can see, only one file was added to the working directory, namely charlie.html. The files that were added to the repository in prior commits were not added, which runs counter to the expectation of many users. Many users believe that when you git cherry-pick a commit, all of the files that were part of that branch at the time of the commit will be brought into the working directory. This is obviously not the case. When you git cherry-pick a commit, only the change associated with that commit is re-applied to the working tree.


A hidden git cherry-pick commit

It should also be noted that it’s not just the working tree that is updated. When you git cherry-pick a commit, a completely new commit is created on the branch, as the following reflog command indicates:

/c/ git cherry-pick tutorial (master)
$ git reflog
eba7975 HEAD@{0}: cherry-pick: 3rd commit: 3 files
189aa32 HEAD@{1}: commit: all deleted: 0 files
e6f1ac7 HEAD@{2}: commit: 5th commit: 5 files
2792e62 HEAD@{3}: commit: 4th commit: 4 files
60699ba HEAD@{4}: commit: 3rd commit: 3 files
4ece4c7 HEAD@{5}: commit: 2nd commit: 2 files
cc6b274 HEAD@{6}: commit (initial): 1st commit: 1 file

When a developer encounters a problem in the repository, the ability to git cherry-pick a commit can be extremely helpful in fixing bugs and resolving problems, which is why understanding how the command works and the impact it will have on the current development branch is pivotal. Hopefully, with this git cherry-pick example under your belt, you will have the confidence needed to use the command to resolve problems in your own development environment.

May 11, 2018  12:47 PM

Google positions ‘Android Things’ to solve the IoT problem

BarryBurd Profile: BarryBurd

Google’s Android Things development kit turned v1.0 on Monday. I celebrated the birthday of the new toolkit, which hopes to address IoT problems in development, by belatedly attending a “What’s New in Android Things” talk at Google I/O (Google’s annual developer conference).

In the name “Android Things”, the word “Things” refers to devices that communicate with one another, with or without human intervention. For example, your smart thermostat is one component in your home’s Internet of Things. The thermostat optimizes your home’s energy usage and adjusts its setting when you say, “OK Google, raise the temperature by one degree.”

From where we’re sitting in the year 2018, IoT is like the proverbial Wild West, which creates IoT problems when it comes to development and integration. People build prototype IoT devices by soldering together parts from many sources, with many incompatible standards and several different communications protocols in play. There’s simply no agreement on the best tools to use or the best way to use them.

Solving IoT problems

The fundamental IoT problem is that devices must be very small. They can’t be the big, clunky computers that you see on people’s desktops and in people’s laps. They have to blend into the background and be part of whatever hardware they’re driving. They must respond in real time to somewhat unpredictable conditions, so the traditional chips and motherboards that you find in desktops and laptops don’t pass muster.

Unlike today’s desktops and laptops, the new low-power, small-profile devices that control IoT hardware haven’t been around long enough to settle on standards, which creates an IoT problem of its own. What’s more, solving the IoT standardization problem is harder than it was for conventional computers. A small computing system that works well in a home thermostat might be totally unsuitable for a wearable strap that measures a swimmer’s strokes.

So, getting started means learning about dozens (maybe even hundreds) of different IoT hardware platforms. Arduino and Raspberry Pi seem to dominate the field, but there are so many alternatives, each with its own specs and features, that the choice of hardware for a particular project can be mind-bending. Each hardware platform comes with its own capabilities, its own way of being controlled (perhaps with C, Python, or a home-grown assembly language) and its own operating system (if it even has an operating system). How does an IoT innovator deal with all these choices?

One possible answer to the IoT problem is Android Things. To create Android Things, Google took the existing Android SDK (the one that powers 80% of the world’s smartphones) and modified it to deal with IoT problems. Think about a device that monitors traffic on a busy street and sends the data to a server in the cloud. That device has no keyboard, no screen, and no need for a user interface. So, the Android Things toolkit provides no packages for communicating with an ordinary user. Instead, the toolkit’s packages allow engineers to communicate through general purpose input/output (GPIO) pins. Setting up the communication means typing code such as this:

Gpio button = PeripheralManager.getInstance().openGpio(pinName);
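
For a slightly fuller picture of the pattern, here is a compact, self-contained sketch of edge-triggered GPIO handling. The `GpioCallback` interface and the `SimulatedPin` class below are simplified, hypothetical stand-ins for the real types in Android Things’ com.google.android.things.pio package; only the general shape of the callback flow mirrors the SDK.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-in for the Android Things GPIO callback type; the
// real interfaces live in com.google.android.things.pio.
interface GpioCallback {
    boolean onGpioEdge(String pinName); // return true to keep listening
}

// Hypothetical test double standing in for a physical GPIO pin.
class SimulatedPin {
    private final String name;
    private final List<GpioCallback> callbacks = new ArrayList<>();

    SimulatedPin(String name) { this.name = name; }

    void registerGpioCallback(GpioCallback cb) { callbacks.add(cb); }

    // A real pin fires its callbacks when the voltage changes; here we
    // trigger the edge by hand to show the flow of control.
    void simulateEdge() {
        callbacks.removeIf(cb -> !cb.onGpioEdge(name));
    }
}

public class GpioSketch {
    public static void main(String[] args) {
        SimulatedPin button = new SimulatedPin("BCM21");
        button.registerGpioCallback(pinName -> {
            System.out.println("Edge detected on " + pinName);
            return true;
        });
        button.simulateEdge(); // prints: Edge detected on BCM21
    }
}
```

On a real board, the callback would typically toggle an actuator or report the reading to a cloud service rather than print to a console.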


There are many advantages to deriving an IoT version of Android from the venerable mobile-phone version. For one thing, developers can leverage their existing Android coding skills as a starting point for creating IoT applications. Also, from its years as a smartphone operating system, Android brings a full stack of services that can be used to build new IoT applications.

Google has partnered with several companies, such as Qualcomm, MediaTek, NXP, and Raspberry Pi, to make Android Things available for their products. Together they’ve created lists of pre-certified hardware that runs the Android Things SDK. This pre-certified hardware spans the range from devices used mainly for prototyping and development to devices that are practical in real-world settings. So, with Android Things, the transition from development and testing to actual production doesn’t require a change in the software platform.

Security: Another IoT problem

Security is an important concern in IoT. That’s why security is baked into several stages of Android Things. This includes a sandbox for apps, the existing Android permissions scheme, signed system images, a verification step whenever a device reboots, and keys that identify particular hardware. It’s tempting to say that, with the Android Things security features, an app developer is relieved of the usual concerns about security. Of course, this view is too simplistic. In this day and age, everyone has to think about security, even if that thinking happens at the high level provided by the Android Things SDK.

Each release of Android Things will be supported for at least three years, and Android Things devices will receive consistent, automatic updates. What’s more, Android Things is free to use. According to the speakers at Google I/O 2018, no fees are required in order to use Android Things. (Please check with your legal team to verify all the details. I’m not a lawyer and I’ve never played one on TV. My claims about Android Things licensing are based only on what I’ve heard, not on anything that I know for sure.)

In the olden days, each farm tool was hand-crafted with its own size and shape. Then, as the 20th century rolled around, the world discovered standardized parts. At the dawn of the consumer computer age (the early 1980s), several different vendors proposed their own standards for disk storage and operating systems. The field eventually settled on only a few players, such as UNIX, Linux and Windows.

Today, as we approach the 2020s, we have hundreds of standards for IoT hardware and software platforms. The typical innovator must choose among the many alternatives to create a system that has internal consistency and, if the stars are all aligned, can also talk to devices that are external to the system. Perhaps Android Things is a way to create order from this chaos. Let’s ask our robot arms to keep their mechanical fingers crossed.

May 9, 2018  2:43 PM

Smart Compose and the Visual Positioning System impress at Google I/O

BarryBurd Profile: BarryBurd

This year’s Google I/O conference kicked off on May 8 with a nearly two-hour keynote. The keynote was held at the outdoor Shoreline Amphitheater near Google’s headquarters in Mountain View, California. The presentation was packed with announcements about new app features, some of which were quite mundane, while others, such as Smart Compose and the Visual Positioning System for the Google Maps app, prompted me to ask, “Where has this been all my life?”

Introducing Smart Compose

The first new feature, called Smart Compose for Gmail, auto-completes sentences that the user begins typing. You’re probably used to the simple word completion that keyboard apps provide, but Smart Compose does much more. Smart Compose offers to auto-type the remaining half of a sentence and does so by reusing terms from the previous sentences in the email. For example, if you type “Please pay for,” then Smart Compose might suggest “the shirts and socks.” It does this because you typed “shirts” and “socks” in some earlier sentences. Apparently, Smart Compose takes hints from your entire document, not just from beginnings of words or from beginnings of sentences. Smart Compose will be available later this month.
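
As a toy illustration of the underlying idea (nothing like Google’s actual implementation, which is presumably a far more sophisticated machine-learned language model), a completer that reuses terms from earlier in a document might look like this; the `ToyCompose` class is invented for this sketch:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A toy "compose" helper: suggest the word used earlier in the text
// that most frequently continues the prefix being typed.
public class ToyCompose {
    public static String suggest(String earlierText, String prefix) {
        Map<String, Integer> counts = new LinkedHashMap<>();
        for (String word : earlierText.toLowerCase().split("\\W+")) {
            if (word.startsWith(prefix.toLowerCase()) && word.length() > prefix.length()) {
                counts.merge(word, 1, Integer::sum);
            }
        }
        // Fall back to the prefix itself when nothing earlier matches.
        return counts.entrySet().stream()
                .max(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .orElse(prefix);
    }

    public static void main(String[] args) {
        String mail = "Please pay for the shirts and socks. The shirts were expensive.";
        System.out.println(suggest(mail, "sh")); // prints: shirts
    }
}
```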

Advanced photo filters

In the next few months, Google Photos will be able to make automatic changes to your snapshots. Your phone’s software will decide that a photo should be a bit brighter and, with the touch of an icon, the photo’s brightness will be enhanced.

Even more impressive is the treatment of images containing documents. Imagine that you take a picture of a letter-sized page of paper. No matter how careful you are, the image is bound to be a bit skewed. Maybe the top of the paper is farther from the camera lens than the bottom, so the page isn’t exactly rectangular. Google Photos can adjust the image so that the page fills the screen precisely. The Android app can turn the image into a PDF and you can copy text within the document. You can quickly use the text to trigger searches and other actions.

Usurping Siri with Google Assistant

Introducing Google Maps VPS features

Introducing enhancements to Google Maps, including the new Visual Positioning System.

In the coming weeks, you’ll be able to conduct a continued conversation with Google Assistant. You won’t have to repeat “OK, Google” at the beginning of each sentence. In a demonstration during the Google I/O keynote, a speaker issued compound sentences with multiple requests. (“Tell me about such-and-such and do this-other-thing.”). Google Assistant understood these complicated commands. Apparently, the Assistant is also capable of determining when a conversation ends without being given an explicit “Stop” command. Unfortunately, this demo went quickly, and it didn’t include much detail about these new Google Assistant features.

There was also a demo, straight out of science fiction magazines, in which Google Assistant made a phone call to a hair salon on behalf of a human user. The demo started with the human user saying something like “OK Google. Make a hair appointment for me.” The Assistant called a hair salon and said something like, “I want to make an appointment for Joan Smith on Thursday around noon.” The person at the hair salon offered alternative times and Google Assistant followed up with intelligent responses. I can’t say for sure, but I’m willing to bet that this demo was very carefully curated, and that most attempts to have Google Assistant carry on long conversations with unknowing storekeepers would end up in less-than-satisfactory results. One way or another, no time frame was announced for the full release of such Google Assistant features.

How often do you check the ratings for restaurants, only to find that the first seven restaurants all get the same 4.5 rating? Sometime this summer, Google Maps will be able to refine the ratings based on your tastes. Do you tend to visit medium-priced restaurants or slightly higher-priced establishments? Have you rated similar restaurants with a 5, or have you given the nearby dive a bad rating? Google Maps will tell you which of the highly-rated restaurants you’ll probably prefer and will tell you why it came to those conclusions. All this mining of information about your habits and tastes might seem a bit creepy but I’m fascinated by the fact that these guesses can be automated.

Google invents the compass

Above all, my favorite new feature was another enhancement to Google Maps called the Visual Positioning System. I often come out of a subway station in New York City at an intersection that’s unfamiliar to me. I know I should walk southward, but I can’t tell which direction is south. At a particular intersection, some street signs might be missing, and if I ask two passers-by which direction is south, I get three different answers. I start walking on a trial-and-error basis until I find a way to get my bearings.

A new Visual Positioning System

With a new Google Maps feature called the Visual Positioning System, the user holds the phone up so that the camera lens faces the street in any direction. The Google Maps app overlays the camera’s street image on top of the ordinary map. With all this information, the app figures out which direction the phone is facing and draws big fat arrows pointing toward the place where the user should walk. The Visual Positioning System is yet another example of the benefits of augmented reality.

OK, I admit it. Some of these features won’t work as smoothly in real life situations as they did during the Google I/O keynote demos, and a few of them are downright scary in their implications for security and privacy. But I’m a techie and I enjoy living in a world with futuristic conveniences. It makes me feel special.

May 1, 2018  2:27 PM

SunCertPathBuilderException fix for Jenkins plugin download

cameronmcnz Cameron McKenzie Profile: cameronmcnz

As I continue to publish Maven, Git and Jenkins tutorials as part of TechTarget’s coverage of popular DevOps tools, occasionally as I work on examples I run into peculiar problems that are both difficult to diagnose and frustrating to fix. The random and annoying SunCertPathBuilderException Jenkins plugin download error is just one of the many such problems that comes to mind. I’m not sure why plugins regularly become a source of consternation, be it a Jenkins plugin or a Maven plugin, but plugins are routinely problematic.

Jenkins admin console SunCertPathBuilderException

The SunCertPathBuilderException on the Jenkins plugin download page.


To quickly fix the SunCertPathBuilderException Jenkins plugin download problem, change the update site’s protocol prefix from https to http.


I’m not exactly sure what triggers the SunCertPathBuilderException when one attempts to download Jenkins plugins. Sometimes a machine with a fresh installation of the JDK and Jenkins can access the Jenkins plugins download page without triggering the error. Other times a fresh installation of Jenkins and the JDK runs into it. Maybe some incremental versions of the JDK or the embedded Jetty web server are more persnickety than others?

Sometimes I think the SunCertPathBuilderException error is related to the use of a virtual machine or a particular operating system, but the problem happens so randomly, whether I run a virtual Ubuntu box or a local Windows installation, that I can’t figure out exactly what the Jenkins SunCertPathBuilderException issue is.
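
When the exception does strike, the SunCertPathBuilderException itself is usually buried several levels deep in the stack trace, underneath an SSLHandshakeException and a ValidatorException. A small, illustrative helper (hypothetical, not part of Jenkins) that walks an exception’s cause chain can confirm that a certificate-path failure really is the root of the problem:

```java
// Illustrative helper: walk a Throwable's cause chain looking for a
// class name or message that matches a fragment such as
// "SunCertPathBuilderException".
public class CauseFinder {
    public static boolean causedBy(Throwable t, String fragment) {
        int depth = 0; // guard against pathological cause cycles
        for (Throwable cause = t; cause != null && depth < 100;
                cause = cause.getCause(), depth++) {
            if (cause.getClass().getName().contains(fragment)
                    || String.valueOf(cause.getMessage()).contains(fragment)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // Stand-in nesting that mimics the shape of the real stack trace.
        Throwable root = new Exception("unable to find valid certification path to requested target");
        Throwable middle = new RuntimeException("PKIX path building failed", root);
        Throwable top = new Exception("SSL handshake failed", middle);
        System.out.println(causedBy(top, "certification path")); // prints: true
    }
}
```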

Fix the Jenkins plugin problem with certs

If you dig through the forums, you’ll find two commonly recommended solutions. The first, which is both the most onerous and the most technically correct, is to update the security certificate catalog used by the embedded Jetty web container and the underlying JDK. I might advocate this particular SunCertPathBuilderException solution if I worked at a bank, but it’s a load of work, and if you just need to install Jenkins locally in order to learn the tool and do some Jenkins tutorials, it’s overkill.

The Skip Certificate Check plugin

The second popular option to fix the SunCertPathBuilderException Jenkins plugin download problem is to install the Skip Certificate Check plugin. Jenkins creator Kohsuke Kawaguchi wrote this lightweight add-on, which tells the underlying JVM to bypass all security certificate checks. I’ve used this SunCertPathBuilderException solution myself, and it’s a good one. Originally I thought I’d run into the paradoxical problem of trying to install a Jenkins plugin that addresses the issue of not being able to install Jenkins plugins, but it would appear that this add-on is bundled with the tool. This solution gets a thumbs up from me, but there’s actually a much simpler solution to the SunCertPathBuilderException problem.

The Skip Certificate Check plugin fix for the Jenkins plugin download problem.

You can fix the SunCertPathBuilderException Jenkins plugin download error with the Skip Certificate Check plugin.

The SunCertPathBuilderException fix

The fastest SunCertPathBuilderException fix is to change the protocol of the Jenkins update site from https to http. Since it’s the Secure Sockets Layer (SSL) communication that causes the problem, if you don’t use SSL, the problem goes away.

Fix SunCertPathBuilderException Jenkins plugin download errors with a URL edit.

The easiest SunCertPathBuilderException Jenkins plugin download error fix.

To change the Jenkins plugin download URL, go to the Advanced tab of the Jenkins plugin manager and scroll down to the edit box for the Jenkins update site URL. Remove the ‘s’ in ‘https’, submit the change and then resume your search for Jenkins plugins. The download catalog will be accessible, and subsequent steps in the download of components such as the Jenkins Git plugin or the Jenkins Maven plugin will proceed without issue.

April 30, 2018  5:35 PM

Here’s why you need to learn Maven and master the build tool’s fundamental concepts

cameronmcnz Cameron McKenzie Profile: cameronmcnz

At various conference sessions and in development workshops, one of the sad realities of which I’m constantly reminded is that a large number of very experienced enterprise Java developers simply don’t know how Maven works. As organizations plot a course on their DevOps roadmaps, more and more Maven-based and Maven-integrated technologies will be introduced into the workplace. If developers don’t learn Maven and don’t get properly up to speed on Maven fundamentals, DevOps transitions will leave those developers in the dust.

A lack of Maven fundamentals

A lack of familiarity with Maven fundamentals among my enterprise Java compatriots is understandable. Java developers who cut their teeth on J2EE frameworks dealt heavily with Apache Ant and never got the chance to learn Maven. By the time Maven started pushing Ant out of the picture, tooling built directly into IDEs such as NetBeans and Eclipse abstracted away the Maven underpinnings. This largely hid how Maven worked, so it was not necessary to learn Maven fundamentals in order to build a complex application.

Given the seamless integration of Maven with IntelliJ or JDeveloper, a software developer had no compelling reason to dig down into the weeds and learn Maven by finding out how a Maven repository functions or how a Maven dependency gets resolved. But in this brave new world of DevOps tooling, that lack of understanding of Maven fundamentals will come back to bite. An understanding of how Maven works is more important than ever as organizations introduce DevOps tools like Jenkins and Gradle to the application lifecycle, both of which lean heavily on Maven repositories to store third-party libraries and on Maven dependency management techniques.

Doing DevOps? Then learn Maven

For example, the most commonly run Jenkins build jobs are created using the Maven plugin. If a developer who is thrown into the role of a DevOps engineer doesn’t understand Maven fundamentals, it will be tricky to read through a Jenkins log file and troubleshoot broken builds. And while Gradle can certainly be considered a competitor to Maven in the build tool space, many of the tasks a Gradle script performs are tied directly to Maven. A Gradle script might connect to a Maven repository, call upon Maven to resolve dependencies and reference Maven-hosted JAR files and libraries during the build.

Learn Maven and master Gradle, Jenkins and Ant

Learn Maven fundamentals, and working with other DevOps tools like Ant, Gradle and Jenkins will be much easier.

Learn Maven’s key concepts

Of course, the curator of the CI build doesn’t have to be a DevOps master. A strong understanding of Maven fundamentals is important, but being able to play the Maven game at the expert level isn’t necessary either. The key Maven concepts every Java developer should master include:

  • How the project object model (POM) file works. This is the pom.xml file that sits at the root of every Maven project.
  • The basic structure of a Maven project, and how Maven archetypes can be used to structure different types of projects, be it a J2SE, Jakarta EE or Spring Boot app.
  • The purpose of the commonly used Maven lifecycle phases and hooks, including: clean, compile, test, package, install and deploy.
  • How dependencies are configured in the pom.xml file and how Maven resolves and links to external libraries at runtime.
  • The role of the local Maven repository and the benefits that can be garnered through the use of a shared binary repository such as Apache Archiva, Sonatype’s Nexus repository manager or JFrog’s Artifactory.
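
As an example of the dependency mechanism in that list, the whole thing boils down to a few lines of XML in the pom.xml. A minimal sketch, using the familiar JUnit coordinates purely for illustration:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0-SNAPSHOT</version>

  <dependencies>
    <!-- Maven resolves this from the local repository first, and
         downloads it from a remote repository such as Maven Central
         if it is not already cached -->
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.12</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>
```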

That may sound like a tall order, but it’s actually not. Any software developer or member of the operations team could easily take a few hours on a Friday afternoon when they’re doing nothing other than pretending to work and just download Apache Maven, install the tool and configure the requisite MAVEN_HOME and M2_HOME system variables. Then take the rest of the afternoon to play around with the mvn command-line tool, create a Maven project or two, run a few Maven tasks and add a logging framework dependency to a piece of code. Within an hour or two, they’d likely know more about Maven than the captain of the DevOps team.

It’s not hard to learn Maven, and it’s worth a developer’s time to invest a few hours into playing with the build tool and mastering the Maven fundamentals, especially if the adoption of DevOps tools like Jenkins or Gradle is on the horizon.

April 28, 2018  3:33 AM

Thwart the threat by abiding by network security fundamentals

Daisy.McCarty Profile: Daisy.McCarty

Cloud, mobile, and IoT have changed the face of the modern network, so it’s no surprise that network security fundamentals have become important for businesses of all sizes. It seems even the largest organizations are just one mistake away from a massive data breach or other system failure, much of which could be avoided if more attention were paid to network security fundamentals.

#1 Prioritize full network visibility

According to Ryan Hadley, COO of Signal Forest, today’s IT leaders know they need to be aware of everything in their network. But this isn’t always easy to do. “CIOs who want visibility on all wired clients have to look at all of the switching infrastructure and what switches are capable of in terms of higher end security like 802.1X, etc.,” he said. “They need to be able to tell what types of devices are in use, where they are connecting to, what kind of access has been given to them by default, and if all the network access controls fail, whether the system is failing open or closed.”

One of the network security fundamentals on the wireless side is to maintain full transparency about each connected device and to provide pages where each user can register their own device for specific network access. These are best practices. The fact that the “edge” will disappear and the core network will expand outward doesn’t mean location no longer matters. In Hadley’s view, the knowledge of where a device is located doesn’t just determine what type of access is allowed. Location data can also be used to identify potential intrusions. For example, if a worker logs in from the on-site network and then signs in from a smartphone fifty miles away, that’s a red flag.
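
Hadley’s red-flag example is straightforward to mechanize. The sketch below is a hypothetical “impossible travel” check, not anything Signal Forest ships; the 500 km/h threshold and the haversine distance are simplifying assumptions:

```java
// Hypothetical "impossible travel" check: flag a pair of sign-ins whose
// implied travel speed between locations is physically implausible.
public class ImpossibleTravel {
    static final double MAX_PLAUSIBLE_KMH = 500.0; // roughly airliner speed

    // Great-circle (haversine) distance between two points, in km.
    static double distanceKm(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * 6371.0 * Math.asin(Math.sqrt(a));
    }

    static boolean suspicious(double lat1, double lon1, long minute1,
                              double lat2, double lon2, long minute2) {
        double km = distanceKm(lat1, lon1, lat2, lon2);
        double hours = Math.abs(minute2 - minute1) / 60.0;
        if (hours == 0) return km > 0.1; // same moment, different place
        return km / hours > MAX_PLAUSIBLE_KMH;
    }

    public static void main(String[] args) {
        // A desktop login, then a phone sign-in from ~75 km away five
        // minutes later: the implied speed is roughly 900 km/h.
        System.out.println(suspicious(40.71, -74.00, 0, 41.31, -73.60, 5)); // prints: true
    }
}
```

A production system would of course pull sign-in events and locations from the identity provider rather than hard-coded coordinates.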

#2 Do away with pre-shared keys (PSK)

In Hadley’s view, “Companies that still have PSK networks are quickly discovering that they are the ones that are most liable for a breach. Everyone is migrating away from blanket-level access to a more role-based security access point where you are a user with certain privileges that are representative of a particular role assigned based on Active Directory credentials, your title in the company, the type of device being used, where you are connecting, etc. They also want to move to 802.1X where there’s a cert that’s pushed out to each of the connecting devices and you’ve got a cert that’s on the server that’s facilitating the e-transaction.”

#3 Network security fundamentals are changing

Many of today’s enterprises are complacent and rely on network security techniques that were helpful in the past but are no longer pertinent today, said Brian Engle, cybersecurity consultant and risk advisor. “They aren’t hunting for their own weaknesses,” he explained. “Instead, they are reliant on things that have perhaps worked in the past such as firewall technology or anti-virus measures. They haven’t considered that most of their business processes are extended into software services in the cloud or places that their defensive measures aren’t currently reaching.”

#4 Never leave the door open for physical intrusion

One of the oldest network security fundamentals is to simply limit direct physical access to the network. An open USB port is like an unlocked car door; it can give data thieves deep access into a network.  “Physical security of devices is paramount to correctly configuring your firewalls,” Hadley said. “If you have a public facing location and you have USBs on those computers, you don’t want those to be active. You want them shut down. Or, at the very least you want to have a policy in place. This might be something at the Microsoft level or having a PC management program running that will lock down that USB port—or at least alert someone that a USB has been put into that slot and determine if it is OK to use.”

#5 Check and double check—then hire someone else to check

Both experts agreed a third party should perform penetration testing.  “Having someone else test what you have in place has a lot of value,” Engle said. “When you’ve built it, you see it through rose-tinted glasses. You may think you can see the holes better because you constructed it. But you want to have someone else looking at it from a different vantage point—especially someone whose sole focus is getting good at breaking into networks. They bring a different skillset and mindset than someone who is building things from a defender’s point of view. Having someone else checking and trying to break the things you built will reveal weaknesses you couldn’t see otherwise.”

#6 Don’t ignore the human factor

Attacks can happen when things are sent to individual users too, Engle pointed out. “When users click, it activates ransomware or something else that could lead to infiltration into the environment or exfiltration of data. Each of those things needs to be detected. Most detection is aimed outward rather than looking inward at what might be leaving. And a lot of enterprise security programs aren’t built to see the things that evaded what the traditional security measures would catch.”

While education and encouragement can help users avoid attacks, the classroom doesn’t always simulate the pressures that occur in real life when a target is being tempted, distracted, or frightened into clicking on a suspicious link or opening a potentially dangerous attachment. That’s why it is critical for organizations to plan in advance to detect these attacks promptly, respond swiftly, and recover fully.

#7 Understand the purpose and importance of the security environment

Engle spoke to the short-sightedness of approaching security as a challenge that is solved solely with technology. “It’s important to start a conversation about strategy and objectives to reduce risk rather than just building a security stack with various technologies.” Much like DevOps, security is not a set of tools to buy, but a cultural change to implement.

Finally, enterprises should take advantage of the real-life examples hitting the media to inform their own security strategies and ensure that those strategies address all network security fundamentals and best practices. Organizations need to start asking, “What if what happened to Equifax or Facebook happened to us? What then?” While it’s not pleasant to contemplate, it’s certainly the best way to highlight why security should be taken seriously, no matter the apparent cost.

April 11, 2018  9:14 PM

Borderless blockchain collaboration to change how software is developed

DeanAnderson Profile: DeanAnderson

The gaming industry has, despite its roots in technology, so far failed to adopt a collaborative model. But that may be about to change.

Gamers are among the most passionate of the world’s digital communities, and more games are being released than ever before. This wealth of releases brings a major problem: it is extremely difficult for indie studios and developers to rival the influence of multinational companies and bring attention to new games.

Blockchain technology could take advantage of the popularity of borderless online communities and be the catalyst that transforms this situation. New platforms, such as Gamestatix, will soon provide a level playing field for all developers, no matter how small. With more sophisticated technology, we can expect to see borderless collaboration continue, on a far wider and more advanced scale.

Rewards for co-creation?

When blockchain and cryptocurrencies were in their infancy, there was no feasible way to financially reward gamers en masse for co-creating games. Now that these technologies are established, a model where co-creators are guaranteed financial rewards is possible. Community creators can quickly become third-party developers and gain financial rewards for their work.

Gamers who play and review games could receive cryptocurrency. Developers will in turn have access to millions of gamers throughout the entire creation process, and will even be able to license user-generated content, sell, and market the games through the platform. All of this could happen with blockchain collaboration. And this new model comes ahead of a key transition in the global labor market.

While it is widely known that AI and automation could render many traditional jobs obsolete, it is also true that new careers will arise. For some, part-time passions may become full-time careers. But blockchain collaboration is different from the current sharing economy, as players and developers will be able to insulate their livelihoods from economic turmoil in their countries of residence.

Accessing world-wide talent

With blockchain facilitated peer-to-peer cryptocurrency payments and transactions, employers are now able to pay anyone anywhere in the world, and ultimately tap into a global talent pool.

In the UK, according to TIGA’s 2018 business survey, more than two-thirds (68 percent) of video games firms plan to increase their workforce. But 29 percent of developers are concerned about Brexit’s impact on their ability to recruit the right talent. “In order to grow and thrive, the UK video games industry will need to continue to recruit talent on a global level,” said Dr Richard Wilson, TIGA CEO, in a statement.

The ability to call on a dedicated community with a wealth of knowledge and expertise means developers will be able to efficiently generate, curate and promote their content.

Meanwhile, gamers make choices based on what they hear from the communities they trust, rather than on traditional brand placements. This allows developers to take advantage of the multiplier effect: the wider a gamer’s reach, the greater the awareness and interest among newer audience groups.

Deeper community involvement will yield game co-creation and will allow developers to work directly with community moderators and creators to formally co-craft, publish and sell add-on content that resonates with their communities.

The games industry could use blockchain collaboration technology and secure cryptocurrencies to lead the way, recognize talents and encourage real innovation and creativity, all while power and wealth are distributed.

A work revolution?

With blockchain involved in work, everything from recruitment to collaborative labor, finances and contracts can be far more productive, efficient, and democratic.

The video game industry should be optimistic. With a blockchain-facilitated, collaborative global workforce of gamers and developers on the horizon the industry is primed to revolutionize the world of work for its own people. It will influence other sectors as well.

Dean Anderson is the Co-Founder of Gamestatix

Gamestatix is an upcoming social platform for the co-creation of PC games that recognises, encourages and rewards user contribution, whilst giving game developers access to a global pool of talent to efficiently generate, curate and promote their content. The company was founded in 2016 by Dean Anderson, Visar Statovci and Valon Statovci.

Gamestatix is currently preparing to launch an ICO (Initial Coin Offering) – a form of crowdfunding for cryptocurrency and blockchain projects.

April 10, 2018  8:41 PM

Using Agile for hardware development to deliver products faster

Daisy.McCarty Profile: Daisy.McCarty

When metal and plastic are manipulated instead of ones and zeros, is it possible to pursue an Agile development process? Or is the idea of Agile for hardware development a misnomer?

The fact is, more and more organizations have given waterfall the cold shoulder and turned to scrum-, lean- and kanban-based models as they make Agile for hardware development a reality. A combination of rapid prototyping, modular design, and simulation testing makes Agile for hardware development a real possibility for modern technology manufacturing.

From Agile software to Agile hardware

With Agile software development, creative processes are the cornerstone of the framework and constant change is the order of the day. It’s expected that alterations will happen continuously. Relatively speaking, there’s a low cost associated with constant change in software since it requires only time, knowledge, and the ability to type some code. However, the physical world is not as amenable to alteration as the digital world. Hardware changes are associated with high costs and significant resistance.

Yet according to Curt Raschke, a product development project manager and guest lecturer at UT Dallas, Agile for hardware development isn’t really a fresh idea. In fact, Agile had its roots in the manufacturing world to begin with. “These ideas came out of lean and incremental product development,” he said. In that sense, it is no surprise the concept has come full circle and Agile has reappeared in hardware development. It simply requires a fresh look at existing best practices.

For Shreyas Bhat, CEO of FusePLM, a full-cloud product lifecycle management system that uses a cards-based approach to the development process, the question isn’t if Agile for hardware development can be done, but instead, how? “Hardware tends to be self-contained and has too many dependencies,” he said. “How do you split it into deliverable sections? You need to be able to partition the project and come up with milestones for the customer.”

Prototyping and Agile hardware development

Rapid prototyping is one part of the answer to the question of how. “These days, you can build a prototype cheaply and early on to give a customer a feel for the eventual functionality,” Bhat said. “With 3D printing, the cost of prototyping has gone down significantly.” Contract manufacturers have turned prototyping into a commodity in an industry where cost is always a driving factor. Being able to turn to a prototype provider for quick, inexpensive modeling saves both time and money for the actual manufacturer later.

As Bhat explained, the requirements for hardware have to be stable since changing tooling increases costs. Prototyping helps pin down the proper tooling early on. “You have a better idea of what is needed for production. That way, you’re getting the right tooling for your manufacturing line up front rather than having to retool down the road.”

But that doesn’t mean prototypes can solve all problems. Raschke explained one of the prime limitations. “It works for product development from a form factor point of view,” he said. “You can use it to get feedback ahead of time on look and feel. But you can’t use 3D printing alone for stuff with moving parts that require assembly.”

Practical thinking in Agile hardware design

One answer to the issue of Agile for hardware development lies in the nature of the design itself. The more modular an item is, and the fewer dependencies are involved within any given component, the easier it will be to make changes. Choosing the fewest material types possible to get the job done is also a key aspect of moving hardware in a more flexible direction. Again, these are well-known practices in the manufacturing sector and lend themselves well to implementation within an Agile framework.

When simulation or other types of iterative hardware evaluation are part of the process, it is a smart bet to design with testing in mind. Test-driven development (TDD) can readily be incorporated into hardware as long as the limitations and capabilities of the simulation environment are known. Cross-functional teams are the best fit for developing hardware in this way, with an eye toward validating design functionality through iterative testing.
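To make the TDD-for-hardware idea concrete, here is a minimal, purely illustrative sketch in Python. The `divider_output` function stands in for a call into a circuit simulator; every name here is a hypothetical assumption for the example, not part of any real EDA toolchain. The point is simply that design requirements can be written as executable tests before the physical hardware exists.

```python
# Hypothetical sketch: TDD against a hardware simulation model.
# divider_output() is a stand-in for a real simulator call; in practice
# this might wrap SPICE or an FPGA behavioral model.

def divider_output(v_in, r1, r2):
    """Simulate the output of an ideal resistive voltage divider."""
    return v_in * r2 / (r1 + r2)

def test_divider_midpoint():
    # Requirement: with equal resistors, the output is half the input.
    assert abs(divider_output(5.0, 1000, 1000) - 2.5) < 1e-9

def test_divider_never_exceeds_supply():
    # Requirement: the output can never exceed the supply voltage.
    assert divider_output(5.0, 100, 10000) <= 5.0

if __name__ == "__main__":
    test_divider_midpoint()
    test_divider_never_exceeds_supply()
    print("all simulated-hardware tests passed")
```

Tests like these encode the design requirements up front, so each simulated iteration of the hardware can be checked automatically, much as a software team would run its unit suite on every change.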

In fact, Raschke revealed that this is one of the prime challenges facing hardware where agility is concerned, particularly in terms of pretesting and system integration. “Design, development, and delivery are not always done by the same team,” he said. “Even if pieces can be developed by the same team, they are different subject matter experts. In a hardware product, there are electrical, mechanical, and software engineers along with firmware specialists, and so on.” These diverse experts would all have to be brought together in order to create a faster, more iterative approach to hardware development. That would be quite a feat for any Scrum Master to pull off.

The future of Agile for hardware development

Startups and fast-growing companies with bright ideas are primed to take on the Agile for hardware development challenge. While it may be risky, there is the chance of a high payoff. In Bhat’s view, “Where there is innovation, there is uncertainty. That’s in the sweet spot for Agile. You can do a lot of ‘what if’ scenarios to try out ideas and quickly release them to the customer.”

However, Agile does have one obvious limitation that Raschke addressed. “Hardware products are developed incrementally, but they can’t be released that way,” he explained. That’s certainly true. Given the high percentage of customers who ignore factory recalls and upgrades on everyday consumer goods, it’s hard to imagine them happy about the constant need to update their hardware, particularly if it involves a trip to a local provider or installation of a kit received in the mail. So, while Agile for hardware development and design may play an increasingly important role in the industry, delivery will remain a fixed point with little room for trial and error.