Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer the interesting questions I actually want answered. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil. I told him I was sorry that in this new age of serverless computing, cloud and containers, a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyway, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. They can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong, they’ll turn them off. That’s no longer some dramatic thing to do. Doing it in a company’s internal data center? That’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy; you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. As for the typical instances people use for Cassandra on Amazon, with or without us, they’ll move to the latest, greatest things that Amazon offers. I think the i3 class of Amazon instances right now is the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones that run, whether bare metal or virtualized, on most Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes 2 or 3 times faster even, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
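To make the idea concrete, here is a minimal sketch of the kind of plain Java loop a vectorizing JIT can transparently turn into SIMD code. The class and method names are illustrative, not Azul’s actual output; the point is that the source stays scalar while the compiler may emit vector instructions on capable hardware.

```java
// A simple reduction loop of the kind an LLVM-backed JIT such as
// Falcon can auto-vectorize. Illustrative sketch only.
public class VectorizableLoop {

    // Plain scalar Java source; each iteration is independent, so a
    // vectorizing compiler may emit wide (e.g. AVX) multiply-adds
    // for the loop body without any source changes.
    public static long sumOfProducts(int[] a, int[] b) {
        long sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += (long) a[i] * b[i]; // vector-friendly: no loop-carried dependency besides the reduction
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3, 4};
        int[] b = {5, 6, 7, 8};
        System.out.println(sumOfProducts(a, b)); // 1*5 + 2*6 + 3*7 + 4*8 = 70
    }
}
```

Whether the loop actually gets vectorized depends on the JIT and the host CPU, which is exactly Tene’s point: the same bytecode can run faster on newer hardware with no code change.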

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do the stuff. And we just need to bring the code to the right form, and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. It’s a great demonstration of how a runtime like a JVM can run the exact same code, and when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you’ve put into your JVM to improve JVM startup. And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much steeper rise to full speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts rather than profile and learn and only optimize after it runs. And we do it with a technique that’s very simple to explain, though not that simple to implement: we save previous run profiles, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run fast code from the first or the tenth transaction, not from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.
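The save-and-reuse idea Tene describes can be sketched in a few lines. The following toy model is purely illustrative — it is not ReadyNow!’s API or file format — but it shows the shape of the technique: record which methods ran hot, persist that profile, and seed the next run with it so “hot” decisions can be made from the very first call.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of profile persistence: a new run starts with the
// previous run's method-invocation counts instead of zero.
// Hypothetical names and threshold; not Azul's implementation.
public class ProfileSeed {
    static final int HOT_THRESHOLD = 10_000;
    private final Map<String, Integer> counts;

    ProfileSeed(Map<String, Integer> previousRun) {
        // Seed with the prior run's knowledge rather than learning from scratch.
        this.counts = new HashMap<>(previousRun);
    }

    void record(String method) {
        counts.merge(method, 1, Integer::sum);
    }

    // In a real JVM this decision gates which compilation tier a method gets.
    boolean isHot(String method) {
        return counts.getOrDefault(method, 0) >= HOT_THRESHOLD;
    }

    Map<String, Integer> saveProfile() {
        return new HashMap<>(counts);
    }

    public static void main(String[] args) {
        // First run: learn the hard way.
        ProfileSeed run1 = new ProfileSeed(new HashMap<>());
        for (int i = 0; i < 15_000; i++) run1.record("handleRequest");

        // Second run: hot from the first call, thanks to the saved profile.
        ProfileSeed run2 = new ProfileSeed(run1.saveProfile());
        System.out.println(run2.isHot("handleRequest")); // true
    }
}
```

The real feature also has to cope with profiles that no longer match the code after a deployment, which is part of why Tene calls it simple to explain but not simple to implement.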

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

March 9, 2018  1:38 AM

Acts of discrimination let gender inequality in technology go unresolved

Daisy McCarty

Coming up in tech over the past few decades wasn’t easy. A successful entrepreneur told me a story of how she landed her first tech job as a sales rep for a telecom agency. This was in the early days of deregulation, long before gender inequality in technology was an issue organizations were willing to address. She had a background in sales but knew nothing about the industry. She did well in the interview, but the hiring manager contacted her with bad news. The word from the VP of sales was, “We just hired our first woman and we’re not going to hire another until we see if she works out.”

This vivacious woman wasn’t about to take that as a final answer. She told the hiring manager in no uncertain terms: “I want you to get me an interview with the sales VP.” During that meeting, she talked her way into a job. The VP was reluctant, and warned, “I’m going to give you a chance, but I’m going to be watching you.”

“I hope you do!” she answered.

The VP got to watch as she went on to fight gender inequality in technology and become a top performer. She learned as she went and hopefully made it easier for the next few females who tried to join the sales force.

A woman who is now an executive at a software firm revealed how dreadful it was to work at a different company many years ago as the sole female coder in her department.

“I was the only woman there, and I guess this was before people knew how to deal with us,” she said. The harassment was overtly sexual and very distressing. “As just one example, when they would have a company dinner and start telling off color jokes, my name was always the one inserted as the butt of the joke.”

She took these issues to her boss who simply advised, “Don’t tell your fiancé about it.” After all, she wouldn’t want to upset her husband-to-be with such trivialities when it was all just in good fun. When she couldn’t stand the harassment any longer, she left without even having another job lined up. Sadly, the recent Uber scandal reveals sexism is alive and well in some organizations in the tech industry. While it may be in vogue to be politically correct on the surface, a culture of male entitlement only serves to advance gender inequality in technology, while wreaking havoc in the lives and careers of women in tech.

Perhaps the second most annoying expression of sexism is the dismissal of the opinions of women—even those who have credentials and experience that put them in a position to be exceptionally knowledgeable. A highly educated woman in the analytics field discovered that being heard takes real effort.

“I’m the only female in this role within my organization,” she said. “I have to push a lot to be taken seriously. I communicate in writing with clients. Having a PhD can help with respect and it’s definitely easier to navigate the situation because of having that education level. It shouldn’t be that way, but when gender inequality in technology prevails, it is. I’ve still found I have to get pushy verbally and give evidence that I am right. It’s like trying to swim in mud.”

Women in tech are there to win

An entrepreneur who started and still runs a thriving business found that she faced hurdles in getting venture capital.

“In one meeting I remember the men looking at us with veiled amusement,” she said. “They didn’t come out and say it but the sense we got was that they felt, ‘You have a great idea, but you are women! Come back when you have a male CEO.’ Based on the questions they were asking, the VCs couldn’t wrap their minds around the fact that we had long term goals. They obviously thought we wouldn’t be around long.”

It’s not just men who can make life hard for women in tech. Female leaders can also contribute to gender inequality in technology related fields without even realizing it. One woman told me about getting passed over for one promotion after another because her female bosses simply assumed she wanted to stay home with her children. A failure to take female ambition seriously means that many organizations are missing out on the opportunity to invest in developing and mentoring high potential women.

The unpaid second shift

For some women, the environment doesn’t have to be sexist in any overt way to create an obstacle to advancement. It just has to ignore the responsibilities that fall on women’s shoulders on the home front.

According to an expert speaker on female empowerment in the workplace, “Women in technology are often faced with having to work in an environment where they don’t feel comfortable. Silicon Valley in particular has a startup culture. Working in a startup environment or a new department within a company requires intense, long work hours. A lot of younger women leave. They can’t adequately manage being a wife, mother, and so on.”

Women are still being forced to choose between their careers and taking care of children and aging parents.

“Even with all the focus on placing women in senior positions, when you see women who are highly successful they are often unmarried or have a stay-at-home husband who relocated to support them in their career,” this expert said.

She pointed out that men are facing hard choices as well in their careers. But women are usually the ones who end up stepping in to take on the unpaid work while their plans for promotion are sacrificed.

Individual encounters can rankle

According to a mid-level manager who manages a sizeable team, it pays to be choosy about where to work. Women really do research a company’s culture before accepting a position. They need to know if they are getting into a bad situation.

“I’ve worked in the military, automotive, and construction arenas,” the manager said. “The tech company where I am now does not tolerate harassment. It has a good reputation, and I made sure of that before taking the job.”

Yet she certainly knows what it is like to run into men who are jarred by the presence of a woman in their midst. They aren’t going to welcome an outsider with open arms.

“Some people don’t think women should be in that role. You can work to make that relationship professional, but it is not going to be cordial.”

There are also women who say they’ve never noticed any harassment or gender-based discrimination in a conventional corporate tech environment. According to one woman who worked in a larger organization for many years, “Either I’m extraordinarily dense, or I wasn’t being discriminated against.” Yet as a consultant who now deals directly with clients, she can easily think of that one person who routinely makes her feel uncomfortable. Not surprisingly, he’s an older gentleman who holds views that can most kindly be called “highly traditional.”

“He has told me to my face that ‘Women shouldn’t be in the workforce’ and ‘The reason women get married is so men can protect them,'” she said.

Even the client’s condescending greeting of “It’s so good to see your pretty face!” rightfully irritates this successful and accomplished business owner. She certainly can’t imagine a male counterpart being treated in a similar way.

Coping with a male-dominated workplace

As I’ve talked with women over the past year about what it takes to make things work as part of a team or as a leader, I have noticed an interesting trend. Women in the Baby Boomer generation are much more likely to say they had to “act like one of the guys” or simply treat gender as irrelevant in order to survive in their careers. They have learned to lead and collaborate in a masculine way to fit in and be seen as an equal.

Younger women in the 30 to 45 age-range are more likely to say they consciously bring female aspects and qualities to the table to help them succeed. Things like emotional intelligence, good communication, multi-tasking, and the ability to collaborate are traits they use to their advantage.

Both strategies certainly have merit and can be used depending on the workplace culture, a woman’s career goals, and her natural aptitudes. It will take persistence, experimentation, and courage to find the right mix for each individual. Here is some additional advice from the women in my tech network for how to make it work at work.

  • On fitting in: “If you are a minority in a group it’s partly your responsibility to fit in. As an example, I’m not a big follower of sports, but I will skim the headlines before going into the CEO staff meeting. We have to see what we can contribute to the conversation.”
  • On work/life balance: “We find that work-life balance is a day-by-day thing. Some days, work gets most of you. Other days, you have to make the decision to put family first.”
  • On earning respect: “You can’t be a shy or retiring person and get your voice heard. You have to speak up and step up. You can’t be uncertain, you need to have knowledge and come across as knowledgeable.”
  • On getting promoted: “Be bold. Let people know what’s important to you and go after it.”

While the fight against gender inequality in technology remains ongoing, most women see a positive outcome as inevitable. It’s just a matter of persistence, patience, and boldness.

February 27, 2018  11:27 PM

Re-introducing Jakarta EE: Eclipse takes Java EE back to its roots

Darryl Taft

The Eclipse Foundation has chosen Jakarta EE as the new name for the technology formerly known as Java EE.

Last September, the Eclipse Foundation announced that Oracle would hand over the Java Platform, Enterprise Edition (Java EE), to Eclipse to foster industrywide collaboration and advance the technology. Since then, Eclipse has established the top-level Eclipse Enterprise for Java (EE4J) project to oversee the future of enterprise Java at the Eclipse Foundation. EE4J is a solid and even semi-cool name for that project, but it was never meant to replace the Java EE brand.

Oracle’s Java trademark

However, the new name for the brand – or for the top-level project, for that matter – could not be Java EE, or even start with Java. Oracle’s trademark usage guidelines specify that Oracle is the only entity that can use Java at the beginning of a name, so the phrase “for Java” was a requirement from the start, wrote Mike Milinkovich, executive director of the Eclipse Foundation, in a September 2017 blog post.

After a series of trademark reviews to secure a proper name that passed legal and trademark searches, Eclipse came up with a few possible names. Jakarta EE won 64.4 percent of the Java community’s vote, while Enterprise Profile captured 35.4 percent.

“I am personally very happy that the community selected Jakarta EE,” Milinkovich said, happy because Jakarta EE is a simple name with potential for future technology, and a nod to the past. “This is just one small step in the overall migration, but the community reaction has been universally positive.”

Alternatives, including Enterprise Profile, were deemed too boring, and the word “enterprise” is viewed by many to mean stodgy and unapproachable to younger developers.

“The Jakarta EE name is a reasonable compromise, considering the objectives of the community and the objections that Oracle has raised,” said Cameron Purdy, CEO of a stealth mode startup and former vice president of development at Oracle.

Apache Jakarta project

Other Java thought leaders said the new name, which comes from an old Apache Software Foundation project, might actually be better for enterprise Java.

Jakarta EE is named after Apache Jakarta, which was founded in 1999 and retired in 2011. “Those were roughly the dates when Java EE mattered,” said Rod Johnson, CEO of Atomist and creator of the Spring Framework. “The Java EE brand is tired now, so while a new name will have less recognition, it might also escape some negative associations.”

“It’s also a nudge to the ‘JEE’ shortened form, and taps into Jakarta’s reputation in the Java ecosystem,” said Martijn Verburg, co-founder of the London Java Community and CEO of jClarity, a London-based provider of software that helps developers solve Java performance problems with machine learning.

The renaming of Java EE seems to fit the ongoing saga of Java, said Sacha Labourey, CEO of CloudBees and former CTO of JBoss. That product’s original name was “EJBoss: Enterprise Java Beans Open Source Server,” but after a cease-and-desist letter from Sun Microsystems’ lawyers over the use of ‘EJB,’ JBoss founder and CEO Marc Fleury pragmatically dropped the E.

Reminiscing, Labourey said the Java community has had to “learn how to dance with its trademark and community process [Java Community Process (JCP)], so what else could we expect?” He noted that the JavaPolis conference had to be renamed to Devoxx because of Java trademark issues. “We would have been so disappointed not to have one more chapter to add to the Java trademark book,” he said.

February 21, 2018  6:36 PM

Clear software development governance needed in this polyglot world

Cameron McKenzie

New architectures composed of language-agnostic software containers have made polyglot programming a reality. But out of this newfound freedom, chaos can ensue if clear software development governance policies do not exist that describe when, where and why certain programming languages can be used, and when they cannot.

When advocates talk about the various things organizations should be doing to prepare for the move toward modern platforms like Amazon and OpenShift, the increased use of serverless systems and all of the headaches that go along with DevOps adoption, a topic that far too often gets overlooked is software development governance. Before discussing technology adoption or how to continuously deploy, organizations should address how they are going to manage these new software development stacks and deployment targets.

“If you look at all of the cloud platforms, you know, Cloud Foundry, Amazon, Google, Azure, OpenShift, they’re polyglot,” said Red Hat product manager Rich Sharples. “They support multiple languages.” And that’s great for organizations that want to put together a tapestry of best-of-breed applications, where each one is built with the language that best suits the application’s purpose. But best-of-breed environments can be incredibly difficult to manage over the long term, especially when an application written in a language that has fallen out of favor needs to be fixed, enhanced or just generally maintained.

Flexible software development governance

How can software development governance address the problems a polyglot programming shop might encounter? One way is to simply make the organization monoglot and settle on a highly proven and capable programming language like Java. “Java is clearly the best language for running large, complicated, transactional applications in the traditional, long-lived server model,” said Sharples. “It is the active standard for enterprises building those kind of applications.”

But an inflexible and rigid governance model is exactly what caused the colonies to revolt and create a union of states. While settling on Java as a fundamental standard is a good start, a software development governance model should also allow alternate languages to be used when certain well-defined conditions are met. For example, an organization might specify Scala as the language of choice when it is understood that an application has an exceptional requirement for parallelism or list-processing speed. Similarly, Kotlin might be agreed upon for the development of mobile applications that require support for the latest Java version.
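A governance policy of that shape is essentially a decision table: a default language plus named exceptions. The sketch below encodes that idea in a few lines of Java; the requirement categories and language choices are hypothetical examples, not a recommendation for any particular organization.

```java
import java.util.Map;

// Sketch of a governance decision table: Java is the default,
// with pre-agreed exceptions for specific requirements.
// The category names here are hypothetical.
public class LanguagePolicy {

    private static final Map<String, String> EXCEPTIONS = Map.of(
        "heavy-parallelism", "Scala",  // exceptional parallelism / list-processing needs
        "mobile",            "Kotlin"  // mobile apps needing current Java-language features
    );

    // Any requirement without a documented exception falls back to the standard.
    public static String approvedLanguage(String requirement) {
        return EXCEPTIONS.getOrDefault(requirement, "Java");
    }

    public static void main(String[] args) {
        System.out.println(approvedLanguage("crud-service")); // Java
        System.out.println(approvedLanguage("mobile"));       // Kotlin
    }
}
```

The value of writing the policy down this explicitly is that the exceptions are visible and finite, which is exactly what keeps a polyglot shop from drifting into every-team-picks-its-own chaos.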

It’s understood that Java might not always be the right language for every situation, especially in new arenas where containers and microservices dominate. “Java must prove itself when it comes to building smaller microservices on a small stack. It’s got to prove itself in serverless, where the environment is even more dense and constrained. And it’s got to compare with languages like Go, which is super lightweight.”

Containing the polyglot chaos

Ask a group of programmers to choose the best coding language and you’ll find it’s a great way to start a fight. To ensure peace and harmony on the development team, a software development governance policy needs to be well defined so programmers know which languages are permissible, and under which conditions alternates can be used. Setting clear guidelines avoids the chaos that can ensue when every program is written in a different language. Furthermore, clear expectations in a software development governance model ensure egos don’t get bruised when a developer’s choice of programming options is limited to those the organization has standardized upon.

In this new age of polyglot programming, DevOps adoption and serverless systems, the potential for chaos is a real threat to the wellness of the IT department. Software developers with a host of different languages from which to choose are understandably excited about developing microservices and deploying them into language-agnostic, container-based architectures. But for the long-term health of the IT department, software development governance models should never be agnostic about the languages they permit.



January 30, 2018  11:21 PM

Developers, learn from the iPhone battery glitch

Mark Spritzler

Apple has been pushing older iPhone and iPad users toward new devices by throttling performance as their batteries age. Countries that perceive this as unsavoury forced obsolescence have demanded Apple executives explain themselves. Class action lawsuits have been launched, and Apple finds its business practices coming under an unprecedented amount of scrutiny. Apple has always been a lightning rod for both passion and angst, and there have no doubt been times in the past when Apple’s business practices were worthy of the public lashings it has taken. But when it comes to the controversy over the legacy iPhone battery glitch, the backlash and public derision the company has suffered is not warranted. It is, however, an important situation for developers to keep in mind.

The new iPhone battery update for legacy phones is actually a great feature. It’s a feature we have not only on phones, but also on our desktop and laptop computers. On desktops it’s simply called Power Save mode: the feature that reduces power consumption to increase the amount of time your battery lasts on a single charge. Windows has it, Linux has it and Mac OS has it. Android phones have it too.

Will it slow down your computer? Yes, it will. Does it give you less functionality than when you are not in Power Save mode? Yes, it does. But it is a powerful feature to help in situations where the battery is about to die.

So how is this new iPhone battery “glitch” any different from the corresponding one that Linux and Windows desktop users already enjoy?

I know what you are thinking: Power Save mode is optional on the desktop, it can be turned on and off, and it is just about making a single charge last longer, not about the overall life of the battery. And it is only on rare occasions that I would ever use it. Power Save mode is indeed just that. However, what if you wanted to make your battery last longer over many charges? Wouldn’t that be a great feature to have too? It is simply an extension of the same design model.

Let us say you have a battery that lasts two years and allows 500 charges. Batteries have different ranges, but let’s assume they are all the same for this discussion.

If you have an Android phone and never use Power Save mode, at the end of two years, your phone battery dies. You now have to go get a new phone.

If you have an iPhone and never use Power Save mode, and Apple did not code special protection for the battery, at the end of two years you too will have a dead phone and battery, which you now have to replace.

Now, let’s take the special Power Save mode Apple has developed. As the battery gets closer to its two-year end of life, Apple puts it into this special Power Save mode so that the battery, and therefore the phone, lasts longer than two years. You are getting a better, longer-lasting phone and reducing any planned obsolescence. How is that a bad thing? It’s really not an iPhone battery glitch.
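The scenario above can be captured in a toy model. All the numbers below are illustrative assumptions, not Apple’s actual battery figures; the point is only that stretching each charge extends the days of life left in a fixed budget of charges.

```java
// Toy model of battery aging with and without a throttling mode.
// All numbers are illustrative assumptions, not Apple's actual figures.
public class BatteryModel {

    static final double NORMAL_DRAIN = 1.0;    // full charge cycles used per day
    static final double THROTTLED_DRAIN = 0.8; // throttling stretches each charge

    /** Days of use left, given the charges remaining in the battery's budget. */
    static int daysOfLife(int chargesLeft, boolean throttled) {
        double cyclesPerDay = throttled ? THROTTLED_DRAIN : NORMAL_DRAIN;
        return (int) (chargesLeft / cyclesPerDay);
    }

    public static void main(String[] args) {
        // With 100 of the rated 500 charges left near end of life:
        System.out.println("Without throttling: " + daysOfLife(100, false) + " days");
        System.out.println("With throttling:    " + daysOfLife(100, true) + " days");
    }
}
```

Under these made-up numbers, throttling turns the last 100 charges into 125 days of use instead of 100, at the cost of some speed.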

It is a bad thing in one way and only one way: performance. That is the trade-off Apple chose to make here. We give up some speed near the two-year mark and make the phone last longer. Is it a bad thing that the speed is slower? Is it a great thing that the phone actually lasts longer?

It all depends on your perspective. Extend the battery life at the expense of performance near the end of life of the battery, or keep the battery life as is and cause earlier upgrades — which would you choose? I think that determines what your perspective will be towards Apple.

As software developers, we make these trade-off choices every day. For instance, I am always asking questions like: do I write one very complex SQL string with many joins that grabs the data and is extremely fast, or do I use a couple of simple queries to gather some Java entity objects that are slower to process but much easier to understand, maintain and write? This happens a lot when writing reports in Jasper. I could write a 300-500 line SQL query inside the Jasper report file, or write 20 lines of code to gather a list of entities I can pass to the report.
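The “simple queries plus Java” side of that trade-off can be sketched in a few lines. The Order entity and the sample data here are invented for this example; the point is that the totals a long SQL query with joins and a GROUP BY would compute can also be computed with a readable stream over plain entity objects, at the cost of some speed.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch of the "simple queries plus Java" side of the trade-off.
// The Order entity and the sample data are invented for this example.
public class ReportTradeOff {

    public record Order(String customer, double amount) {}

    // What a complex GROUP BY query would return, computed instead over
    // entity objects already fetched by a couple of simple queries.
    // Slower than pushing the work into SQL, but easy to read and maintain.
    public static Map<String, Double> totalsByCustomer(List<Order> orders) {
        return orders.stream().collect(
                Collectors.groupingBy(Order::customer,
                        Collectors.summingDouble(Order::amount)));
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
                new Order("acme", 100.0),
                new Order("acme", 50.0),
                new Order("globex", 75.0));
        System.out.println(totalsByCustomer(orders));
    }
}
```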

From my perspective, I would have chosen to make the batteries last longer and therefore allow users to use their devices for longer. Performance is also very subjective; what appears slow to one person might appear very fast to someone else.

But you can’t say that extending the battery life is a form of planned obsolescence or an iPhone battery glitch. And in recent news, Apple has said that in an upcoming release of iOS, the throttling will become an option that users can turn on and off, making it exactly like Power Save mode on computers.

January 23, 2018  9:00 PM

Choosing the right container orchestration tool for your project

Daisy.McCarty Profile: Daisy.McCarty

The use of software containers like Docker has been one of the biggest industry trends of the past decade. Hitherto, the focus has been to educate the development community on the capabilities of container technology and how enterprises might use container orchestration tools effectively. But it would appear that a tipping point in container awareness has been reached, as familiarity has increased to the point where organizations have moved beyond simply entertaining the thought of bringing containers into the data center and on to actual adoption. But before companies deploy software into containers, the question remains of which software container, and which complementary set of container orchestration tools, an organization should adopt. Options such as Swarm, Cloud Foundry and Mesos make strong arguments for themselves, so the container technology landscape is a difficult one to navigate.

Container orchestration with Kubernetes

Of course, Craig McLuckie, CEO of Heptio, is confident in the horse his organization is betting on to create cloud-native applications. “Kubernetes over the last three and a half years has emerged as the standard.” Mark Cavage, VP of Product Development at Oracle, also gives Kubernetes the thumbs up. “It’s the right open source option that abstracts away clouds and infrastructure, giving you a distributed substrate to help you build resilient microservices.”

Kubernetes has been compared favorably with other container orchestration solutions before, with one of its big selling points being the vendor-agnostic nature of its open source availability. McLuckie points out the lack of vendor lock-in as an appealing benefit. Other experts have praised Kubernetes for working well with Docker while requiring only a one-tier architecture. Because of its flexibility, developers who use Kubernetes may be less likely to find themselves writing applications to match the expectations of a vendor-driven container orchestration platform, rather than writing applications in the way that makes the most sense for what their organization hopes to accomplish.

Simplify operations

McLuckie speaks highly of Kubernetes’ ability to simplify operations with containers that act as hermetically sealed, highly predictable and portable units of deployment. But deployment is just where the real work begins. Keeping the containers running and automating a container-based architecture without a hiccup is where Kubernetes shines. “By using a dynamic orchestrator, you can rely on an intelligent system to deal with all of those operational mechanics like scaling, monitoring, and reasoning about your application.”

Increased uptime and resiliency are obvious benefits of having a well-orchestrated container system. And since the self-contained units run the same way on a laptop as they do in the cloud, they can make development and testing easier for transitioning DevOps teams. End users also enjoy the perks of Kubernetes without realizing it, since the orchestration tooling can perform automatic rollout and rollback of services live without impacting traffic, even in the event of a failure of the new version.

Container orchestration and middleware

One interesting point McLuckie makes pertains to how middleware is being “teased apart” as a result of containerization. “Systems like Kubernetes are emerging as a standard way to run a lot of the underlying distributed system components, effectively stitching together a large number of machines that can operate as a single, logical, ideal computing fabric.” As a result, “A lot of the other functionality is getting pushed up into application level libraries that provide much of the immediate integration you need to run those applications efficiently.”

What might the outcome of this fragmentation of middleware be? According to McLuckie, it could open Java apps to a polyglot world by making it simpler to peacefully coexist with other languages. In addition, it would make dependencies easier to manage, even supporting the ability to run something like Cassandra DB on the same core infrastructure and underlying orchestrator as the Java app that relies on that database.

Cutting down on system complexity could be what makes Kubernetes and containerization itself attractive to larger organizations in the long run. “This could address enterprise concerns for governance, risk management, and compliance with a single, consistent, underlying framework.” And enterprises that prefer on-premise or hybrid clouds could use this approach just as readily as those that are fully cloud-reliant.

With container orchestration tools becoming more sophisticated with time, don’t expect the hype around container technology to die down in 2018. It will only intensify, with the focus moving away from education and awareness and instead towards adoption and implementation.

January 12, 2018  5:36 PM

Four wise pieces of advice for women in technology

Daisy.McCarty Profile: Daisy.McCarty

One of my favorite things about interviewing women in technology has been hearing all their helpful tips and insights. Many of these women spent decades in the tech world, moved up the career ladder, innovated in their areas of expertise, started new businesses, and created more opportunities for the next generation of women. Here are some of the takeaways that resonated.

Tip #1: You can end up in tech from just about anywhere

Tanis Cornell, principal of TJC Consulting, offered this insight to young women and teens, “You don’t have to be an engineer to have a job in technology. What I discovered in my own career was an affinity for absorbing a technical concept, grasping the advantages and disadvantages, and becoming fairly technically adept quickly.” Her educational background in communications ended up translating well into the tech sector, where the ability to speak about complex ideas in terms business decision-makers can understand is a highly valued skill.

Jen Voecks, a former event planner and current CEO of the bride-to-vendor matchmaking service Praulia, found that jumping into technology gave her an advantage as a brand new startup owner. “I learned the ropes one step at a time, front end and back end,” she said. “For a while I was stuck in a learning curve, wearing all the hats and trying to build a product while running a business. Once I got into the ‘developer brain’ mode, I realized that it’s like a puzzle with a lot of pieces, and things got a lot easier. But I also learned to appreciate what developers do and how to communicate with devs and engineers.” Now she has enough experience to make smart decisions about hiring specialists to handle the various aspects of her technology needs, and she can focus her own efforts on strategic growth. Her advice: “Just stay at it. Most women I’ve talked with are experiencing hurdles. We should stick together and push through. We are powerful.”

Tip #2: Communicate calmly and clearly

CeCe Morken, EVP and general manager of the ProConnect Group at Intuit, spoke highly of Raji Arasu, who is the senior vice president, platform and core services CTO. “She is an excellent leader, and she leads a lot of men,” she said. “She’s always very calm and never changes her demeanor. Raji takes in information and handles it elegantly even in high stakes situations.” Morken also pointed out that Arasu speaks in a language everyone can understand, translating concepts from software architects to business leaders in a way that makes sense.

Charlene Schwindt, a software business unit manager at Hilti, agreed that being able to customize communication for a given audience is critical for success. “You really need to be able to transition your communication style and tailor it to the level of understanding of your audience,” she explained. “When I’m talking with developers, I can be highly technical. With customer support, I talk about things at a higher, summary level. Business leaders want a conversation that’s results oriented, but I might drill down to more detail if questions are asked.”

Of course, being confident as a communicator doesn’t always mean you have to be right about everything. In fact, it’s completely OK to pivot as you grow. Mary McNeely, Oracle database expert and owner of McNeely Technology Solutions, shared this sage advice. “Don’t be afraid to change your mind. You have to decide something in order to move forward. But once you have more information and time to think, anything could change. You might reach a different conclusion. It’s OK to reconsider your decisions, perspectives, and opinions.” And when you do change your mind, remember to let people know!

Tip #3: Believe in yourself. Seriously.

What would it be like to grow up in a culture that simply accepts that women are awesome? Candy Bernhardt, head of design and UX for Capital One’s auto lending division, recounted her experience of being raised by her grandmother who is from the Philippines. “It’s a very matriarchal society, so I didn’t know any better. I just thought women ruled the world and our opinions mattered. I challenged authority because I thought it was my right.” That pluck and boldness served her well when she landed in a career she thoroughly enjoys.

For those who grew up a little less sure of themselves, it’s not too late to gain the confidence to grasp the brass ring. Meltem Ballan, a data scientist with a PhD in complex systems and brain sciences, encouraged this can-do attitude as well. “Get out in front and show that a female can do it,” she said. “Ask for mentorship and stand for your own rights. Women must support women.” For her part, she found that giving a presentation on short notice, even though she didn’t feel completely ready, was a turning point in her speaking skills. “It’s important to go out and have that moment where we leave our comfort zone. Then our comfort zone gets larger.”

Tip #4: Never stop improving yourself and helping others grow

Julie Hamrick, COO and founder of Ignite Sales, spoke about the value of continuous improvement for success as an entrepreneur. “The harder I work, the luckier I get,” she offered. “At first I had to work hard to make my product good. Now, I am still thinking every day about how to deliver even better results for my customers.”

What about lending a helping hand to other women? How can managers and leaders do better in this area? Morken offered this advice for those who want to be effective mentors: “Focus on developing the person first and then the goals.” It’s important to understand what energizes someone, because that is what fuels growth.

My own final tip is this: Grow your network starting today. There’s a wealth of information and insight available from the women in tech all around you, and they are happy to share. Start leveraging these resources to take you farther than you ever thought possible!

January 6, 2018  12:29 AM

Requirements management and software integration: Why one can’t live without the other

RebeccaDobbin Profile: RebeccaDobbin

With the complexity of products being built today, no one person can understand all the pieces. Yet everyone involved is still responsible for ensuring their work is compatible with that of their co-workers and aligns with the company’s strategy. The only way to keep everyone moving toward the same goal is to have a shared understanding of what’s being built and why. This is where requirements management and software integration come in.

Requirements define what a product should be. Equally important, though, they’re the single shared communication bridge between all disciplines. Everything involved in the pursuit of product development can be tied back to the definition of what you’re building; otherwise there’s misalignment somewhere. The goals of the market-facing teams translate into product requirements, making abstract desires achievable. The efforts of design, development, and testing teams are driven by requirements. Product documentation, software integration, marketing, and specs all tie back to requirements at the end of a project, all derived from the same set of goals.

However, the relationship between requirements and everyone else’s work is not always easily visible. There might be degrees of separation, such as many levels of decomposition from market goals down to tasks. Information may be kept in multiple task-specific tools. While a shared understanding — in the form of requirements — is essential for alignment, it’s not sufficient in many cases. Companies with complex, multi-level, cross-tool product data run the risk of creating silos and disconnects from the original product goal.

This is why software integration and requirements management work hand-in-hand to realize the full potential of collaboration. When managed effectively, requirements are dynamic. Thoughtful integrations facilitate timely communication between those defining the product goals and those building the product, so those changes on either side are visible and part of continuous decision making.

Aligning teams with value stream networks

While it’s tempting to think of product delivery as a linear process of sequential steps, in reality it’s much more like a chaotic, organic network. Activities occur in tandem, spanning many parts of the organization. Requirements can change after implementation has begun, and modifications made to one part of the process can have far-reaching impacts on several different teams. It should be no surprise, then, that the root cause of failure in product development can often be traced to an under-connected value stream network.

A value stream is a notion borrowed from lean manufacturing; it’s the sequence of activities an organization undertakes to deliver a customer request, focusing on both the production flow from raw material to end-product, and on the design flow from concept to realization. Counter to our common understanding, product management should be viewed as a value stream network, rather than simply as a value stream, as the process truly consists of many activities that occur in conjunction, overlap, and influence one another. In order to be truly successful, your value stream network must have a high level of connectivity between each node (or team) within your organization.
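The idea of a well-connected value stream network can be sketched as a simple graph check: model teams as nodes and tool integrations as edges, then verify that every team is reachable from the team that owns the requirements. The team names below are illustrative.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy check of the "connected value stream network" idea: teams are nodes,
// tool integrations are edges, and we verify every team is reachable from
// the team that owns the requirements. Team names are illustrative.
public class NetworkConnectivity {

    public static boolean allConnected(Map<String, List<String>> integrations, String start) {
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(start));
        while (!queue.isEmpty()) {
            String team = queue.poll();
            if (seen.add(team)) { // first visit: follow this team's integrations
                queue.addAll(integrations.getOrDefault(team, List.of()));
            }
        }
        return seen.containsAll(integrations.keySet());
    }

    public static void main(String[] args) {
        Map<String, List<String>> integrations = Map.of(
                "product", List.of("dev"),
                "dev", List.of("qa"),
                "qa", List.of(),
                "support", List.of()); // no integration reaches support
        System.out.println(allConnected(integrations, "product")); // prints false
    }
}
```

In this sketch the support team is a disconnected node, which is exactly the kind of silo the article warns about.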

Given that requirements are central to driving actions and decisions throughout the product lifecycle, key challenges arise when they’re not integrated and easily accessible to all teams throughout the organization. Despite relying on separate, purpose-driven tools, each team must be able to access the same information in order to work cohesively. And that information isn’t static. Because of the dynamic nature of requirements, it’s essential that each team be able to review and absorb any modifications made to the requirement in real time. That’s where integration comes in.

While each organization is unique, our experience has allowed us to uncover several common software integration patterns. In many cases, these patterns can build on one another to create a cohesive network of connection throughout an organization.

When getting started, we recommend that organizations first identify the scenarios that are causing the most immediate pain, and enable the associated integration patterns to ameliorate that pain. Over time, organizations can adopt and deploy more and more of these integration patterns, to work toward a wholly unified and cohesive value stream network.

Requirements management and Agile planning alignment

Often, business analysts and product managers create and manage requirements in a requirements management tool, such as Jama Software or IBM DOORS. If their organization uses agile methods, the development team may expect to see those requirements within an agile planning tool, such as JIRA or CA Agile Planning, where they can manage them and further break them down into user stories for execution.

In this pattern, requirements flow from the requirements tool to the agile planning tool as epics or features. They can then be broken down into user stories or sub-tasks within the agile planning tool. If desired, teams can maintain the relationship between the epic and its subtasks when flowing that data back to the requirements tool.
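A minimal sketch of this flow pattern, with illustrative types rather than any vendor’s actual API: a requirement becomes an epic in the agile planning tool, and each story keeps a link back to its parent epic so the relationship can flow back to the requirements tool.

```java
import java.util.List;

// Minimal sketch of the pattern above, with illustrative types rather than
// any vendor's actual API: a requirement flows to the agile planning tool
// as an epic, is broken into stories, and each story keeps its parent link.
public class RequirementFlow {

    public record Requirement(String id, String title) {}
    public record Story(String title, String epicId) {}
    public record Epic(String id, String title, List<Story> stories) {}

    public static Epic toEpic(Requirement req, List<String> storyTitles) {
        List<Story> stories = storyTitles.stream()
                .map(title -> new Story(title, req.id())) // keep the parent link
                .toList();
        return new Epic(req.id(), req.title(), stories);
    }

    public static void main(String[] args) {
        Requirement req = new Requirement("REQ-42", "Export report as PDF");
        Epic epic = toEpic(req, List.of("Render PDF", "Add export button"));
        System.out.println(epic.stories().size() + " stories under " + epic.id());
    }
}
```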


Figure 1: Aligning requirements with agile planning.

Software integration and test planning alignment

In order to create their test plans, QA teams must have access to the requirements for each feature. The problem is that requirements and user stories are usually created by business analysts and product managers in a tool such as Jama or IBM DOORS, while QA teams create their test plans, test cases, and test scripts in another set of tools, such as HPE QC or Tricentis Tosca. By flowing requirements from the product team’s requirements tool to the QA team’s testing tool as test cases, organizations are able to keep these two teams aligned, while allowing each to use their own tool of choice. This eliminates the risk of miscommunication or bottlenecks as teams are able to seamlessly flow information on acceptance criteria, test status, and more between each tool.


Figure 2: Test planning alignment.

Tracing requirements

Requirements, epics, and user stories span multiple disciplines and tools in the product development and delivery process. No matter where they’re created and managed, there are many distinct stakeholders who may need access to them. For this reason, “requirements traceability” can be viewed as a composite pattern, composed of several smaller integration patterns.

For example, a project manager may manage project deliverables within a PPM tool, which then flow to a business analyst’s requirements management tool. Once the business analyst has determined the scope and details of the deliverable, that information flows to the developer’s tool as an epic, which is further broken down into user stories and tasks. That epic then flows to the QA team’s testing tool to ensure that the testers understand the requirements that they’re testing. In this way, four separate teams are able to share information and collaborate, all within their own purpose-built tools and practices. By making use of each team’s existing processes and practices, they’re able to eliminate wasted ramp-up time needed to gain access and proficiency in each tool, while allowing each team to use the systems that best facilitate their unique goals.


Figure 3: The value stream network in action.

While there’s always been a need for integration, managing complex, dynamic requirements makes it all the more necessary. In order to demonstrate product compliance and auditability, practitioners must relate their features, test cases, and other work items back to the original compliance requirements.

Without integration, practitioners must spend valuable time on status calls, communication via e-mail, and on double-entry between systems, leaving them no time to do the work they were actually hired to do: the design, development, and quality assurance tasks needed to create and ship their product.

Harnessing requirements management and software integration

Integration serves as the glue between these separate disciplines, people, and tools. And when the integrations are domain-aware, they can do seemingly magical things like translate between tools that have conceptual and technical mismatches. Smart integrations are able to translate different work item types and field values between tools in order to facilitate communication between teams, regardless of differences in tool architecture.
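As a rough sketch of the kind of translation a domain-aware integration performs, the snippet below maps work item types and field values between two tools with mismatched schemas. The mappings themselves are invented for illustration; a real integration would load them from configuration.

```java
import java.util.Map;

// Sketch of the field-value translation a domain-aware integration performs
// between tools with mismatched schemas. The mappings are invented for
// illustration; a real integration would load them from configuration.
public class WorkItemTranslator {

    private static final Map<String, String> TYPE_MAP =
            Map.of("Defect", "Bug", "Enhancement", "Story");
    private static final Map<String, String> PRIORITY_MAP =
            Map.of("1 - Critical", "Highest", "2 - High", "High");

    // Unknown values pass through unchanged so nothing is silently dropped.
    public static String translateType(String sourceType) {
        return TYPE_MAP.getOrDefault(sourceType, sourceType);
    }

    public static String translatePriority(String sourcePriority) {
        return PRIORITY_MAP.getOrDefault(sourcePriority, sourcePriority);
    }

    public static void main(String[] args) {
        System.out.println(translateType("Defect") + " / " + translatePriority("1 - Critical"));
    }
}
```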

Recognizing the inherent network that connects teams in an organization is the first step to enhancing product development. Once you’re able to identify the key areas where teams must share information, you’ll be able to navigate areas in which communication breakdowns are occurring. By implementing key software integration patterns, you’ll increase connectivity between each node of your organizational network. And as connectivity increases and, with it, each team’s ability to access key requirements, you’ll see enhanced success within your organization.

Co-author Rebecca Dobbin is a Product Analyst with Tasktop Technologies (@rebecca__dobbin)
Co-author Robin Calhoun is a Product Manager for Jama Software

December 13, 2017  12:04 AM

Is there a hidden threat embedded in the Management Engine of your Intel chip?

George Lawton Profile: George Lawton

A couple of years ago, Intel invited me to a press luncheon to talk about how great their new chips were. They had new chips that were faster and used less power, and they were selling like hot cakes. The food was good and the new machines were smaller and ran a few minutes longer on batteries than last year’s models. Almost in passing I heard one of their product managers describing a secret operating system buried on enterprise computers, called the Management Engine (ME). They called it a feature, and all I could see was a hidden threat.

They said it only ran on “enterprise computers,” and I remember sleeping a little better at night imagining that this little gremlin did not run inside my consumer laptop at the time. I just found out they have a new test for this hidden threat that can determine if your computer is infested with this incurable disease. Yep, I have it. You probably have it too, along with most of the cloud servers keeping trillions of dollars of enterprise apps secure.

They have also released a so-called cure for the symptoms, which is thus far only available from Lenovo. But it is not really a cure in the way an antibiotic eradicates an infection. It’s more like those $50,000-a-year cocktails that manage AIDS but leave their hosts at risk of communicating it to others. The fundamental problem is that Intel has thus far not shared much about how this hidden threat works, or whether it can in fact be eradicated. They have just patched some of the vulnerabilities, which thus far are probably not a great danger to cloud apps, since someone must physically insert a USB drive to compromise them.

All systems are vulnerable

The fundamental problem in other words is not the news that someone found a vulnerability and patched it. The problem is that Intel has relied on a very flawed theory that something running on virtually every enterprise and cloud server out there is protected because no one outside of Intel knows how it works. This was the same theory that the utility industry relied upon until the US and Israel figured out how Stuxnet could be used to take out the Iranian nuclear program and perhaps an Iranian power plant. But once this attack was shared, all the power infrastructure in the world became vulnerable to Stuxnet’s progeny.

I am sure Intel’s greatest minds did a great job of identifying and mitigating every vulnerability they could dream up at the time. So did the folks that developed SSL, and none of the craftiest minds in the security industry recognized that hidden threat until after the code had been in the public domain for two years.

One of the key developments over the last couple of years has been a move towards DevSecOps, which assumes that all code has vulnerabilities; it’s just that no one has figured out how to exploit them at the time of deployment. Therefore, a mechanism must be in place to quickly and automatically find and update these systems smoothly when a new patch is required. DevSecOps breaks down when it relies on third parties like Lenovo, Dell, and HP to tune the update to their particular configurations.

It’s not clear how bad this whole episode will end up being for Intel. Thus far, they have done a pretty good PR job of suggesting that these attacks requiring physical access are not a big deal. This whole thing might blow over by the time they release a new series of chips that leave the little demon out.

The keys to the hidden threat

But then again, the final impact of Intel’s foray into security by obscurity will have to get past the test of the NSA and Joe. The NSA, because it seems credible that Intel decided it was important to share such sensitive details with the agency to protect American cyber security. We all know that the NSA has the best resources and commitment to protecting these secrets from foreign states, angry contractors, and Wikileaks, so it obviously will never let the secret get out.

No, the real threat is probably someone like Joe. ME runs in a kind of always-on mode that allows it to communicate on a network even when the power is off, as long as the computer is plugged in. It is protected by an encryption key. I would like to imagine that the only key to all the Intel computers in the world is locked inside a secret vault, with laser beams protecting it from Mission Impossible-style attacks.

It would not be surprising if the reality were much more mundane. It’s probably on a little security token that Joe took home one day to debug a few components of the ME server. Joe is probably well meaning, but made a copy of this key one day when management was pushing him to meet an unrealistic software delivery target. Joe’s a good guy and would never do anything deliberately to hurt the company, much less all Intel users around the world.

Unfortunately for the rest of us, Joe has been trading Bitcoins lately. No one will come looking for the key to all the Intel computers when they penetrate his workstation trying to steal his Bitcoin wallet. But some nefarious hacker may see this discovery as a divine omen of his destiny to create a business around penetrating the most sensitive cloud servers in the world by exploiting this hidden threat. And maybe, just maybe, if Joe happens to be reading this, he’ll have the foresight to delete the keys before it’s too late.

November 28, 2017  2:20 AM

MVC 1.0: The perfect fit for microservice admin tools

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of the conversation TheServerSide’s Cameron McKenzie had with Ivar Grimstad about hot topics in the Java ecosystem, with an emphasis on MVC 1.0 and the new security specification, JSR-375.

Getting people talking about MVC 1.0 and JSR-375

Cameron McKenzie: TheServerSide was really lucky to catch up with Ivar Grimstad earlier this year. These days he’s evangelizing a couple of what I think are pretty important topics. One is the new MVC framework, and the other one is Java security.

The interesting thing, though, is that despite how important these specifications are, MVC and JSR-375 just don’t quite get the headlines like, say, microservices and containers do. So I wanted to know from Ivar, what are the big things that people need to know about the new MVC specification and JSR-375.

Ivar Grimstad: If I take MVC first, we had a lot of attention around that a couple of years ago when the spec was a part of the EE platform. And there was some noise about it when Oracle took it out. And then, happily, I was fortunate to be in the position that I could be the lead of that specification, so I took it over from Oracle and kept on working on it. I also brought on Christian Kaltepoth. Since we were the two most active members of that spec, we were the best guys to take it further.

And there has been a little bit of silence around MVC, and we don’t get much attention anymore. The community really wanted MVC when it started and then they kind of moved away towards microservices and containers.

So while we are kind of in the back stream of the cool technology, MVC is still something I think will be used. We get a lot of community responses when I tweet or blog or say anything about it. We have a lot of contributors on the mailing list, and it’s doing fine.

Cameron McKenzie: Now, one of the things about MVC 1.0 is the fact that it seems to work really well with microservices. And I can see it being used heavily to create UIs for container-based applications. Is that where you see the focus being?

Ivar Grimstad: I also think it’s going to be used a lot in more enterprise, in-house applications, but that’s not the sexy topic that attracts the audience at conferences.

MVC 1.0 and JAX-RS

Cameron McKenzie: So in your eyes, what is it that makes MVC 1.0 so special?

Ivar Grimstad: Well, the most important thing, the way I see it, is that it’s built on top of JAX-RS, so if you’re using JAX-RS to create your REST endpoints, the transition to also add some web interfaces to your applications becomes easy. Most REST applications also have some kind of admin tool going on along with it. With MVC 1.0 we can actually build on the exact same technology used by the REST application, because with MVC we just add some flavors to JAX-RS and then we’re good to go.
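To make that "just add some flavors to JAX-RS" point concrete, here is a minimal sketch of what an MVC 1.0 (JSR 371) controller can look like. The class name, path and view name are invented for illustration, and the code needs a Java EE container with an MVC implementation to actually run:

```java
import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// An ordinary JAX-RS resource turned into an MVC controller by @Controller.
// Instead of returning an entity, the method returns the name of a view.
@Controller
@Path("admin")
public class AdminController {

    @Inject
    private Models models; // CDI-injected map of values the view can read

    @GET
    public String dashboard() {
        models.put("title", "Admin dashboard"); // available to the template
        return "dashboard.jsp";                 // view resolved by the container
    }
}
```

The JAX-RS annotations (`@Path`, `@GET`) are exactly the ones a REST endpoint would use; only `@Controller` and the view handling are MVC-specific.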

Cameron McKenzie: So is MVC the new UI framework for container-based applications?

Ivar Grimstad: Definitely. I mean, if you’re creating a containerized service that also has some kind of UI to it, it makes sense to use MVC. If you have developers that are on the JAX-RS platform and know Java EE, and you’re building on that infrastructure, I see MVC as a very good fit there.

Cameron McKenzie: Now, you are also an expert on JSR-375, the new security API that’s going into Java EE. What can you tell us about that?

Ivar Grimstad: This is a brand new security API for Java EE 8.

I think it’s an important specification because it fills some of the gaps in previous versions. We introduce a common terminology, so we are all talking about the same thing when we discuss security concepts such as the authentication mechanism. We also add more application developer-managed support. So you can easily add security with annotations, and you don’t need to do any container- or vendor-specific configuration to get it up and running.
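As a rough sketch of that annotation-driven style, here is what a JSR-375 setup can look like: BASIC authentication declared with one annotation and backed by an identity store defined in application code. The realm, user and role names are made up for illustration, the hard-coded credential is deliberately toy-grade, and the class needs a Java EE 8 container to run:

```java
import java.util.Collections;

import javax.enterprise.context.ApplicationScoped;
import javax.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;
import javax.security.enterprise.credential.UsernamePasswordCredential;
import javax.security.enterprise.identitystore.CredentialValidationResult;
import javax.security.enterprise.identitystore.IdentityStore;

// One annotation declares HTTP BASIC authentication for the application;
// no container- or vendor-specific configuration files are involved.
@BasicAuthenticationMechanismDefinition(realmName = "adminRealm")
@ApplicationScoped
public class InMemoryIdentityStore implements IdentityStore {

    // Invoked by the container via IdentityStore's default validate(Credential).
    public CredentialValidationResult validate(UsernamePasswordCredential credential) {
        // A single hard-coded user, purely for illustration.
        if (credential.compareTo("admin", "secret")) {
            return new CredentialValidationResult("admin",
                    Collections.singleton("admin-role"));
        }
        return CredentialValidationResult.INVALID_RESULT;
    }
}
```

The point Grimstad makes is visible here: both the authentication mechanism and the user repository live inside the application itself.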

Standardized security with JSR-375

Cameron McKenzie: Now, when I read the JSR-375 spec, I kinda say to myself, you know, “Really? Have we not standardized a lot of this stuff already?” I guess a lot of the stuff like custom user registry APIs and how we connect to user repositories, stuff that’s been managed by the vendor in the past. So the developer really hasn’t had to think about it. But, yeah, I mean, do you not get that impression, “Jeez, how did we get to 2017 and not have this stuff standardized already?”

Ivar Grimstad: Yeah, that’s true. And we have the same feeling. But now it’s there, and that’s a good thing. And it’s definitely a good foundation to build upon.

Cameron McKenzie: So what is it about JSR-375, the Java security spec 1.0, that makes it so conducive to working with microservices?

Ivar Grimstad: You do the security in the application, so you don’t need to configure it from the outside. The security configuration is contained in your application.

Cameron McKenzie: So what are the big topics you see going forward into 2018?

Ivar Grimstad: Since I’m moving around in the Java EE world, I think that one of the main topics we are gonna discuss is the Java EE move to the Eclipse Foundation. And there’s also a lot of discussion already on Twitter about the naming, because they announced that the name will be Eclipse Enterprise for Java, and people of course have opinions about that. So I think that’s gonna be discussed a lot.

Java: A curse or a blessing

Cameron McKenzie: Now, here is a question I have been asking a number of people lately. It’s this: looking back over the past six or seven years, do you think being the steward of the Java platform has been a blessing or a curse for Oracle?

Ivar Grimstad: I think they are making big money on Java, so I think it’s been pretty good for them. So I don’t think it’s been a curse. I think the handling of EE 8 in 2016 was not good, and we saw the community react to that with the Java EE Guardians and MicroProfile, which grew out of that. But the turn they have now taken to open-source things, like open-sourcing NetBeans to Apache and Java EE to the Eclipse Foundation, and also open-sourcing more of the JDK tooling, is a step in the right direction. I think it’s gonna get a positive reception.

November 23, 2017  2:15 AM

The impact of Java SE 9 on operations and development teams

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Just prior to JavaOne, TheServerSide spoke with ZeroTurnaround’s Simon Maple about all of the things going on with Java SE 9 and the greater Java ecosystem. A couple of interesting articles resulted from the conversation, so we thought it might be worthwhile publishing the interview in its entirety.

Cameron McKenzie: There’s a million things going on in the world of Java these days. What are the topics you believe to be of the most importance when it comes to Java and Java SE 9?

Simon Maple: Well, let’s start with what’s happening in Java. There are so many interesting things happening in Java right now. Java EE is being pushed over to Eclipse, Java SE is going forward being driven first through OpenJDK, the cadence of Java SE releases is now every six months; there are some really, really interesting things happening.

If you look at Java SE 9, you’ll see there are some interesting things going on with the JDK. Obviously, it was delayed by an extra year, so it’s been three years in the making. But one of the things which JDK 9 delivers is the module system. For me, it’s something which developers aren’t really gonna get involved with too much. People who really want modularity are gonna be using OSGi or something similar. People who think, “Yeah, okay, it’s gonna be an okay idea,” aren’t necessarily champing at the bit to get it anyway. So I’m not sure people are going to be jumping at modularity.

It does have big benefits in terms of the future of Java: how we can reduce the footprint of Java, how we can develop modules that can be incubated, like HTTP/2, and things like that. So it provides a lot of promise, but it’s just not there yet. I think it’s gonna take a long time for the industry and the ecosystem to really embrace modules, because for all the frameworks, libraries, tools and vendors out there, it’s gonna take time to add support so that application developers can use these frameworks and tools alongside their everyday development. That said, we believe that the adoption of Java 9 is gonna be as big as any of the previous releases, really.

Who benefits from Java modularity?

Cameron McKenzie: Now, this year at JavaOne, project Jigsaw and modularity is a huge topic. But who benefits the most from modularity? Is this something that only the tool vendors are really gonna start using or is it something that the typical everyday enterprise software developer can start using and leverage in the code that they develop?

Simon Maple: I think it actually benefits a number of different people, but not everyone in a massive way. But across the board, when it helps a number of different groups, like operations, development and the business, it might be a good choice. Let’s take them one at a time. If you look at the development team, how it benefits them largely is this: if you have a large, distributed team where different developers write different components of a large application, it’s actually a great way to make sure that other teams using your components and your APIs are using them as you’d expect them to be used. That gives you much greater power to then make changes to your code, knowing you’re not gonna break anyone who uses your code. Because with modularity, you can effectively say, “These are the APIs I want to expose; everything else I wanna hide.”
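That expose-versus-hide decision is exactly what a Java 9 module descriptor expresses. Here is a sketch of a `module-info.java` for a hypothetical billing component (the module and package names are invented):

```java
// module-info.java -- module descriptor for a hypothetical billing component.
module com.example.billing {
    requires java.sql;                 // modules this one depends on

    exports com.example.billing.api;   // the only package other modules can use

    // com.example.billing.internal is deliberately not exported, so other
    // modules can neither compile against it nor, by default, reflect into it.
}
```

Code in other modules can call anything in `com.example.billing.api`, while the internal packages can be refactored freely without breaking consumers.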

So from a developer point of view, that’s actually really beneficial. You can also, as a developer, deliver your updates quicker, because in a typical Java application that doesn’t use modularity, you can’t just upgrade a single module at a time.

Let’s take Java as an example. We’ve seen in the past huge releases containing many, many things, and the reason they get pushed out a year, six months, two years, is largely because we have been waiting for different features. So Java 8 was delayed because of lambdas, and Java 9 was delayed because of Jigsaw. In fact, Java 8 was also largely delayed because of Jigsaw, which was later pulled from that release. So all the other benefits in the release, you can’t get hold of, because you’re constantly waiting for this one big drop. If you’re looking at something more modular, you could actually upgrade some modules without needing to upgrade the whole application at once.

So from an operations and a development point of view, and in fact from the business point of view as well, you’re much more reactive in terms of how quickly you can fix bugs and how quickly you can move in terms of your feature planning and things like that. So from that point of view, it’s very, very powerful for the business and very powerful for actually pushing your features to market. And that’s purely from the business point of view.

From the operations point of view, it’s actually quite nice. Well, it can be a pain and it can be good. It can be good because when you are dealing with individual modules, you’re far more isolated in terms of where your code is when your applications are modular. However, it can also be slightly more tricky because you have dependencies, so the actual deployment can be trickier.

So modularity is not a silver bullet for everyone, but different people will individually find different value in modules.

The timing of Java SE 9

Cameron McKenzie: Now why is it that all of these announcements that pertain to Java SE 9 and the Java platform came out just before JavaOne? Is that just coincidental or is this a matter of Oracle just getting better at doing PR before the big conference?

Simon Maple: So, let’s take each announcement, because I don’t think there’s any one reason why they came out when they did, other than maybe lining them up for JavaOne 2017. But I think each announcement has different drivers. You know, I’m not getting any information directly from Oracle, so I’m speculating. But in my opinion, let’s look at Java EE. Obviously, Oracle had the year of drought where they weren’t talking about the Java EE specs. Personally, I believe Oracle was pushed into a corner and had to say, “Hey, let’s progress Java EE 8 and 9.” If you actually look at what happened over the last year of Java EE in terms of delivering features to 8, a lot of it has been pushed over to 9, and personally I think moving Java EE to the Eclipse Foundation benefits both Oracle and Java EE, because it relieves Oracle of the burden that Java EE has placed upon them.

From that point of view, they don’t need to worry about it anymore. From the Java EE point of view and the community point of view, I think they’re gonna have a lot more ownership of Java EE, it now being part of the Eclipse Foundation, and it can now move at the speed at which the community wants to drive it. So I think it’s a move whereby everyone’s happy. Oracle doesn’t look like the bad guy anymore, they’re not going to hold it up, and the community can push it as far as they wanna push it. So it’s gonna be interesting to see over the next few years how much effort Oracle will put into supporting the projects in Eclipse: how many developers are they gonna provide to support each of the specs, not just delivering code but pushing the specifications forward? I think that’s really gonna show how much Oracle plans to invest in Java EE, but I think the move is beneficial.

In terms of them pushing the cadence to six months now, I think there are two reasons for that. The first reason is that everyone is getting a little bit fed up with Java releases slipping constantly. We’ve seen releases delayed three, four, five years. Obviously, the five years…was it five or six years? That was really because the stuff in Oracle moved. But since Java 9 is now being pushed out and we have a module system, we can develop much faster and we can provide smaller features quicker. So it does make sense for Java, now it’s modularized, to make use of that and to say, “Right, now we’re gonna be pushing out different pieces of different modules when they’re ready. So every six months, whatever’s ready to be pushed out, let’s make it available.” I think that’s really, really good for Java.

For the ecosystem, it’s gonna be hard work. I think it’s probably gonna be harder work for the ecosystem than it is for Oracle, because Oracle just needs to continue developing, and when a feature is ready to go into the main branch, they push it into the main branch and we’re good to go. But for the ecosystem, if we look at just next year, we’ve got Java 9 for the ecosystem to support, and then we’re gonna have 18.3 and 18.9 to support as well. Java 9 is gonna be a well-supported release. Java 18.9 is gonna be a long-term support release. Java 8 is now gonna be commercially supported till 2025. So there are a large number of releases that tools are gonna have to support, and it’s no good for tools to say, “We’re only gonna support the long-term support releases,” because they’re gonna lose customers.

So they’re gonna have to support every single release of Java, frameworks the same, application servers are more likely gonna support the main long-term support versions. Libraries are gonna have to support all the versions of Java. So for the ecosystem, it’s gonna be a lot more work, a lot more testing. It’s gonna be interesting to see how they keep up particularly for those libraries and frameworks which don’t have large communities and large numbers of committers to do this work, so that’s gonna be extremely interesting. And the third thing was Oracle pushing or putting OpenJDK first. They’re obviously gonna still have their own commercial kind of support branch, which is fine.

What’s new at ZeroTurnaround?

Cameron McKenzie: Now I notice ZeroTurnaround has a booth on the exhibitors’ floor. What’s going on with ZeroTurnaround at JavaOne? What are the big things that you guys are working on? Are there any big product announcements and what are you guys doing to draw people into your booth in the exhibition hall?

Simon Maple: We are obviously working very, very hard to support Java 9 in JRebel, which is gonna be a big, big task because JRebel is so deeply connected to the low-level parts of the JVM. We’re looking to release support for that very soon after the Java 9 release. Yes, we are also making some big moves in and around the developer productivity…I’m sorry, the developer performance market. We already have XRebel, as you know. So we’re gonna be making some announcements over the next week or so, and it’s gonna be extremely interesting because we’re gonna be very disruptive in the performance management space. That’s gonna be extremely interesting to me over the next few weeks as well.

Cameron McKenzie: So to get more insights from Mr. Maple, you can always follow him on Twitter @sjmaple. And for that matter if you wanna learn more about some product announcements that are coming out from ZeroTurnaround, you might wanna follow them on Twitter as well, @zeroturnaround.

You can follow Cameron McKenzie on Twitter: @cameronmcnz

