Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that in this new age of serverless computing and cloud and containers, and a world where nobody actually buys hardware anymore, that it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and to install it locally. Well, anyways, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company’s internal data center? It’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization is all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare-iron or bare-metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy, you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. The typical pattern we see for Cassandra on Amazon, with or without us, is that people move to the latest and greatest instances Amazon offers. I think the i3 class of Amazon instances right now are the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not by the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones running today, whether on bare metal or powering most Amazon servers, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes two or three times faster even, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do this, and we just need to bring the code to the right form; the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code and, when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.
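
To make that point concrete, here is a hedged sketch of the kind of loop Tene is describing. It is ordinary Java source; the class and method names are invented for illustration, and whether the JIT actually emits vector instructions depends on the JVM and the hardware it runs on.

    // A plain Java loop over primitive arrays: the shape of code an
    // auto-vectorizing JIT can turn into SIMD (e.g. AVX) instructions.
    // Nothing here is Falcon-specific; the names are purely illustrative.
    public class VectorFriendlyLoop {

        static void scale(float[] src, float[] dst, float factor) {
            for (int i = 0; i < src.length; i++) {
                dst[i] = src[i] * factor;   // independent iterations, ideal for vectorization
            }
        }

        public static void main(String[] args) {
            float[] in = new float[1_000_000];
            float[] out = new float[in.length];
            java.util.Arrays.fill(in, 1.5f);
            scale(in, out, 2.0f);
            System.out.println(out[0] + " ... " + out[out.length - 1]);
        }
    }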

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses above, say, half a millisecond. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that because those were close to as perfect as we could get them.
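
For readers who want to see what their own collector is doing, the snippet below is a minimal sketch that uses only the standard java.lang.management API, which works on any JVM. It is not a Zing- or C4-specific interface, and it reports accumulated collection counts and times rather than individual pause lengths.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            // Create some short-lived garbage so the collectors have work to report.
            for (int i = 0; i < 500_000; i++) {
                byte[] junk = new byte[1024];
            }
            // Print per-collector counts and accumulated collection time.
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.printf("%s: %d collections, %d ms total%n",
                        gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
            }
        }
    }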

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you’ve put into your JVM to improve JVM startup. And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we have a much better vertical rise, getting up to speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or whether it’s when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme examples of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts rather than profile and learn and only optimize after it runs. And we do it with a technique that’s very simple to explain (it’s not that simple to implement): we save previous run profiles, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run fast code from the first transaction or the tenth transaction, rather than from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.
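
ReadyNow! itself is proprietary to Zing, but the warm-up effect it targets is easy to observe on any JVM. The hypothetical snippet below simply times repeated rounds of the same work; on most JVMs the early rounds run slower while the code is profiled and compiled, which is exactly the ramp ReadyNow! tries to eliminate. It does not use any ReadyNow! API.

    public class WarmupDemo {

        // Some arbitrary CPU-bound work for the JIT to optimize.
        static long work(long iterations) {
            long acc = 0;
            for (long i = 0; i < iterations; i++) {
                acc += (i * 31) ^ (acc >>> 3);
            }
            return acc;
        }

        public static void main(String[] args) {
            long sink = 0;
            for (int round = 1; round <= 20; round++) {
                long start = System.nanoTime();
                sink += work(2_000_000);
                long micros = (System.nanoTime() - start) / 1_000;
                // Expect the early rounds to be noticeably slower than the later ones.
                System.out.println("round " + round + ": " + micros + " us");
            }
            System.out.println(sink); // keep the result live so the work is not eliminated
        }
    }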

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on the code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

April 11, 2018  9:14 PM

Borderless blockchain collaboration to change how software is developed

DeanAnderson Profile: DeanAnderson
Gaming

The gaming industry has, despite its roots in technology, so far failed to adopt a collaborative model. But that may be about to change.

Gamers are known as among the most passionate of the world’s digital communities and more games are being released than ever before. This wealth of releases brings a major problem: it is extremely difficult for indie studios and developers to rival the influence of multinational companies and bring attention to new games.

Blockchain technology could take advantage of the popularity of borderless online communities and be the catalyst that transforms this situation. New platforms, such as Gamestatix, will soon provide a level playing field for all developers, no matter how small. With more sophisticated technology, we can expect to see borderless collaboration continue, on a far wider and more advanced scale.

Rewards for co-creation?

When blockchain and cryptocurrencies were in their infancy, there was no feasible way to financially reward gamers en masse for co-creating games. Now that these technologies are established, a model where co-creators are guaranteed financial rewards is possible. Community creators will quickly become third-party developers and gain financial rewards for their work.

Gamers who play and review games could receive cryptocurrency. Developers will in turn have access to millions of gamers throughout the entire creation process, and will even be able to license user-generated content and sell and market the games through the platform. All of this could happen with blockchain collaboration. And this new model comes ahead of a key transition in the global labor market.

While it is widely known that AI and automation could render many traditional jobs obsolete, it is also true that new careers will arise. For some, this could mean part-time passions become full-time careers. But blockchain collaboration is different from the current sharing economy, as the livelihoods of players and developers will be able to bypass economic turmoil in their countries of residence.

Accessing world-wide talent

With blockchain-facilitated peer-to-peer cryptocurrency payments and transactions, employers are now able to pay anyone anywhere in the world, and ultimately tap into a global talent pool.

In the UK, according to TIGA’s 2018 business survey, more than two-thirds (68 percent) of video games firms plan to increase their workforce. But 29 percent of developers are concerned about Brexit’s impact on their ability to recruit the right talent. “In order to grow and thrive, the UK video games industry will need to continue to recruit talent on a global level,” said Dr Richard Wilson, TIGA CEO, in a statement.

The ability to call on a dedicated community with a wealth of knowledge and expertise means developers will be able to efficiently generate, curate and promote their content.

Meanwhile, gamers make choices based on what they hear from the communities they trust, rather than on traditional brand placements. This allows developers to take advantage of the multiplier effect: the wider a gamer’s reach, the greater the awareness and interest among newer audience groups.

Deeper community involvement will yield game co-creation and will allow developers to work directly with community moderators and creators to formally co-craft, publish and sell add-on content that resonates with their communities.

The games industry could use blockchain collaboration technology and secure cryptocurrencies to lead the way, recognize talents and encourage real innovation and creativity, all while power and wealth are distributed.

A work revolution?

With blockchain involved in work, everything from recruitment to collaborative labor, finances and contracts can be far more productive, efficient, and democratic.

The video game industry should be optimistic. With a blockchain-facilitated, collaborative global workforce of gamers and developers on the horizon, the industry is primed to revolutionize the world of work for its own people. It will influence other sectors as well.


Dean Anderson is the Co-Founder of Gamestatix

Gamestatix is an upcoming social platform for the co-creation of PC games that recognises, encourages and rewards user contribution, whilst giving game developers access to a global pool of talent to efficiently generate, curate and promote their content. The company was founded in 2016 by Dean Anderson, Visar Statovci and Valon Statovci.

Gamestatix is currently preparing to launch an ICO (Initial Coin Offering) – a form of crowdfunding for cryptocurrency and blockchain projects.
www.gamestatix.com


April 10, 2018  8:41 PM

Using Agile for hardware development to deliver products faster

Daisy.McCarty Profile: Daisy.McCarty

When metal and plastic are manipulated instead of ones and zeros, is it possible to pursue an Agile development process? Or is the idea of Agile for hardware development a misnomer?

The fact is, more and more organizations have given waterfall the cold shoulder and turned to Scrum-, lean- and Kanban-based models as they make Agile for hardware development a reality. A combination of rapid prototyping, modular design, and simulation testing makes Agile for hardware development a real possibility for modern manufacturing in technology.

From Agile software to Agile hardware

With Agile software development, creative processes are the cornerstone of the framework and constant change is the order of the day. It’s expected that alterations will happen continuously. Relatively speaking, there’s a low cost associated with constant change in software since it requires only time, knowledge, and the ability to type some code. However, the physical world is not as amenable to alteration as the digital world. Hardware changes are associated with high costs and significant resistance.

Yet according to Curt Raschke, a Product Development Project Manager and guest lecturer at UT Dallas, Agile for hardware development isn’t really a fresh idea. In fact, Agile had its roots in the manufacturing world to begin with. “These ideas came out of lean and incremental product development,” he said. In that sense, it is no surprise that the concept has come full circle and Agile has appeared in hardware development. It simply requires a fresh look at existing best practices.

For Shreyas Bhat, CEO of FusePLM, a full-cloud product lifecycle management system that uses a cards-based approach to the development process, the question isn’t if Agile for hardware development can be done, but instead, how? “Hardware tends to be self-contained and has too many dependencies,” he said. “How do you split it into deliverable sections? You need to be able to partition the project and come up with milestones for the customer.”

Prototyping and Agile hardware development

Rapid prototyping is one part of the answer to the question of how. “These days, you can build a prototype cheaply and early on to give a customer a feel for the eventual functionality,” Bhat said. “With 3D printing, the cost of prototyping has gone down significantly.” Contract manufacturers have turned prototyping into a commodity in an industry where cost is always a driving factor. Being able to turn to a prototype provider for quick, inexpensive modeling saves both time and money for the actual manufacturer later.

As Bhat explained, the requirements for hardware have to be stable since changing tooling increases costs. Prototyping helps pin down the proper tooling early on. “You have a better idea of what is needed for production. That way, you’re getting the right tooling for your manufacturing line up front rather than having to retool down the road.”

But that doesn’t mean prototypes can solve all problems. Raschke explained one of the prime limitations. “It works for product development from a form factor point of view,” he said. “You can use it to get feedback ahead of time on look and feel. But you can’t use 3D printing alone for stuff with moving parts that require assembly.”

Practical thinking in Agile hardware design

One answer to the issue of Agile for hardware development is the nature of the design itself. The more modular an item is, and the fewer dependencies are involved within any given component, the easier it will be to make changes. Also, choosing the fewest material types possible to get the job done is a key aspect of moving hardware in a more flexible direction. Again, these are well-known practices in the manufacturing sector and lend themselves well to implementation within an Agile framework.

When simulation or other types of iterative hardware evaluation are part of the process, it is a smart bet to design with testing in mind. Test driven development (TDD) can readily be incorporated into hardware as long as the limitations and capabilities of the simulating environment are known. Cross-functional teams are the best fit for developing hardware in this way, with an eye toward validating design functionality through iterative testing.
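
As a rough sketch of what designing with testing in mind can look like when the hardware exists only as a simulation, the JUnit 5 example below drives a purely hypothetical software simulation of a device. Every class and method name here is invented for illustration; a real project would substitute its own simulator.

    import static org.junit.jupiter.api.Assertions.assertTrue;
    import org.junit.jupiter.api.Test;

    // Hypothetical contract implemented by a software simulation of the device.
    interface MotorController {
        void setTargetRpm(int rpm);
        int currentRpm();
        void tick(int milliseconds); // advance the simulation clock
    }

    // Trivial stand-in simulation: ramps linearly toward the target at 1 RPM per millisecond.
    class SimulatedMotorController implements MotorController {
        private int target;
        private int rpm;
        public void setTargetRpm(int rpm) { this.target = rpm; }
        public int currentRpm() { return rpm; }
        public void tick(int milliseconds) {
            int delta = target - rpm;
            int step = Math.min(Math.abs(delta), milliseconds);
            rpm += Integer.signum(delta) * step;
        }
    }

    class MotorControllerTest {
        @Test
        void reachesTargetSpeedWithinTwoSeconds() {
            MotorController sim = new SimulatedMotorController();
            sim.setTargetRpm(1_500);
            sim.tick(2_000);
            assertTrue(Math.abs(sim.currentRpm() - 1_500) <= 50,
                    "motor should settle within tolerance of the target RPM");
        }
    }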

In fact, Raschke revealed that this is one of the prime challenges facing hardware where agility is concerned, particularly in terms of pretesting and system integration. “Design, development, and delivery are not always done by the same team,” he said. “Even if pieces can be developed by the same team, they are different subject matter experts. In a hardware product, there are electrical, mechanical, and software engineers along with firmware specialists, and so on.” These diverse experts would all have to be brought together in order to create a faster, more iterative approach to hardware development. That would be quite a feat for any Scrum Master to pull off.

The future of Agile for hardware development

Startups and fast-growing companies with bright ideas are primed to take on the Agile for hardware development challenge. While it may be risky, there is the chance of a high payoff. In Bhat’s view, “Where there is innovation, there is uncertainty. That’s in the sweet spot for Agile. You can do a lot of ‘what if’ scenarios to try out ideas and quickly release them to the customer.”

However, Agile does have one obvious limitation that Raschke addressed. “Hardware products are developed incrementally, but they can’t be released that way,” he explained. That’s certainly true. Given the high percentage of customers who ignore factory recalls and upgrades on everyday consumer goods, it’s hard to imagine them happy about the constant need to update their hardware, particularly if it involves a trip to a local provider or installation of a kit received in the mail. So, while Agile for hardware development and design may play an increasingly important role in the industry, delivery will remain a fixed point with little room for trial and error.


March 9, 2018  1:38 AM

Acts of discrimination let gender inequality in technology go unresolved

Daisy.McCarty Profile: Daisy.McCarty
Uncategorized

Coming up in tech over the past few decades wasn’t easy. A successful entrepreneur told me a story of how she landed her first tech job as a sales rep for a telecom agency. This was in the early days of deregulation, long before gender inequality in technology was an issue organizations were willing to address. She had a background in sales but knew nothing about the industry. She did well in the interview, but the hiring manager contacted her with bad news. The word from the VP of sales was, “We just hired our first woman and we’re not going to hire another until we see if she works out.”

This vivacious woman wasn’t about to take that as a final answer. She told the hiring manager in no uncertain terms: “I want you to get me an interview with the sales VP.” During that meeting, she talked her way into a job. The VP was reluctant, and warned, “I’m going to give you a chance, but I’m going to be watching you.”

“I hope you do!” she answered.

The VP got to watch as she went on to fight gender inequality in technology and become a top performer. She learned as she went and hopefully made it easier for the next few females who tried to join the sales force.

A woman who is now an executive at a software firm revealed how dreadful it was to work at a different company many years ago as the sole female coder in her department.

“I was the only woman there, and I guess this was before people knew how to deal with us,” she said. The harassment was overtly sexual and very distressing. “As just one example, when they would have a company dinner and start telling off color jokes, my name was always the one inserted as the butt of the joke.”

She took these issues to her boss who simply advised, “Don’t tell your fiancé about it.” After all, she wouldn’t want to upset her husband-to-be with such trivialities when it was all just in good fun. When she couldn’t stand the harassment any longer, she left without even having another job lined up. Sadly, the recent Uber scandal reveals sexism is alive and well in some organizations in the tech industry. While it may be in vogue to be politically correct on the surface, a culture of male entitlement only serves to advance gender inequality in technology, while wreaking havoc in the lives and careers of women in tech.

Perhaps the second most annoying expression of sexism is the dismissal of the opinions of women—even those who have credentials and experience that put them in a position to be exceptionally knowledgeable. A highly educated woman in the analytics field discovered that being heard takes real effort.

“I’m the only female in this role within my organization,” she said. “I have to push a lot to be taken seriously. I communicate in writing with clients. Having a PhD can help with respect and it’s definitely easier to navigate the situation because of having that education level. It shouldn’t be that way, but when gender inequality in technology prevails, it is. I’ve still found I have to get pushy verbally and give evidence that I am right. It’s like trying to swim in mud.”

Women in tech are there to win

An entrepreneur who started and still runs a thriving business found that she faced hurdles in getting venture capital.

“In one meeting I remember the men looking at us with veiled amusement,” she said. “They didn’t come out and say it but the sense we got was that they felt, ‘You have a great idea, but you are women! Come back when you have a male CEO.’ Based on the questions they were asking, the VCs couldn’t wrap their minds around the fact that we had long term goals. They obviously thought we wouldn’t be around long.”

It’s not just men who can make life hard for women in tech. Female leaders can also contribute to gender inequality in technology related fields without even realizing it. One woman told me about getting passed over for one promotion after another because her female bosses simply assumed she wanted to stay home with her children. A failure to take female ambition seriously means that many organizations are missing out on the opportunity to invest in developing and mentoring high potential women.

The unpaid second shift

For some women, the environment doesn’t have to be sexist in any overt way to create an obstacle to advancement. It just has to ignore the responsibilities that fall on women’s shoulders on the home front.

According to an expert speaker on female empowerment in the workplace, “Women in technology are often faced with having to work in an environment where they don’t feel comfortable. Silicon Valley in particular has a startup culture. Working in a startup environment or a new department within a company requires intense, long work hours. A lot of younger women leave. They can’t adequately manage being a wife, mother, and so on.”

Women are still being forced to choose between their careers and taking care of children and aging parents.

“Even with all the focus on placing women in senior positions, when you see women who are highly successful they are often unmarried or have a stay-at-home husband who relocated to support them in their career,” this expert said.

She pointed out that men are facing hard choices as well in their careers. But women are usually the ones who end up stepping in to take on the unpaid work while their plans for promotion are sacrificed.

Individual encounters can rankle

According to a mid-level manager who manages a sizeable team, it pays to be choosy about where to work. Women really do research a company’s culture before accepting a position. They need to know if they are getting into a bad situation.

“I’ve worked in the military, automotive, and construction arenas,” the manager said. “The tech company where I am now does not tolerate harassment. It has a good reputation, and I made sure of that before taking the job.”

Yet she certainly knows what it is like to run into men who are jarred by the presence of a woman in their midst. They aren’t going to welcome an outsider with open arms.

“Some people don’t think women should be in that role. You can work to make that relationship professional, but it is not going to be cordial.”

There are also women who say they’ve never noticed any harassment or gender-based discrimination in a conventional corporate tech environment. According to one woman who worked in a larger organization for many years, “Either I’m extraordinarily dense, or I wasn’t being discriminated against.” Yet as a consultant who now deals directly with clients, she can easily think of that one person who routinely makes her feel uncomfortable. Not surprisingly, he’s an older gentleman who holds views that can most kindly be called “highly traditional.”

“He has told me to my face that ‘Women shouldn’t be in the workforce’ and ‘The reason women get married is so men can protect them,'” she said.

Even the client’s condescending greeting of “It’s so good to see your pretty face!” rightfully irritates this successful and accomplished business owner. She certainly can’t imagine a male counterpart being treated in a similar way.

Coping with a male-dominated workplace

As I’ve talked with women over the past year about what it takes to make things work as part of a team or as a leader, I have noticed an interesting trend. Women in the Baby Boomer generation are much more likely to say they had to “act like one of the guys” or simply treat gender as irrelevant in order to survive in their careers. They have learned to lead and collaborate in a masculine way to fit in and be seen as an equal.

Younger women in the 30 to 45 age-range are more likely to say they consciously bring female aspects and qualities to the table to help them succeed. Things like emotional intelligence, good communication, multi-tasking, and the ability to collaborate are traits they use to their advantage.

Both strategies certainly have merit and can be used depending on the workplace culture, a woman’s career goals, and her natural aptitudes. It will take persistence, experimentation, and courage to find the right mix for each individual. Here is some additional advice from the women in my tech network for how to make it work at work.

  • On fitting in: “If you are a minority in a group it’s partly your responsibility to fit in. As an example, I’m not a big follower of sports, but I will skim the headlines before going into the CEO staff meeting. We have to see what we can contribute to the conversation.”
  • On work/life balance: “We find that work-life balance is a day-by-day thing. Some days, work gets most of you. Other days, you have to make the decision to put family first.”
  • On earning respect: “You can’t be a shy or retiring person and get your voice heard. You have to speak up and step up. You can’t be uncertain, you need to have knowledge and come across as knowledgeable.”
  • On getting promoted: “Be bold. Let people know what’s important to you and go after it.”

While the fight against gender inequality in technology remains ongoing, most women see a positive outcome as inevitable. It’s just a matter of persistence, patience, and boldness.


February 27, 2018  11:27 PM

Re-introducing Jakarta EE: Eclipse takes Java EE back to its roots

DarrylTaft Profile: DarrylTaft
Uncategorized

The Eclipse Foundation has chosen Jakarta EE as the new name for the technology formerly known as Java EE.

Last September, the Eclipse Foundation announced that Oracle would hand over the Java Platform, Enterprise Edition (Java EE), to Eclipse to foster industrywide collaboration and advance the technology. Since then, Eclipse has established the top-level Eclipse Enterprise for Java (EE4J) project to oversee the future of enterprise Java at the Eclipse Foundation. EE4J is a solid and even semi-cool name for that project, but it was never meant to replace the Java EE brand.

Oracle’s Java trademark

However, the new name for the brand (or for the top-level project, for that matter) could not be Java EE, or even start with Java. Oracle’s trademark usage guidelines require that Oracle be the only entity that can use Java at the beginning of a name, so the phrase “for Java” was a requirement from the start, wrote Mike Milinkovich, executive director of the Eclipse Foundation, in a September 2017 blog post.

After a series of trademark reviews to secure a proper name that passed legal and trademark searches, Eclipse came up with a few possible names. Jakarta EE won 64.4 percent of the Java community’s vote, while Enterprise Profile captured 35.4 percent.

“I am personally very happy that the community selected Jakarta EE,” Milinkovich said, happy because Jakarta EE is a simple name with potential for future technology, and a nod to the past. “This is just one small step in the overall migration, but the community reaction has been universally positive.”

Alternatives, including Enterprise Profile, were deemed too boring, and the word “enterprise” is viewed by many to mean stodgy and unapproachable to younger developers.

“The Jakarta EE name is a reasonable compromise, considering the objectives of the community and the objections that Oracle has raised,” said Cameron Purdy, CEO of xqiz.it, a stealth mode startup, and former vice president of development at Oracle.

Apache Jakarta project

Other Java thought leaders said the new name, which comes from an old Apache Software Foundation project, might actually be better for enterprise Java.

Jakarta EE is named after Apache Jakarta, which was founded in 1999 and retired in 2011. “Those were roughly the dates when Java EE mattered,” said Rod Johnson, CEO of Atomist and creator of the Spring Framework. “The Java EE brand is tired now, so while a new name will have less recognition, it might also escape some negative associations.”

“It’s also a nudge to the ‘JEE’ shortened form, and taps into Jakarta‘s reputation in the Java ecosystem,” said Martijn Verburg, co-founder of the London Java Community and CEO of jClarity, a London-based provider of software that helps developers solve Java performance problems with machine learning.

The renaming of Java EE seems to fit the ongoing saga of Java, said Sacha Labourey, CEO of CloudBees and former CTO of JBoss. That product’s original name was “EJBoss: Enterprise Java Beans Open Source Server,” but after a cease-and-desist letter from Sun Microsystems’ lawyers over the use of ‘EJB,’ JBoss founder and CEO Marc Fleury pragmatically dropped the E.

Reminiscing, Labourey said the Java community has had to “learn how to dance with its trademark and community process [Java Community Process (JCP)], so what else could we expect?” He noted that the JavaPolis conference had to be renamed to Devoxx because of Java trademark issues. “We would have been so disappointed not to have one more chapter to add to the Java trademark book,” he said.


February 21, 2018  6:36 PM

Clear software development governance needed in this polyglot world

cameronmcnz Cameron McKenzie Profile: cameronmcnz

New architectures composed out of language-agnostic software containers have made polyglot programming a new reality. But out of this newfound freedom, chaos can ensue if clear software development governance policies do not exist that describe when, where and why certain programming languages can be used, and which others cannot.

When advocates talk about the various things organizations should be doing to prepare for the move towards modern middleware servers like Amazon and OpenShift, the increased use of serverless systems and all of the headaches that go along with DevOps adoption, a topic that far too often gets overlooked is software development governance. Organizations should address how they are going to manage these new software development stacks and deployment targets rather than discuss technology adoption or how to continuously deploy.

“If you look at all of the Cloud platforms, you know, Cloud Foundry, Amazon, Google, Azure, OpenShift, they’re polyglot,” said Red Hat Product Manager Rich Sharples. “They support multiple languages.” And that’s great for organizations that want to put together a tapestry of best-of-breed applications where each one is built with the language that best suits the application’s purpose. But best-of-breed environments can be incredibly difficult to manage over the long term, especially when an application written in a language that has fallen out of favor needs to be fixed, enhanced or just generally maintained.

Flexible software development governance

How can software development governance address the problems a polyglot programming shop might encounter? One way is to simply make the organization monoglot and settle on a highly proven and capable programming language like Java. “Java is clearly the best language for running large, complicated, transactional applications in the traditional, long-lived server model,” said Sharples. “It is the active standard for enterprises building those kind of applications.”

But an inflexible and rigid governance model is exactly what caused the colonies to revolt and create a union of states. While settling on Java as a fundamental standard is a good start, a software development governance model should also allow for alternate languages to be used when certain extenuating conditions are met. For example, an organization might specify Scala as the language of choice when an application has an exceptional requirement for parallelism or list-processing speed. Similarly, Kotlin might be agreed upon for the development of mobile applications that require support for the latest Java version.

It’s understood that Java might not always be the right language for every situation, especially in new arenas where containers and microservices dominate. “Java must prove itself when it comes to building smaller microservices on a small stack. It’s got to prove itself in serverless, where the environment is even more dense and constrained. And it’s got to compare with languages like Go, which is super lightweight.”

Containing the polyglot chaos

Ask a group of programmers to choose the best coding language and you’ll find it’s a great way to start a fight. To ensure peace and harmony on the development team, a software development governance policy needs to be well defined so programmers know which languages are permissible, and under which conditions alternates can be used. Setting clear guidelines avoids the chaos that can ensue when every program is written in a different language. Furthermore, clear expectations in a software development governance model ensure egos don’t get bruised when a developer’s choice of programming options is limited to those the organization has standardized upon.

In this new age of polyglot programming, DevOps adoption and serverless systems, the potential for chaos is a real threat to the well-being of the IT department. Software developers with a host of different languages from which to choose are understandably excited about developing microservices and deploying them into language-agnostic, container-based architectures. But for the long-term health of the IT department, software development governance models should never be agnostic about the languages they permit.

 

 


January 30, 2018  11:21 PM

Developers, learn from the iPhone battery glitch

MarkSpritzler Profile: MarkSpritzler
Uncategorized

Apple has been accused of forcing all of its older iPhone and iPad users to buy new devices by throttling their batteries as they age. Countries that perceive this as unsavoury forced obsolescence have demanded Apple executives explain themselves. Class action lawsuits were launched, as Apple finds its business practices coming under an unprecedented amount of scrutiny. Apple has always been a lightning rod for both passion and angst, and there have no doubt been times in the past when Apple’s business practices were worthy of the public lashings it may have taken. But when it comes to the controversy over the legacy iPhone battery glitch, the backlash and public derision the company has suffered is not warranted. It is, however, an important situation for developers to keep in mind.

The new iPhone battery update for legacy phones is actually a great feature. It’s a feature that we have not only in phones, but also on our desktop and laptop computers. On desktops it’s simply called Power Save. It’s that feature that enables your computer to reduce the power consumption to increase the amount of time your battery lasts on a single charge. Windows has it, Linux has it and Mac OS has it. Even Android phones have it too.

Will it slow down your computer? Yes it will. Does it give you less functionality than when you are not in power mode? Yes it does. But it is a powerful feature to help in situations where the battery is about to die.

So how is this new iPhone battery “glitch” any different from the corresponding one that Linux and Windows desktop users already enjoy?

I know what you are thinking: Power Save mode is optional on the desktop, it can be turned on and off, it is just there to make a single charge last longer rather than to extend the overall life of the battery, and it is only on rare occasions that I would ever use it. Power Save mode is just that. However, what if you wanted to make your battery last longer over many charges? Wouldn’t that be a great feature to have too? It is just an extension of this design model.

Let us say you have a battery that lasts two years and allows 500 charges. Batteries have different ranges, but let’s assume they are all the same for this discussion.

If you have an Android phone and never use Power Save mode, at the end of two years, your phone battery dies. You now have to go get a new phone.

If you have an iPhone and never use Power Save mode and Apple did not code special protection for the battery, at the end of two years, you too will have a dead phone and battery which you now have to replace.

Now, let’s take the special Power Save mode Apple has developed. As the battery gets closer to its two-year end of life, Apple will put it into this special Power Save mode so that the battery, and the phone, now lasts longer than two years. Now you are getting a longer-lasting phone and reducing any planned obsolescence. How is that a bad thing? It’s really not an iPhone battery glitch.

It is a bad thing in one way and only one way: performance. That is the trade-off Apple chose to make here. We can give up some speed nearer to the two-year time span and make the phone last longer. Is it a bad thing that the speed is slower? Is it a great thing that the phone actually lasts longer?

It all depends on your perspective. Extend the battery life at the expense of performance near the end of life of the battery, or keep the battery life as is and cause earlier upgrades — which would you choose? I think that determines what your perspective will be towards Apple.

As software developers, we make these trade-off choices every day. For instance, I am always asking questions like: do I write one very complex SQL string with many joins to grab the data, which is extremely fast; or do I use a couple of simple queries to gather some Java entity objects that are easier to loop through and work on, which is slower than the single query but much easier to understand, maintain and write? This happens a lot when writing reports in Jasper. I could write a 300-500 line SQL query inside the Jasper Report file, or write 20 lines of code to gather a List of entities I can pass to the report.
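
As a rough illustration of that trade-off (the table, entity and helper names below are invented, and the snippet is a fragment rather than a complete program), the two approaches might look something like this:

    // Option 1: one complex SQL string with many joins. Fast, but long and hard to maintain;
    // in a real Jasper report this can run to hundreds of lines.
    String reportSql =
        "SELECT o.id, c.name, SUM(li.qty * li.unit_price) AS total " +
        "FROM orders o " +
        "JOIN customers c ON c.id = o.customer_id " +
        "JOIN line_items li ON li.order_id = o.id " +
        "WHERE o.created >= :since " +
        "GROUP BY o.id, c.name";

    // Option 2: a simple JPA query plus a loop over entity objects. Slower than the single
    // query, but easier to read, test and maintain, and the resulting list can be handed
    // straight to the report.
    List<Order> orders = entityManager
            .createQuery("SELECT o FROM Order o WHERE o.created >= :since", Order.class)
            .setParameter("since", since)
            .getResultList();

    List<ReportRow> rows = new ArrayList<>();
    for (Order order : orders) {
        BigDecimal total = BigDecimal.ZERO;
        for (LineItem item : order.getLineItems()) {
            total = total.add(item.getUnitPrice().multiply(BigDecimal.valueOf(item.getQty())));
        }
        rows.add(new ReportRow(order.getId(), order.getCustomer().getName(), total));
    }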

From my perspective, I would have chosen to make the batteries last longer and therefore allow users to use the devices for longer. Performance is also very subjective; sometimes what appears to be slow might appear very fast to someone else.

But you can’t say that extending the battery life is a form of planned obsolescence or an iPhone battery glitch. And in recent news, Apple says that in an upcoming release of iOS the behavior will become an option you can turn on and off, making it exactly like Power Save mode on computers.


January 23, 2018  9:00 PM

Choosing the right container orchestration tool for your project

Daisy.McCarty Profile: Daisy.McCarty

The use of software containers like Docker has been one of the biggest industry trends of the past decade. Hitherto, the focus has been on educating the development community about the capabilities of container technology and how enterprises might use container orchestration tools effectively. But it would appear that an event horizon in terms of container awareness has been crossed, as familiarity has increased to the point where organizations have moved beyond simply entertaining the thought of bringing containers into the data center and have moved forward into actual adoption. But before companies actually deploy software into containers, the question remains which software container, and which complementary set of container orchestration tools, an organization should adopt. Options such as Swarm, Cloud Foundry and Mesos make strong arguments for themselves, so the container technology landscape is a difficult one to navigate.

Container orchestration with Kubernetes

Of course, Craig McLuckie, CEO of Heptio, is confident in the horse his organization is betting on to create cloud-native applications. “Kubernetes over the last three and a half years has emerged as the standard.” Mark Cavage, VP of Product Development at Oracle, also gives Kubernetes the thumbs up. “It’s the right open source option that abstracts away clouds and infrastructure, giving you a distributed substrate to help you build resilient microservices.”

Kubernetes has been compared favorably with other container orchestration solutions before, with one of its big selling points being the vendor-agnostic nature of its open source availability. McLuckie points out the lack of vendor lock-in as an appealing benefit. Other experts have praised Kubernetes for working well with Docker while requiring only a one-tier architecture. Because of its flexibility, developers who use Kubernetes may be less likely to find themselves writing applications to match the expectations of a vendor-driven container orchestration platform rather than writing them in the way that makes the most sense for what their organization hopes to accomplish.

Simplify operations

McLuckie speaks highly of Kubernetes’ ability to simplify ops with containers that are hermetically sealed, highly predictable and portable units of deployment. But deployment is just where the real work begins. Keeping the containers running and automating a container-based architecture without a hiccup is where Kubernetes shines. “By using a dynamic orchestrator, you can rely on an intelligent system to deal with all of those operational mechanics like scaling, monitoring, and reasoning about your application.”

Increased uptime and resiliency are obvious benefits of having a well-orchestrated container system. And since the self-contained units run the same way on a laptop as they do in the cloud, they can make development and testing easier for transitioning DevOps teams. End users also enjoy the perks of Kubernetes without realizing it, since the orchestration tooling can perform automatic rollout and rollback of services live without impacting traffic, even in the event of a failure of the new version.
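
For a Java shop, that orchestration can also be driven programmatically. The sketch below assumes the fabric8 Kubernetes client (io.fabric8:kubernetes-client) is on the classpath and uses placeholder namespace and deployment names; it simply asks the cluster for a new replica count and lets the orchestrator converge on that desired state.

    import io.fabric8.kubernetes.client.KubernetesClient;
    import io.fabric8.kubernetes.client.KubernetesClientBuilder;

    public class ScaleDeployment {
        public static void main(String[] args) {
            // Connects using the local kubeconfig; "prod" and "payment-service" are placeholders.
            try (KubernetesClient client = new KubernetesClientBuilder().build()) {
                client.apps().deployments()
                      .inNamespace("prod")
                      .withName("payment-service")
                      .scale(5); // declare five replicas and let Kubernetes do the rest
            }
        }
    }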

Container orchestration and middleware

One interesting point McLuckie makes pertains to how middleware is being “teased apart” as a result of containerization. “Systems like Kubernetes are emerging as a standard way to run a lot of the underlying distributed system components, effectively stitching together a large number of machines that can operate as a single, logical, ideal computing fabric.” As a result, “A lot of the other functionality is getting pushed up into application level libraries that provide much of the immediate integration you need to run those applications efficiently.”

What might the outcome of this fragmentation of middleware be? According to McLuckie, it could open Java apps to a polyglot world by making it simpler to peacefully coexist with other languages. In addition, it would make dependencies easier to manage, even supporting the ability to run something like Cassandra DB on the same core infrastructure and underlying orchestrator as the Java app that relies on that database.

Cutting down on system complexity could be what makes Kubernetes and containerization itself attractive to larger organizations in the long run. “This could address enterprise concerns for governance, risk management, and compliance with a single, consistent, underlying framework.” And enterprises that prefer on-premise or hybrid clouds could use this approach just as readily as those that are fully cloud-reliant.

With container orchestration tools becoming more sophisticated with time, don’t expect the hype around container technology to die down in 2018. It will only intensify, with the focus moving away from education and awareness and instead towards adoption and implementation.


January 12, 2018  5:36 PM

Four wise pieces of advice for women in technology

Daisy.McCarty Profile: Daisy.McCarty

One of my favorite things about interviewing women in technology has been hearing all their helpful tips and insights. Many of these women spent decades in the tech world, moved up the career ladder, innovated in their areas of expertise, started new businesses, and created more opportunities for the next generation of women. Here are some of the takeaways that resonated.

Tip #1: You can end up in tech from just about anywhere

Tanis Cornell, principal of TJC Consulting, offered this insight to young women and teens, “You don’t have to be an engineer to have a job in technology. What I discovered in my own career was an affinity for absorbing a technical concept, grasping the advantages and disadvantages, and becoming fairly technically adept quickly.” Her educational background in communications ended up translating well into the tech sector, where the ability to speak about complex ideas in terms business decision-makers can understand is a highly valued skill.

Jen Voecks, a former event planner and current CEO of the bride-to-vendor matchmaking service Praulia, found that jumping into technology gave her an advantage as a brand new startup owner. “I learned the ropes one step at a time, front end and back end,” she said. “For a while I was stuck in a learning curve, wearing all the hats and trying to build a product while running a business. Once I got into the ‘developer brain’ mode, I realized that it’s like a puzzle with a lot of pieces, and things got a lot easier. But I also learned to appreciate what developers do and how to communicate with devs and engineers.” Now she has enough experience to make smart decisions about hiring specialists to handle the various aspects of her technology needs and can focus her own efforts on strategic growth. Her advice: “Just stay at it. Most women I’ve talked with are experiencing hurdles. We should stick together and push through. We are powerful.”

Tip #2: Communicate calmly and clearly

CeCe Morken, EVP and general manager of the ProConnect Group at Intuit, spoke highly of Raji Arasu, who is the senior vice president, platform and core services CTO. “She is an excellent leader, and she leads a lot of men,” she said. “She’s always very calm and never changes her demeanor. Raji takes in information and handles it elegantly even in high stakes situations.” Morken also pointed out that Arasu speaks in a language everyone can understand, translating concepts from software architects to business leaders in a way that makes sense.

Charlene Schwindt, a software business unit manager at Hilti, agreed that being able to customize communication for a given audience is critical for success. “You really need to be able to transition your communication style and tailor it to the level of understanding of your audience,” she explained. “When I’m talking with developers, I can be highly technical. With customer support, I talk about things at a higher, summary level. Business leaders want a conversation that’s results oriented, but I might drill down to more detail if questions are asked.”

Of course, being confident as a communicator doesn’t always mean you have to be right about everything. In fact, it’s completely OK to pivot as you grow. Mary McNeely, Oracle database expert and owner of McNeely Technology Solutions, shared this sage advice. “Don’t be afraid to change your mind. You have to decide something in order to move forward. But once you have more information and time to think, anything could change. You might reach a different conclusion. It’s OK to reconsider your decisions, perspectives, and opinions.” And when you do change your mind, remember to let people know!

Tip #3: Believe in yourself. Seriously.

What would it be like to grow up in a culture that simply accepts that women are awesome? Candy Bernhardt, head of design and UX for Capital One’s auto lending division, recounted her experience of being raised by her grandmother who is from the Philippines. “It’s a very matriarchal society, so I didn’t know any better. I just thought women ruled the world and our opinions mattered. I challenged authority because I thought it was my right.” That pluck and boldness served her well when she landed in a career she thoroughly enjoys.

For those who grew up a little less sure of themselves, it’s not too late to gain the confidence to grasp the brass ring. Meltem Ballan, a data scientist with a PhD in complex systems and brain sciences, encouraged this can-do attitude as well. “Get out in front and show that a female can do it,” she said. “Ask for mentorship and stand for your own rights. Women must support women.” For her part, she found that giving a presentation on short notice, even though she didn’t feel completely ready, was a turning point in her speaking skills. “It’s important to go out and have that moment where we leave our comfort zone. Then our comfort zone gets larger.”

Tip #4: Never stop improving yourself and helping others grow

Julie Hamrick, COO and founder of Ignite Sales, spoke about the value of continuous improvement for success as an entrepreneur. “The harder I work, the luckier I get,” she offered. “At first I had to work hard to make my product good. Now, I am still thinking every day about how to deliver even better results for my customers.”

What about lending a helping hand to other women? How can managers and leaders do better in this area? Morken offered this advice for those who want to be effective mentors: “Focus on developing the person first and then the goals.” It’s important to understand what energizes someone, because that is what fuels growth.

My own final tip is this: Grow your network starting today. There’s a wealth of information and insight available from the women in tech all around you, and they are happy to share. Start leveraging these resources to take you farther than you ever thought possible!


January 6, 2018  12:29 AM

Requirements management and software integration: Why one can’t live without the other

RebeccaDobbin Profile: RebeccaDobbin

With the complexity of products being built today, no one person can understand all the pieces. Yet everyone involved is still responsible for ensuring that their work is compatible with their co-workers’ work and aligns with the company’s strategy. The only way to keep everyone moving toward the same goal is to have a shared understanding of what’s being built and why. This is where requirements management and software integration come in.

Requirements define what a product should be. Equally important, though, they’re the single shared communication bridge between all disciplines. Everything involved in the pursuit of product development can be tied back to the definition of what you’re building; otherwise there’s misalignment somewhere. The goals of the market-facing teams translate into product requirements, making abstract desires achievable. The efforts of the design, development, and testing teams are driven by requirements. Product documentation, software integration, marketing, and the final spec all tie back to requirements at the end of a project, since they all derive from the same set of goals.

However, the relationship between requirements and everyone else’s work is not always easily visible. There might be degrees of separation, such as many levels of decomposition from market goals down to tasks. Information may be kept in multiple task-specific tools. While a shared understanding — in the form of requirements — is essential for alignment, it’s not sufficient in many cases. Companies with complex, multi-level, cross-tool product data run the risk of creating silos and disconnects from the original product goal.

This is why software integration and requirements management work hand in hand to realize the full potential of collaboration. When managed effectively, requirements are dynamic. Thoughtful integrations facilitate timely communication between those defining the product goals and those building the product, so that changes on either side are visible and become part of continuous decision making.

Aligning teams with value stream networks

While it’s tempting to think of product delivery as a linear process of sequential steps, in reality it’s much more like a chaotic, organic network. Activities occur in tandem, spanning many parts of the organization. Requirements can change after implementation has begun, and modifications made to one part of the process can have far-reaching impacts on several different teams. It should be no surprise, then, that the root cause of many product development failures can be traced to an under-connected value stream network.

A value stream is a notion borrowed from lean manufacturing: it’s the sequence of activities an organization undertakes to deliver a customer request, covering both the production flow from raw material to end product and the design flow from concept to realization. Counter to our common understanding, product management should be viewed as a value stream network, rather than simply as a value stream, because the process truly consists of many activities that occur in conjunction, overlap, and influence one another. To be truly successful, your value stream network must have a high level of connectivity between the nodes (or teams) within your organization.

Given that requirements are central to driving actions and decisions throughout the product lifecycle, key challenges arise when they’re not integrated and easily accessible to all teams throughout the organization. Despite relying on separate, purpose-driven tools, each team must be able to access the same information in order to work cohesively. And that information isn’t static. Because of the dynamic nature of requirements, it’s essential that each team be able to review and absorb any modifications made to the requirement in real time. That’s where integration comes in.

While each organization is unique, our experience has allowed us to uncover several common software integration patterns. In many cases, these patterns can build on one another to create a cohesive network of connection throughout an organization.

When getting started, we recommend that organizations first identify the scenarios that are causing the most immediate pain, and enable the associated integration patterns to ameliorate that pain. Over time, organizations can adopt and deploy more and more of these integration patterns, to work toward a wholly unified and cohesive value stream network.

Requirements management and Agile planning alignment

Often, business analysts and product managers create and manage requirements in a requirements management tool, such as Jama Software or IBM DOORS. If their organization uses agile methods, the development team may expect to see those requirements within an agile planning tool, such as JIRA or CA Agile Planning, where they can manage them and further break them down into user stories for execution.

In this pattern, requirements flow from the requirements tool to the agile planning tool as epics or features. They can then be broken down into user stories or sub-tasks within the agile planning tool. If desired, teams can maintain the relationship between the epic and its subtasks when flowing that data back to the requirements tool.


Figure 1: Aligning requirements with agile planning.
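As a concrete illustration of this pattern, here is a minimal sketch of the requirement-to-epic flow. The endpoint URL, field names, and JSON shape are placeholders invented for this example; they are not the actual API of any particular requirements or agile planning tool.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Minimal sketch: push one requirement into an agile planning tool as an epic.
// Everything tool-specific here (URL, JSON fields) is a stand-in, not a real vendor API.
public class RequirementToEpicSync {

    private static final HttpClient HTTP = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // A requirement as it might arrive from the requirements tool, via webhook or polling.
        String requirementId = "REQ-1042";
        String title = "Customer can reset password via e-mail";
        String description = "Derived from market goal MG-7: reduce support call volume.";

        // Map the requirement onto the planning tool's (hypothetical) epic shape,
        // keeping the source id so the two items stay linked across tools.
        String epicJson = """
            {
              "type": "epic",
              "summary": "%s",
              "description": "%s",
              "externalId": "%s"
            }
            """.formatted(title, description, requirementId);

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://planning.example.com/api/workitems")) // placeholder URL
            .header("Content-Type", "application/json")
            .POST(HttpRequest.BodyPublishers.ofString(epicJson))
            .build();

        HttpResponse<String> response = HTTP.send(request, HttpResponse.BodyHandlers.ofString());

        // In a real integration, the epic id returned here would be written back to the
        // requirements tool so later changes on either side can be correlated.
        System.out.println("Planning tool responded: " + response.statusCode());
    }
}

The same shape works in reverse when user stories and their status flow back to the requirements tool.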

Software integration and test planning alignment

In order to create their test plans, QA teams must have access to the requirements for each feature. The problem is that requirements and user stories are usually created by business analysts and product managers in a tool such as Jama or IBM DOORS, while QA teams create their test plans, test cases, and test scripts in another set of tools, such as HPE QC or Tricentis Tosca. By flowing requirements from the product team’s requirements tool to the QA team’s testing tool as test cases, organizations are able to keep these two teams aligned while allowing each to use its own tool of choice. This reduces the risk of miscommunication and bottlenecks, because teams can seamlessly flow information on acceptance criteria, test status, and more between the tools.


Figure 2: Test planning alignment.
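As a rough sketch of this pattern, the example below turns a requirement and its acceptance criteria into a test case, and translates test results back into the requirements tool’s vocabulary. The record shapes and status values are assumptions made for the example, not the schema of any of the tools named above.

import java.util.List;

// Illustrative mapping between a requirements tool's item and a QA tool's test case.
// The field names and status vocabularies are invented for this sketch.
public class RequirementToTestCaseMapper {

    record Requirement(String id, String title, List<String> acceptanceCriteria) {}
    record TestCase(String requirementId, String name, List<String> steps, String status) {}

    // One test case per requirement; each acceptance criterion becomes a verification step.
    static TestCase toTestCase(Requirement req) {
        List<String> steps = req.acceptanceCriteria().stream()
                .map(criterion -> "Verify: " + criterion)
                .toList();
        return new TestCase(req.id(), "TC for " + req.title(), steps, "NOT_RUN");
    }

    // Flowing status back: translate the QA tool's result into the requirements tool's terms.
    static String toVerificationStatus(String testStatus) {
        return switch (testStatus) {
            case "PASSED" -> "Verified";
            case "FAILED" -> "Rejected";
            default -> "In Verification";
        };
    }

    public static void main(String[] args) {
        Requirement req = new Requirement("REQ-1042", "Password reset via e-mail",
                List.of("Reset link expires after 24 hours", "Old password no longer works"));
        System.out.println(toTestCase(req));
        System.out.println(toVerificationStatus("PASSED")); // Verified
    }
}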

Tracing requirements

Requirements, epics, and user stories span multiple disciplines and tools in the product development and delivery process. No matter where they’re created and managed, there are many distinct stakeholders who may need access to them. For this reason, “requirements traceability” can be viewed as a composite pattern, composed of several smaller integration patterns.

For example, a project manager may manage project deliverables within a PPM tool, which then flow to a business analyst’s requirements management tool. Once the business analyst has determined the scope and details of a deliverable, that information flows to the developers’ tool as an epic, which is further broken down into user stories and tasks. That epic then flows to the QA team’s testing tool to ensure that the testers understand the requirements they’re testing. In this way, four separate teams are able to share information and collaborate, all within their own purpose-built tools and practices. By building on each team’s existing processes and practices, the organization eliminates the ramp-up time otherwise needed to gain access to, and proficiency in, every other team’s tool, while still allowing each team to use the systems that best facilitate its unique goals.


Figure 3: The value stream network in action.
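One way to picture the composite pattern is as a walk over the trace links that each individual integration contributes. The sketch below, using invented item ids, answers a typical traceability question: which requirements, epics, and test cases trace back to a given project deliverable?

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy traceability store: each integration contributes directed links
// (deliverable -> requirement -> epic -> test cases). Ids are invented.
public class TraceabilityWalk {

    static final Map<String, List<String>> LINKS = Map.of(
            "DEL-7", List.of("REQ-1042"),
            "REQ-1042", List.of("EPIC-88"),
            "EPIC-88", List.of("TC-301", "TC-302"));

    // Collect everything reachable from a starting item, regardless of which tool owns it.
    static Set<String> traceFrom(String itemId) {
        Set<String> seen = new LinkedHashSet<>();
        Deque<String> queue = new ArrayDeque<>(List.of(itemId));
        while (!queue.isEmpty()) {
            for (String next : LINKS.getOrDefault(queue.poll(), List.of())) {
                if (seen.add(next)) {
                    queue.add(next);
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        // "Which requirements, epics, and test cases trace back to deliverable DEL-7?"
        System.out.println(traceFrom("DEL-7")); // [REQ-1042, EPIC-88, TC-301, TC-302]
    }
}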

While there’s always been a need for integration, managing complex, dynamic requirements makes it all the more necessary. In order to demonstrate product compliance and auditability, practitioners must relate their features, test cases, and other work items back to the original compliance requirements.

Without integration, practitioners must spend valuable time on status calls, communication via e-mail, and on double-entry between systems, leaving them no time to do the work they were actually hired to do: the design, development, and quality assurance tasks needed to create and ship their product.

Harnessing requirements management and software integration

Integration serves as the glue between these separate disciplines, people, and tools. And when the integrations are domain-aware, they can do seemingly magical things like translate between tools that have conceptual and technical mismatches. Smart integrations are able to translate different work item types and field values between tools in order to facilitate communication between teams, regardless of differences in tool architecture.
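A hedged sketch of that translation layer might look like the following, where work item types and priority values are mapped between two tools. All of the names and value scales here are examples, not any vendor’s actual terminology.

import java.util.Map;

// Domain-aware translation: the same concept is spelled differently in each tool,
// so types and field values are mapped rather than copied verbatim. Example values only.
public class FieldValueTranslator {

    private static final Map<String, String> TYPE_MAP =
            Map.of("Requirement", "Epic", "Defect", "Bug");
    private static final Map<String, String> PRIORITY_MAP =
            Map.of("P1", "Highest", "P2", "High", "P3", "Medium");

    static String translateType(String sourceType) {
        return TYPE_MAP.getOrDefault(sourceType, sourceType);
    }

    static String translatePriority(String sourcePriority) {
        // Fall back to a sensible default when the source value has no counterpart.
        return PRIORITY_MAP.getOrDefault(sourcePriority, "Medium");
    }

    public static void main(String[] args) {
        System.out.println(translateType("Requirement") + " / " + translatePriority("P1"));
        // Epic / Highest
    }
}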

Recognizing the inherent network that connects teams in an organization is the first step to enhancing product development. Once you’re able to identify the key areas where teams must share information, you’ll be able to pinpoint the areas in which communication breakdowns are occurring. By implementing key software integration patterns, you’ll increase connectivity between the nodes of your organizational network. And as connectivity increases, and with it each team’s ability to access key requirements, you’ll see greater success within your organization.



Co-author Rebecca Dobbin is a Product Analyst with Tasktop Technologies (@rebecca__dobbin)
Co-author Robin Calhoun is a Product Manager for Jama Software


December 13, 2017  12:04 AM

Is there a hidden threat embedded in the Management Engine of your Intel chip?

George Lawton Profile: George Lawton

A couple of years ago, Intel invited me to a press luncheon to talk about how great their new chips were. They had new chips that were faster and used less power, and they were selling like hot cakes. The food was good and the new machines were smaller and ran a few minutes longer on batteries than last year’s models. Almost in passing I heard one of their product managers describing a secret operating system buried on enterprise computers, called the Management Engine (ME). They called it a feature, and all I could see was a hidden threat.

They said it only ran on “enterprise computers,” and I remember sleeping a little better at night imagining that this little gremlin did not run inside my consumer laptop at the time. I just found out they have a new test for this hidden threat that can determine if your computer is infested with this incurable disease. Yep, I have it. You probably have it too, along with most of the cloud servers keeping trillions of dollars of enterprise apps secure.

They have also released a so-called cure for the symptoms, which is thus far only available from Lenovo. But it is not really a cure in the way an antibiotic eradicates an infection. It’s more like those $50,000-a-year drug cocktails that manage AIDS but leave their hosts at risk of transmitting it to others. The fundamental problem is that Intel has thus far not shared much about how this hidden threat works, or whether it can in fact be eradicated. They have just patched some of the vulnerabilities, which thus far are probably not a great danger to cloud apps, since someone must physically insert a USB drive to compromise them.

All systems are vulnerable

The fundamental problem in other words is not the news that someone found a vulnerability and patched it. The problem is that Intel has relied on a very flawed theory that something running on virtually every enterprise and cloud server out there is protected because no one outside of Intel knows how it works. This was the same theory that the utility industry relied upon until the US and Israel figured out how Stuxnet could be used to take out the Iranian nuclear program and perhaps an Iranian power plant. But once this attack was shared, all the power infrastructure in the world became vulnerable to Stuxnet’s progeny.

I am sure Intel’s greatest minds did a great job of identifying and mitigating every vulnerability they could dream up at the time. So did the folks who developed SSL, and none of the craftiest minds in the security industry recognized that hidden threat until after the code had been in the public domain for two years.

One of the key developments over the last couple of years has been a move toward DevSecOps, which assumes that all code has vulnerabilities; it’s just that no one has figured out how to exploit them at the time of deployment. Therefore, a mechanism must be in place to quickly and automatically find and update these systems when a new patch is required. DevSecOps breaks down when it relies on third parties like Lenovo, Dell, and HP to tune the update to their particular configurations.
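To make that idea concrete, and purely as an illustration rather than Intel’s or any vendor’s actual tooling, a DevSecOps-style check could treat firmware like any other dependency: keep an inventory of hosts, compare each machine’s Management Engine firmware version against the patched baseline from the advisory, and flag anything older for an automated update. The version numbers and inventory shape below are invented for the sketch.

import java.util.List;

// Hypothetical firmware audit: flag hosts whose ME firmware is older than a patched baseline.
// Versions and host names are invented; a real check would take them from the vendor advisory
// and from an asset inventory or detection tool.
public class FirmwareAudit {

    record Host(String name, int[] meVersion) {}

    private static final int[] PATCHED_BASELINE = {11, 8, 50}; // assumed baseline, for illustration

    static boolean needsPatch(int[] actual) {
        for (int i = 0; i < PATCHED_BASELINE.length; i++) {
            if (actual[i] != PATCHED_BASELINE[i]) {
                return actual[i] < PATCHED_BASELINE[i];
            }
        }
        return false; // equal to the baseline
    }

    public static void main(String[] args) {
        List<Host> inventory = List.of(
                new Host("app-server-01", new int[]{11, 6, 27}),
                new Host("app-server-02", new int[]{11, 8, 55}));
        for (Host host : inventory) {
            System.out.println(host.name() + (needsPatch(host.meVersion())
                    ? " -> schedule firmware update" : " -> up to date"));
        }
    }
}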

It’s not clear how bad this whole episode will end up being for Intel. Thus far, they have done a pretty good PR job of suggesting that these attacks requiring physical access are not a big deal. This whole thing might blow over by the time they release a new series of chips that leave the little demon out.

The keys to the hidden threat

But then again, the final impact of Intel’s foray into security by obscurity will have to get past the test of the NSA and Joe. The NSA, because it seems plausible that Intel decided it was important to share such details with the agency to protect American cyber security. We all know that the NSA has the best resources and commitment when it comes to protecting these secrets from foreign states, angry contractors, and WikiLeaks, so they obviously will never let the secret get out.

No, the real threat is probably someone like Joe. ME runs in a kind of always-on mode that allows it to communicate on a network even when the power is off, as long as the computer is plugged in. It is protected by an encryption key. I would like to imagine that the only key to all the Intel computers in the world is locked inside a secret vault, with laser beams protecting it from Mission Impossible-style attacks.

It would not be surprising if the reality were much more mundane. It’s probably on a little security token that Joe took home one day to debug a few components of the ME server. Joe is probably well-meaning, but he made a copy of this key one day when management was pushing him to meet an unrealistic software delivery target. Joe’s a good guy and would never do anything deliberately to hurt the company, much less all Intel users around the world.

Unfortunately for the rest of us, Joe has been trading Bitcoins lately. No one will come looking for the key to all the Intel computers when they penetrate his workstation trying to steal his Bitcoin wallet. But some nefarious hacker may see this discovery as a divine omen of his destiny to create a business around penetrating the most sensitive cloud servers in the world by exploiting this hidden threat. And maybe, just maybe, if Joe happens to be reading this, he’ll have the foresight to delete the keys before it’s too late.

