Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer the interesting questions I actually want answered. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that in this new age of serverless computing and cloud and containers, and a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyways, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point where instead of getting a server or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company’s internal data center? That’s a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization is all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare metal deployments? Are people actually deploying to bare iron, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of bare metal in latency-sensitive environments or in dedicated server environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy, you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. What we tend to see with the typical instances that people use for Cassandra on Amazon, with or without us, is that they’ll move to the latest, greatest things that Amazon offers. I think the i3 class of Amazon instances is right now the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the current ones that run whether on bare metal or on most Amazon servers today, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes 2 times or 3 times faster even, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
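
To make that concrete, here is a minimal, hypothetical sketch (not Azul’s code) of the kind of plain Java loop an auto-vectorizing JIT compiler like Falcon can turn into wide vector instructions, assuming the iterations are independent of one another:

// Hypothetical example: a dependency-free loop like this is the kind of code
// an auto-vectorizing JIT can compile down to wide vector (e.g. AVX)
// instructions without any change to the source.
public class VectorizableLoop {

    static void scaleAndAdd(float[] dst, float[] a, float[] b, float scale) {
        for (int i = 0; i < dst.length; i++) {
            dst[i] = a[i] * scale + b[i]; // independent iterations: vectorizable
        }
    }

    public static void main(String[] args) {
        float[] a = new float[1_000_000];
        float[] b = new float[1_000_000];
        float[] dst = new float[a.length];
        java.util.Arrays.fill(a, 1.5f);
        java.util.Arrays.fill(b, 2.0f);
        scaleAndAdd(dst, a, b, 3.0f);
        System.out.println(dst[0]); // prints 6.5
    }
}

The source stays the same; whether the loop ends up as scalar instructions or as vector instructions is a decision the JIT compiler makes for the hardware it finds itself on.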

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do that stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM can run the exact same code, and when you put it on new hardware, it’s not just the better clock speed and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means that everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were already about as close to perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you’ve put into your JVM to improve JVM startup? And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. You probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise up to speed right when the JVM starts.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profiling and learning and only optimizing after it runs. And we do it with a technique that’s very simple to explain, though not that simple to implement: we basically save profiles from previous runs and start a new run assuming, or learning from, the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. And that allows us to run basically fast code from the first transaction or the tenth transaction, rather than only from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.

To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

August 14, 2017  8:17 PM

Implementing a custom user registry to consolidate LDAP servers and active directories?

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Should you implement a custom user registry to consolidate access to your various LDAP servers in order to simplify security tasks such as authentication and group association? The answer to that question is a resounding ‘no.’

The simple beauty of the custom user registry

On the surface, implementing a custom user registry is simple. While it differs slightly from one application server to the next, to implement a custom user registry, you typically only have to write a Java class or two that provides an implementation for half a dozen or so methods that do things like validate a password, or indicate whether a user is a part of a given group. It’s easy peasy.

For example, to create a custom user registry for WebSphere, here is the IBM WebSphere UserRegistry interface that needs to be implemented, along with the 18 methods you need to code:

com.ibm.websphere.security.UserRegistry

1. initialize(java.util.Properties)
2. checkPassword(String,String)
3. mapCertificate(X509Certificate[])
4. getRealm
5. getUsers(String,int)
6. getUserDisplayName(String)
7. getUniqueUserId(String)
8. getUserSecurityName(String)
9. isValidUser(String)
10. getGroups(String,int)
11. getGroupDisplayName(String)
12. getUniqueGroupId(String)
13. getUniqueGroupIds(String)
14. getGroupSecurityName(String)
15. isValidGroup(String)
16. getGroupsForUser(String)
17. getUsersForGroup(String,int)
18. createCredential(String)

Now remember, the goal here is not to invent a new system for storing users. When implementing a custom user registry, there is typically an underlying data store to which the application connects. So perhaps the purpose of the custom user registry is to combine access to an LDAP server and a database system that holds user information. Or perhaps there are three different LDAP servers that need consolidated access. Each of those systems will already have mechanisms to update a password or check whether a user is part of a given group. Code for a custom user registry simply taps into the APIs of those underlying systems. There’s no re-inventing the wheel with a custom user registry. Instead, you just leverage the wheels that the underlying user repositories already provide.
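
To illustrate that delegation idea, here is a minimal, hypothetical sketch of a password check that simply attempts an LDAP bind against each consolidated directory in turn. It is not the WebSphere UserRegistry implementation itself, and the hostnames and DN pattern are illustrative placeholders:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.InitialDirContext;

// Hypothetical sketch: password validation is delegated to the backing LDAP
// directories by attempting a simple bind against each one in turn.
public class ConsolidatedRegistrySketch {

    private static final String[] LDAP_URLS = {
        "ldap://ldap-us.example.com:389",
        "ldap://ldap-emea.example.com:389"
    };

    // Returns true if any backing directory accepts the credentials.
    public boolean checkPassword(String userId, String password) {
        for (String url : LDAP_URLS) {
            String dn = "uid=" + userId + ",ou=people,dc=example,dc=com";
            if (bindSucceeds(url, dn, password)) {
                return true;
            }
        }
        return false;
    }

    private boolean bindSucceeds(String url, String dn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, dn);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            new InitialDirContext(env).close(); // a successful bind validates the password
            return true;
        } catch (NamingException e) {
            return false; // wrong credentials or directory unreachable
        }
    }
}

A real registry would wire the same delegation approach into each of the 18 interface methods listed above, mapping group lookups and user searches onto the corresponding LDAP queries.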

So it all sounds simple enough, doesn’t it? Well, it’s not. And there are several reasons why.

Ongoing connectivity concerns

First of all, just connecting to various disparate systems can be a pain. There’s the up-front headache of getting credentials and bypassing, or at least authenticating through, the existing firewalls and security systems that are already in place. Just getting initial connectivity to disparate user registry systems can be a pain, let alone maintaining connectivity as SSL certificates expire or changes are made in the network topology. Maintaining connectivity is both an up-front and a long-term pain.

LDAP server optimization

And then there’s the job of optimization. Authenticating against a single user repository is time-consuming enough, especially at peak login times. Now imagine there were three or four underlying systems against which user checks were daisy-chained through if...then...else statements. It’d be a long enough lag to trigger a user revolt. So even after achieving the consolidation of different LDAP servers and databases, there is time that needs to be invested in figuring out how to optimize access. Sometimes having a look-aside NoSQL database, where user IDs are mapped to the system in which they are registered, can speed things up, although a failed login would likely still require querying each subsystem. Performance optimization becomes an important part of building the user registry, as every user notices when logging into the system takes an extra second or two.
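
As a sketch of that look-aside idea, the snippet below remembers which backing registry a user last authenticated against so the next login can skip the daisy chain. An in-memory map stands in for the NoSQL store, and the class and method names are hypothetical:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical look-aside cache: map a user ID to the registry that last
// authenticated it, so the next login goes straight to the right place.
public class RegistryLookAsideCache {

    private final Map<String, String> userToRegistry = new ConcurrentHashMap<>();

    // Returns the registry name to try first, or null on a cache miss.
    public String preferredRegistry(String userId) {
        return userToRegistry.get(userId);
    }

    // Record a successful login so future lookups avoid the daisy chain.
    public void recordHit(String userId, String registryName) {
        userToRegistry.put(userId, registryName);
    }
}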

Data quality issues

And if there are separate subsystems, ensuring data quality becomes a top priority as well. For example, if the same username, such as cmckenzie, exists in two subsystems, which one is the record of truth? Data integrity problems can cause bizarre behavior that is difficult to troubleshoot. For example, cmckenzie might be able to log in during low-usage times, but not during peak usage times, because during peak usage times, overflow requests get routed to a different subsystem. And even though the problems may stem from data quality issues in the LDAP server subsystems, it’s the developers maintaining the custom user registry code who will be expected to identify and troubleshoot the problem.

LDAP failure and user registry redundancy

Failover and redundancy are another important piece of the puzzle. It’s good to keep in mind that if the custom user registry fails, nobody can log into anything from anywhere. That’s a massive amount of responsibility for anyone developing software to shoulder. Testing how the code behaves when a given user registry is down, and figuring out how to make the custom user registry resilient when weird corner cases happen, is pivotally important when access to everything is on the line.

Ownership of the custom user registry

From a management standpoint, a custom user registry is a stressful piece of technology to own. Any time the login process is slow, or problems occur after a user logs into the system, the first place fingers will point is at the custom user registry piece. When login, authentication, authorization or registration problems occur, the owner of the custom user registry typically first has to prove that it is not their piece that is the problem. And of course, there certainly are times when the custom user registry component is to blame. Perhaps a certificate has been updated on a server and nothing has been synchronized with the registry, or perhaps someone has updated a column in the home-grown user registry database, or maybe an update was made to the Active Directory. The custom user registry piece depends on the stability of the underlying infrastructure to which it connects, and that is a difficult contract to guarantee at the best of times.

So yes, on the surface, a custom user registry seems like a fairly easy piece of software to implement, but it is fraught with danger and hardship at every turn, so it is never recommended. A better option is to invest time into consolidating all user registries into a single, high-performance LDAP server or Active Directory, and allow the authentication piece of your Oracle or WebSphere application server to connect to that. For small to medium-sized enterprises, that is always the preferred option. That way, the software and hardware that hosts the user records can be optimized and tuned for redundancy and failover, rather than such problems being handled in code that has been written in house. It also allows you to point your finger at the LDAP server or Active Directory vendor, rather than at the in-house development team, when things go wrong.

Inevitably, there will be times when a custom user registry is required, and it has to be written, despite all of the given reservations. If that’s the case, I wish you the best of luck, and I hope your problems are few. But if it can be avoided, the right choice is to avoid, at all costs, the need to implement a custom user registry of your own.


August 14, 2017  3:43 PM

Gender and ethnic parity is not equivalent to workplace diversity

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Former Google employee James Damore’s recently leaked memo about his old employer’s employment practices has brought the discussion about IT hiring practices to the fore. After reading a vast number of articles written on the topic, it would appear that many believe the terms workplace diversity and gender representation are interchangeable. They of course are not, and conflating them is not only intellectually dishonest, but incendiarily disingenuous to the point that doing so actually hinders the progression of the important goal of balanced gender and ethnic representation in the workforce.

How do you define diversity?

I ran for president of my University Student Council twenty-five years ago. One of the other candidates was an enlightened progressive whose main platform plank was to promote and improve diversity in all areas of the university. It was a message that was well received in the social sciences, law and humanities buildings, but it ran into a brick wall when it was trucked into engineering.

In compliance with all preconceived stereotypes, gender parity in the engineering department was a little lacking back then, but a few of those future train conductors were getting a bit tired of constantly being beaten with the ‘lack of diversity’ stick. A student stepped up to the microphone during question period and asked the candidate if she felt the engineering department lacked diversity. After the candidate stumbled in her effort to provide a diplomatic answer, the student followed up with something more rhetorical.

“The leader of the school’s Gay and Lesbian committee is an engineer. Our representative to the student council is from India. Three of the five students who are on full scholarships are second generation Chinese, and even my friends with paler complexions, who you believe lack diversity, are here on Visas from countries like Australia, Russia, Israel and eastern Europe. So how can you possibly stand there and tell me we are not diverse?” The student was mad, and he had every right to be.

The engineering faculty was indeed diverse in a variety of beautiful, even inspirational, ways. Gender parity was certainly lacking, and I can think of a few minority groups that were under-represented, but for someone to stand in front of that group of students and tell them they weren’t diverse was an undeserved and unmitigated insult.

Confronting intellectual dishonesty

Even twenty-five years later, that exchange still resonates with me. Not just because it was so enjoyable to see a social justice warrior be so thoroughly destroyed intellectually, but because the student wasn’t wrong. He had every right to stand up and object to the insults and the derision that were constantly being thrown at the faculty of which he was proud to be a part.

With a history of participating in medium-term consulting engagements, I can say that I have worked on an admirable number of projects in a wide array of cities. I can’t remember any engagement in which the project room looked like a scene out of the 1960s-set TV series Mad Men, where every programmer was a white male and every developer was a product of a privileged background. In fact, I was on a Toronto-based project a number of years ago where my nickname on a team of over thirty individuals was ‘the white guy.’

I’m proud of all of those projects I’ve worked on over the years, and I’ve made friends with people who come from a more diverse set of backgrounds than I could possibly have ever imagined. And the friends I’ve made include a number of incredible female programmers, although I will admit that all of those project teams on which I worked lacked in terms of gender parity. But it would be an insult to me and to everyone I’ve worked with to tell me that the teams I’ve worked on weren’t made up of a diverse set of people, because they were. I have seen great diversity in the workforce. I have not seen great gender parity. There is a difference.

There is certainly an issue in the technology field in terms of an under-representation of both women and certain visible minorities. But gender and ethnic parity is not the same thing as workplace diversity. Arguing that they are is disingenuous, and perpetuating this type of insulting intellectual dishonesty will do more to hinder the goal of achieving balanced gender and ethnic representation in the workplace than it ever will to enhance it.


August 8, 2017  7:15 PM

Big data is helping in wildlife conservation

shwati12 Profile: shwati12
Uncategorized

Objective

Big data is booming these days, and it is helping in every field. Let us look at a few projects that have used big data and machine learning as key components of wildlife conservation.

2. Big Data in Wildlife Conservation

In this section, various projects are discussed that show how big data aids wildlife conservation.

2.1. The Great Elephant Census

In Africa alone, more than 12,000 elephants have been killed each year since 2006, and if this goes on, the day is not far off when there will not be any elephants left on this planet. Protecting the ecosystem is vital not only to wildlife but also to the communities around them, and big data is helping with exactly that. In 2014, a survey called The Great Elephant Census was launched by Microsoft co-founder Paul Allen to achieve a greater understanding of elephant numbers in Africa. Ninety researchers traversed over 285,000 miles of the African continent, across 21 countries, to conduct this research.

One of the largest raw data sets was created in this survey. The survey showed that only 352,271 African elephants remain in the 18 countries surveyed, a decline of 30% in seven years. This highlighted the need for ongoing monitoring to ensure better response times to emergency situations. Big data is having a huge impact on conservation efforts, and that is going to help protect the elephant population of Africa.

2.2. eBird

This project was launched in 2002. It is an app that helps users record bird sightings as they happen and input that data into the app. The app was created with the goal of building usable big data sets that could be of value to professional and recreational bird watchers. These data sets are then shared with professionals such as teachers, land managers, ornithologists, biologists and conservation workers, who have used this data to create BirdCast, a regional migration forecast giving real-time predictions of bird migration for the first time ever. BirdCast uses machine learning to predict the migration and roosting patterns of different species of birds. This provides more accurate intelligence for land planning and management, and allows necessary preparations to be made in areas prone to large roosting bird gatherings.


August 8, 2017  7:13 PM

C# vs. Java: 5 Irreplaceable C# features we’d kill to have in Java

OverOps Profile: OverOps
Uncategorized
The perfect programming language doesn’t exist. I hope we can agree on that, if nothing else. New languages are often developed in response to the shortcomings of another, and each is inevitably stronger in some ways and weaker in others.
 
C# and Java both stemmed from C/C++, and they have a lot in common beyond both being object-oriented. In addition to some structural similarities between Java’s JVM and C#’s .NET CLR, each has advanced on its own path, with its respective development team focused on a different vision of what the language should be.
 
We don’t want to get lost in the argument of which language is better than the other; we just want to outline some of the features that developers in C# are using that we don’t have available to us in Java.
 


August 8, 2017  7:13 PM

The Top 5 Disadvantages of Not Implementing an Exception Inbox Zero Policy

OverOps Profile: OverOps
Uncategorized
Inbox zero is a concept that has been around for a while, one that tries to help you keep a clear email inbox and a focused mind. Now imagine: what if you could take this concept and apply it to your exception handling process? If this question made you raise your eyebrow, keep on reading.
 
In the following post we’ll try and tackle the inbox zero concept from a new perspective, and see how it can be incorporated into the world of production monitoring. Let’s go clear some errors.


August 8, 2017  6:58 PM

Are you going to JavaOne 2017? Book your San Francisco hotel now.

cameronmcnz Cameron McKenzie Profile: cameronmcnz

It’s likely not advice a veteran of JavaOne conferences needs to hear, but if you’ve got your ticket for JavaOne 2017, and you’re attending this OracleWorld-affiliated event for the first time, I’m telling you not to do any last-minute searching for a San Francisco hotel.

San Francisco is a city completely ill-equipped to handle an event of OracleWorld and JavaOne 2017’s magnitude. In fact, San Francisco is so small, it’s ill-equipped to handle events of any magnitude. The two-million-square-foot Moscone Center, named after the San Francisco mayor whose assassination was portrayed in the Sean Penn movie Milk, is a fine conference venue, but there are simply not enough hotels to accommodate all of the guests and speakers who will be in attendance.

Cutting the stay short

Many attendees would love to spend the entire week in San Francisco, but the per-night hotel cost just becomes far too prohibitive. The conference is still almost two months away, yet discounted three- and four-star hotels available through the JavaOne 2017 website are already pricing at between $285 and $585 a night. And I’d be happy to bet that those $285-a-night hotels won’t be available by the time September rolls around. In fact, about a month before the conference, Oracle usually takes down the option to book a hotel through its website, as all of the available rooms have been booked.

As a long-time consultant who worked largely in the US northeast, I rarely booked accommodations more than a month out, and I would typically search for a hotel two weeks before a gig started. The first time I attended JavaOne, I applied the same strategy and suffered greatly for it. I found very expensive accommodation at a low-budget hotel on Lombard Street. The $350-a-night motel didn’t have any air conditioning, and it was an unusually hot week in the city, making the stay particularly uncomfortable.


Never too close for comfort

Furthermore, the location was well beyond walking distance to the event, but given the complete lack of cabs in the city, I had to make the sweaty and uncomfortable hike on foot. Uber has helped address the transportation problem in the city, but at an event like JavaOne, you want to be close to the shenanigans. It’s nice to be able to get to the opening events without having to get up ridiculously early, and it’s also nice to be able to rest in your hotel in the late afternoon before walking back and attending some of the evening events. Cabbing back and forth to a hotel tends to be both expensive and unnecessarily inconvenient.

So this is my final word of warning to people attending OracleWorld or JavaOne 2017. Make sure you’ve got your hotel booked. Do it right now if you haven’t done it already. Otherwise you’ll be spending way too much money on accommodations, and the only hotels available will be 30 miles away in Burlingame, or even worse, in Oakland. And trust me, you don’t want to be staying there.


July 20, 2017  5:49 PM

The top 100 Java libraries in 2017 – Based on 259,885 source files

OverOps Profile: OverOps
Uncategorized
It feels like only yesterday that we were scraping data from GitHub to discover the top Java libraries of 2016, and all of a sudden another year has passed. This year, we’re kicking this data crunch up a notch and introducing Google BigQuery into the mix to retrieve the most accurate results.
For this year’s data crunch, we’ve changed the methodology a bit, thanks to Google BigQuery. First, we pulled the top 1,000 Java repositories from GitHub by stars. Now that we had the most popular Java projects on GitHub, we filtered out Android and focused only on the 477 pure Java projects.
After filtering the projects, we counted the unique imports within each of them and summed it all together. A deeper walkthrough of the research process is available at the bottom of this post.
Without further ado, it’s time to see the winners and bloomers among 2017’s most popular Java libraries. Who will sit on the Java throne?


July 20, 2017  4:39 PM

How women in IT influence today’s workforce and tomorrow’s technology

Daisy.McCarty Profile: Daisy.McCarty

What would the tech world look like without leaders, visionaries, and entrepreneurs like Satya Nadella, Jony Ive, or Elon Musk? What about the contributions of the other seven men who complete the list of “The 10 Most Influential Leaders in Tech Right Now,” according to Juniper Research? Would the world be a poorer place without these powerful, intelligent, and insightful men bringing their minds to bear on the problems facing the world today? I think so.

Now imagine a world in which at least half of the names on that list were female. That’s a day that many women in the technology sector look forward to with anticipation. In my interviews with women across the tech spectrum, I certainly heard stories of obstacles and discouragement. But the overwhelming outlook is positive. It’s only a matter of time until the full impact of women in tech begins to be felt at all levels, adding depth and richness to a sector that is geared for an incredibly exciting decade.

I asked my interviewees to tell me about women they admire in their industry, what they believe women have to offer the tech world, and what the future will look like as our influence grows. Here’s what I found out. First, women aren’t tearing one another down. They are definitely cheering each other on.

Who do women look up to in tech?

It’s great to have role models at top levels of leadership in the technology field. Meg Whitman was a name that came up more than once in conversation. Julie Hamrick, Founder and COO of Ignite Sales, pointed to Meg’s early success at the helm of the world’s leading auction site. “For me, it’s the fact that she grew eBay to become a household name.” But it’s not just the wins that people find compelling about Whitman. It’s her attitude about adversity and challenges. CeCe Morken, EVP and General Manager of ProConnect at Intuit, also spoke about her admiration for the current CEO of Hewlett Packard Enterprise. “She so embraces learning from failure. One of the things she told us is that she now celebrates failure as much as she celebrates success in her all-hands meetings. These are just fast failures, experiments they learn from.”

But most of the women I spoke with didn’t choose a big name as a “shero” they look up to the most. They told me story after story of women they know personally who have inspired them. Charlene Schwindt, a software business unit manager at Hilti, put it simply. “I most admire some of the women I see and work with every day. When they complete a successful project, have big wins, get major status or an executive position on a board, that’s a huge achievement.”

Julie mentioned Valerie Freeman, CEO at Imprimis, as a role model. “She is one of those people who is doing well in business and doing good in the community.” Mary McNeely, Oracle Database expert and owner of McNeely Technology Solutions, spoke highly of peer advisory facilitator and talent development consultant Tanis Cornell as someone who showed that hard work and self-belief really can pay off. “She didn’t start out in tech, but she moved to technology sales, pulled herself up by the bootstraps, and overcame barriers to succeed.”

Jen Voecks is the founder and CEO of the tech startup Praulia, an online service that matches brides with wedding vendors. For her, the most inspiring thing to see is other women creating something new in the industry. She pointed to Molly Cain, former Executive Director of Tech Wildcatters, as an inspiration. “She built a lot of things herself.” Today, Cain is the acting Deputy Director of Digital Innovation and Solutions/Venture Relations at the DHS. Quite a remarkable achievement and certainly one that will make her a role model for many more women throughout her career.

How do women change the game within tech organizations?

There’s simply no substitute for having more perspectives for both innovation and problem solving. Charlene has seen the benefit of a diverse team in determining how to develop the projects under her direction. “What women bring to the table can be different. Often, consideration of how people work with technology is not really coming into play as it should during the development process. Even if you have people talking to the customer about what they want, everything is based on interpretation. With a cross gender team, you get a different result by having multiple views on the same thing.”

This is something Julie found true as well. “I’ve noticed when we have women on our teams we have better follow-through and more creativity. They are good at filling in the gaps. Amidst all the ones and zeros, women see more of the gray, more depth.” That’s not just good for short-term improvement. It’s also essential for long-term viability. Tanis Cornell pointed out that economic and financial experts are catching on to the fact that women are good for business. “It’s been shown in study after study now that companies with a better gender balance on the management team perform better financially. Merrill Lynch and other firms are starting to pay attention. They are investing in and recommending companies with more balanced leadership at the top. It’s simply a good business decision.”

How will women influence the future of technology?

Women are bringing their power to bear in leadership, innovation, entrepreneurship, and more. The days when tech was developed through a primarily male lens are fading fast. That shift is bound to have an impact on what happens in the next five to ten years. Many women I spoke with mentioned the subtle but potent effect the female touch may have on the direction of tech. According to Julie, “I think things will become more friendly and useful. They will have more care to them, even in technology. Tech is more utilized by everyone these days. Going forward, there will be even more self-service, but the experience will have a more satisfying, human feel.” Mary echoed this sentiment, in terms of what it will take to succeed in the tech field and the world in general. “As the world becomes more roboticized, there’s also going to be a counter trend. Good intuition and people skills will become even more critical.”

CeCe Morken offered this advice for the current and coming generations of female innovators. “Look ahead and be aware of what’s coming. It’s changing faster than ever before and you need to find a way to grasp it.” Morken put her money where her mouth is recently by purchasing the latest virtual reality tech for employees to experience at work. Intuit is not looking to launch any products using that technology right now, but CeCe wants her people to be familiar with what’s available so they aren’t playing catch up later as innovation continues to accelerate.

Jen highlighted the importance of tech for changing the future of women as well. “Tech gives you a new platform. It allows you to reach a broader audience. As an inventor or business owner, you have the opportunity to grow faster and meet partners.” In essence, tech is democratizing the entrepreneurial space even more than before, ensuring that women can advance on their own terms even if the corporate world continues to change more slowly.

Women in tech must keep reaching for their dreams

Data scientist Dr. Meltem Ballan has faced her share of challenges in building a career in tech. But she offered encouragement to other women in their quest to rise to the top. “It’s not insurmountable. There is no ceiling. Just keep on going out there and doing it. Learn to network well, and have the courage to take that next step.” Mary McNeely agreed that the future is there for the taking. “What we get next is whatever we want. We are educated and empowered. Our star is rising.”


July 14, 2017  7:40 PM

The importance of developing Virtual Reality applications

George Lawton Profile: George Lawton

In some respects, Virtual Reality (VR) and Augmented Reality (AR) applications have been around for a couple of decades. But they never really went mainstream because of the cost and limits of existing technology. However, this is starting to change with the recent release of new VR headsets and AR glasses, and the development tools and ecosystems to support them.

At the O’Reilly Design Conference in San Francisco, Jody Medich, director of design for Singularity University Labs, argued that VR and AR are already being developed for mainstream applications and will have a significant impact on web application development soon. She said, “Developers and designers need to think about how to enable their organizations to use these when they come.” Games are proving to be an early adopter, but more significantly she sees the use of VR in improving travel experiences, education, sales, communication, and office productivity.

Understand the landscape

The Oculus Rift and HTC Vive are getting the most press, owing to their high-performance VR rendering in a modestly priced package. Other efforts, like Google Cardboard, offer a more cost-efficient option that can bring virtual worlds to high-end smartphones. These are not just being used for games. One surgeon, Dr. Richard Burke at Nicklaus Children’s Hospital in Miami, was able to use his Google Cardboard to visualize and quickly execute a complex heart surgery that would not otherwise have been possible.

Medich argues that VR is a subset of augmented reality in which the view of the outside world is occluded. High-end AR adds a layer of new information on top of the existing world, which is a little more challenging to line up. Early versions of AR involved simply overlaying information from the real world onto real-time maps using GPS. She said, “The reason we don’t think of it that way is because the developer burdens the user with connecting the dots. As a result, the user has to hold all of the function in their brains to make the transition.”

This could be as simple as Uber showing a user nearby cars, or as complex as the rich gaming environment created for Pokémon Go. New interfaces like Microsoft’s HoloLens and Magic Leap are just around the corner, while the Epson Moverio is already being used for high-end industrial applications.

Meanwhile, Google’s Project Tango intends to embed better AR capabilities into high-end smartphones like the Lenovo Phab 2 Pro. It’s already being used by Wayfair to allow consumers to measure their rooms and virtually place furniture before purchasing. Medich said this improves customer satisfaction and reduces returns.

Improving education

VR and AR hold a lot of promise for improving educational experiences of all kinds. Stanford has been doing research with STRIVR to allow football players to practice game plays and improve their muscle memory. Highly specialized doctors are finding that VR makes it easier to bring a much wider audience of students into their operating theaters than is possible in real life. Meanwhile, students in Africa are using Google Cardboard to visit places that their schools didn’t otherwise have the budget for.

Airbus is training technicians on how to perform complicated repairs on expensive equipment in an environment where it is cheap and safe to fail, until they become experts. This has led to a huge improvement in productivity and cost.

It’s not just for teaching students, either. Amnesty International created a visceral experience of the bombings in Syria that was shown to people on the streets of London. This raised the campaign’s contribution rate by 20% in one afternoon.

Reducing the user burden

The real promise of VR and AR lies in reducing the burden on users of connecting the dots between real and virtual worlds. With most GPS applications, users have to do a lot of context switching between applications, or between an application and the physical world. There is considerable work on building repair applications that guide technicians through complex repairs without having to look away at a physical manual.

Microsoft and Autodesk are working on developing a workflow for the HoloLens that reduces the translation required between property owners, architects, builders, and inspectors. In the traditional workflow, architects must create 2D diagrams that can confuse developers. After a building is approved, builders must translate these diagrams into an actual building. Medich said, “A lot gets lost in the translation. If they build it they can inspect to see if something lines up or not, and then later down the road they have an easier way to fix it.”

AR could also radically transform office apps. Medich noted that the average user can spend hours a day switching contexts with the traditional keyboard and mouse user interface. A new generation of VR enabled office apps could interpret the context of what a user is doing to reduce the number of clicks and keyboard shortcuts required to do office work. She said, “These new technologies do a lot of translation and add something for humans.”

VR and AR are still in their early stages, and now is the time for developers to learn more about the technologies and their practical implementation. Medich said, “It is not too late to get started. We still have a couple of years until saturation. The next couple of years will be a little disappointing. We are trained to think in linear ways where things change a little gradually. But especially around technology we see a doubling every two years. At first this is disappointing because these changes don’t match up with our linear experience. But when the technology reaches an inflection point then we will see a complete explosion.”


July 11, 2017  8:45 PM

DevOps development is a software developer’s burden

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The committed use of a continuous delivery pipeline inevitably puts a far greater onus on the software developer than systems that used more traditional methods of moving code into production.

A DevOps approach to software development means software developers must be much more diligent in terms of the unit tests they write, and the test code coverage they provide. With continuous integration and continuous delivery, testing can no longer be an aspect of software development to which the developer simply pays lip service. A project folder containing a set of hastily written unit tests that serve little more purpose than to pass a software development audit is no longer good enough.

Due diligence and the DevOps developer

Of course, being diligent in the tests one writes is an expectation, so developers can’t complain about the rigorous requirement to do their job properly. But quick fixes and fast patches that skip the test phase are a thing of the past when CI and CD are used, as anything programmatically untested that goes into production and fails points a finger directly back at the developer who didn’t thoroughly test their software. Unit tests must be well thought out, methodical and extensive.

Successful DevOps developers can take solace in the fact that the burden of assembling a continuous delivery pipeline is not one that falls entirely upon their shoulders. The tooling that has developed within the continuous delivery space is impressive. The CI tools themselves can hook into a variety of other tools that help move code through the software development lifecycle. The CI server can read Maven POM files in order to download required libraries and invoke Gradle scripts to perform builds. More importantly, the continuous delivery pipeline will hook into various verification tools that will run test suites. Jenkins and Hudson dominate the continuous integration server space, although there are many competitors, including Concourse CI, that are upping the ante in terms of scalability and the simplicity with which CI and CD pipelines are defined.

DevOps tooling advances

Open source tools such as JUnit and Mockito can be called upon to run unit tests and mocks. Static code analyzers such as SonarQube or HP’s Fortify will inspect the code and flag and rate the severity of potential bugs, vulnerabilities and general code smell. DBUnit and H2 are often called upon to stub out a database and allow integration tests to take place within an isolated environment. LoadRunner or Apache JMeter can be used to ensure the new release can handle peak loads, and results from a performance scanning tool such as XRebel can ensure that there aren’t any outstanding performance issues that need to be addressed. When every test cycle provides the continuous delivery pipeline with a passing grade, new code gets moved into production.
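
As a small illustration of the kind of methodical unit test such a pipeline depends on, here is a hypothetical JUnit 4 test that uses Mockito to stub out a collaborator. CheckoutService and PriceLookup are illustrative names, not a real API:

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

// Hypothetical example of a focused unit test: the collaborator is mocked so
// the business logic can be verified in isolation, the way a CI pipeline expects.
public class CheckoutServiceTest {

    interface PriceLookup {
        double priceOf(String sku);
    }

    static class CheckoutService {
        private final PriceLookup prices;
        CheckoutService(PriceLookup prices) { this.prices = prices; }
        double total(String... skus) {
            double sum = 0;
            for (String sku : skus) {
                sum += prices.priceOf(sku);
            }
            return sum;
        }
    }

    @Test
    public void totalsThePriceOfEveryItem() {
        PriceLookup prices = mock(PriceLookup.class);
        when(prices.priceOf("book")).thenReturn(10.0);
        when(prices.priceOf("pen")).thenReturn(2.5);

        CheckoutService checkout = new CheckoutService(prices);

        assertEquals(12.5, checkout.total("book", "pen"), 0.0001);
    }
}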

But by removing all of the manual checks, and expecting every red flag to be triggered by a test that has been coded into the system, organizations embracing a DevOps-based approach to software development are placing a much greater onus on the shoulders of their software developers, making them responsible not only for the code they write, but also for all of the checkpoints that exist to ensure that only bug-free code is put into production. It’s a heavy burden to bear, not to mention a new one for developers who have never worked in a truly DevOps-based environment.


