The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.
Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.
Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.
Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler, code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.
But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that, in this new age of serverless computing, cloud and containers, and a world where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyways, Gil wasn’t having any of it.
Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.
Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?
Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today runs as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.
But in many of those cases, that’s just a starting point, where instead of getting a server or running your own virtualized environment, you just do it on Amazon.
The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong they’ll turn them off. But that’s no longer some dramatic thing to do. Doing it in a company-internal data center? It’s a very different thing from a planning perspective.
But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.
Cameron McKenzie: Now, cloud computing and virtualization is all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron deployments or bare metal deployments or people actually deploying to bare metal and if so, when are they doing it?
Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics and customers that run their own stuff, to banks obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of the bare metal in latency-sensitive environments or in dedicated super environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.
But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. For example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy; you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. People running Cassandra on Amazon, with or without us, tend to move to the latest, greatest instances Amazon offers. I think the i3 class of Amazon instances right now is the most popular for Cassandra.
Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?
Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.
Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is built with it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.
It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.
To give you a concrete example of this, the latest CPUs from Intel, the current ones that power most Amazon servers today, whether bare metal or virtualized, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.
With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes even 2 or 3 times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.
Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do the stuff. And we just need to bring the code to the right form and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. So it’s a great demonstration of how a runtime like a JVM could run the exact same code and when you put it on a new hardware, it’s not just the better clock speed and not just slightly faster, it can actually use the instructions to literally run the code better, and you don’t have to change anything to do it.
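To make the idea concrete, here is a sketch (not from Azul; the class, method and values are invented for illustration) of the kind of plain, element-wise Java loop that an auto-vectorizing JIT such as Falcon can compile down to SIMD instructions on capable hardware, with no source changes required:

```java
import java.util.Arrays;

public class VectorizableLoop {
    // A simple element-wise loop: independent iterations over arrays
    // are the classic candidate for automatic vectorization, where the
    // JIT emits wide vector instructions (e.g. AVX) instead of
    // processing one float at a time.
    static void add(float[] a, float[] b, float[] out) {
        for (int i = 0; i < out.length; i++) {
            out[i] = a[i] + b[i];
        }
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f, 4f};
        float[] b = {5f, 6f, 7f, 8f};
        float[] out = new float[4];
        add(a, b, out);
        System.out.println(Arrays.toString(out)); // [6.0, 8.0, 10.0, 12.0]
    }
}
```

The semantics are identical on any JVM; the point Tene is making is that on newer CPUs a vectorizing compiler can execute loops of this shape several elements per instruction.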
Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?
Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.
With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.
When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.
We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.
Cameron McKenzie: Now, one of the complaints people that use JVMs often have is the startup time. So I was wondering if there’s anything that was new in terms of the technologies you put into your JVM to improve JVM startup? And for that matter, I was wondering what you’re thinking about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.
Gil Tene: So those are two separate questions. And you probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, the JVM gets up to speed much faster right when it starts.
The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast, whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or there are the extreme examples of trading, where under market-open conditions you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.
In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile, learn and only optimize after it runs. And we do it with a technique that is very simple to explain, though not that simple to implement: we save profiles from previous runs, and we start a new run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. That allows us to run fast code from the first or the tenth transaction, rather than from the ten-thousandth. That’s a feature in Zing we’re very proud of.
To the other part of your question about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on the code enclosed in modules and try to optimize some of it for startup.
Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie
Digital transformation and cloud offer opportunities, not redundancy, for CIOs and software developers, said California Department of Conservation CIO Catherine Kendall. “It’s a great time to be a developer,” she said during an Oracle OpenWorld customer panel on digital transformation projects this week. Thanks to cloud services, she said, it’s a good time to be a CIO, too.
“IT is an untapped resource of business knowledge,” said Kendall. “We sometimes know the business rules, regulations and laws better than anybody because we’re partnering in it.”
Kendall’s positive attitude about IT pros’ changing roles was a highlight of the informal press room discussion. Here’s how she sees IT playing an important role in digital transformation projects.
Some IT pros see digital transformation as a top-down project and a possible job security threat. So, said Kendall, it’s important to convey to developers their importance to the business and the opportunities coming from digitization.
“Digital is about the business and IT together as one, regardless of the methodology, whether you’re using Agile, waterfall or [other] methodologies,” Kendall said. “IT needs to be part of the business plan, not just the secondary budget.”
Holistic teamwork is critical in digital transformation, because it is all about the end user, about the customer. “We need to get the developers out [front] more, because they offer a wealth of information,” said Kendall, a former programmer herself. Developers are the technology enablers who help businesses reach customers in ways that are usable and frictionless.
Developers know more about the business than other stakeholders at the project table, Kendall said in our brief conversation after the session. They have gathered customer requirements, studied user experience and user interface design and made sure that technologies don’t clash with governance and compliance issues.
Making IT a fundamental player at the digital transformation project table is a huge cultural change. “IT has always been the one-off,” Kendall said. “The IT guys are on a lower floor. They don’t come out a lot.”
Developers need to step up to their new responsibilities in business initiatives. “We [have to] come further across the table to learn the business,” said Kendall. “I think the developer now has to be more of a hybrid, instead of code-slinging.” She wants her IT team to feel comfortable at the table to speak up and suggest other approaches.
Developers aren’t the only ones who now have a seat at the digital transformation project table. Cloud computing, said Kendall, “is giving CIOs a seat at the table from a business perspective.” CIOs will have to reinvent themselves, using cloud’s capabilities as an initial piece in transforming business.
When cloud first came out, Kendall thought it was just a rebranding of on-demand. She quickly discovered its value. For example, just the storage capacity offered by cloud services is a game-changer for CIOs. “We can pull data from the USGS, weather services, economic data…and we don’t have to worry about infrastructure investment. We are now in a position where we are not constrained,” she said. “I don’t have to sit there and say, ‘No. No, we can’t do that.’ Now it’s about imagining possibilities, imagining what we can do.”
For all players in IT, digital transformation is a means to focus on delivering the best results to customers. “That’s what has transformed us,” she concluded. “That’s the opportunity.”
Editor’s note: Catherine Kendall was also a co-presenter in a general session, “Data and Analytics Power Your Success,” at Oracle OpenWorld 2017.
“In a digital transformation project, what are pain points commonly felt by the development group?” That’s the question I put to Tata Consultancy Services’ (TCS) Global Head Sunder Singh prior to his Oracle OpenWorld/JavaOne session, “Building Smarter Enterprises.” Here’s his response, in which he identifies and discusses five digital transformation challenges.
1. “Simplification comes at a price,” said Singh. Developers and development teams who do not have the ability to unlearn and learn quickly will not thrive in a digital transformation project.
“It’s the mantra to move away from complex, time-consuming processes to simplification,” said Singh. DevOps teams must have the self-realization that the current complex way of doing things is costing time and money and will eventually cost the company its very existence. Singh warns that developers must learn to “simplify or perish.”
2. It’s not easy for IT pros to look at the organization holistically, which is a must in digital transformation projects. Digital transformation impacts the entire organization from IT to line-of-business, said Singh. “Some people think digital is cloud while others think digital is for the front office only,” he explained. Certainly, digital transformation can be enabled by cloud, but it’s not the only component, and neither should the front office be the only focus.
Developers can’t work separately from the organization anymore. A digital transformation project encompasses social, mobile, cloud, analytics, AI, IoT and more, all in an integrated fashion. “All employees must be brought into the change and be provided with the right skills, behaviors and resources to accept and embrace the change,” Singh said.
Failure to do a digital transformation project holistically throughout the organization will lead to “continually playing the catch-up game,” Singh said. “We will keep doing it in parts, building and rebuilding.”
In digital transformation, traditional IT roles will evolve from operational focused to an approach focused on how the application will deliver business value. “There needs to be change management support for implementing new KPIs to help enable the adoption of the new role,” Singh said. “This operational change management should extend from the board room to every employee impacted by the change.”
3. Executing a digital transformation project in an agile manner is key to success, said Singh. “Adoption of agile, not in a silo but enterprise-wide, [is needed for] the responsiveness to get the products and offerings out and gain the first-mover advantage,” he said.
In conclusion, Singh noted that resistance to change is futile. Digital disruption is now and will remain the norm for businesses. “The organization’s culture and way of working for decades is being disrupted,” Singh said. Developers must go with this flow and find their roles in the new digital business paradigm.
As things get better, they often get slower, making better things worse. Far too often, that’s how things work in the tech sector, which is why I’m glad to see the architects of Java SE 9 bucking this trend in the latest full version release.
When I think about application performance, I think back to the days when I played my Atari 2600 as a kid. I’d shove in the Space Invaders cartridge and the only delay between clicking the on switch and engaging with the aliens was the speed of light traveling between the television set and my eyes. I stopped playing video games when Sony improved the gaming console so much that any time my Dig Dug died, I’d have to wait a minute and a half while the CD-ROM spun and the Fygars reloaded. Video games got better, and because of that they got slower, which made them worse than they were before. Even today, I long for the performance of an Atari 2600.
It’s a demoralizing cycle that asserts itself in a variety of areas of the tech sector. My 8 gig Android phone is unusable after the latest OS update. Windows 10 won’t even install on my old Lenovo laptops, which run just fine with XP. And even if I bought a new phone, a new desktop and a new laptop with the most expensive hardware Fry’s Electronics is willing to sell me, none of it would boot up as fast as my old Atari.
I doubt that an Atari 2600 was the inspiration as Oracle’s language architects worked on Java SE 9, but it may as well have been, because Java SE 9’s new module system is making Atari-like performance a real possibility.
Atari-esque performance and Java SE 9
The highlight of every JavaOne keynote happens when Oracle’s chief language architect Mark Reinhold takes the stage. Reinhold doesn’t talk in superlatives as do most other keynote speakers. Reinhold talks Java and always shoots straight about where we are in the evolution of the language. At JavaOne 2017, Reinhold demonstrated Java SE 9’s evolution beyond the simple classpath model and into an age of module isolation. It’s easy to tell that the Java language team is proud of this achievement.
The evolution of Java SE 9
Now there’s a plethora of reasons to be excited about modularity’s introduction, but in my opinion, Project Jigsaw’s greatest contribution to Java SE 9 is that it not only makes software development with the JDK better, but makes the applications we develop faster as well.
During his JavaOne 2017 keynote, Reinhold engaged in a little live coding in which a simple, module-based Java SE 9 application was created. The whole thing was deployed to Docker, and when the 261 meg container was run, the compulsory Hello World message was displayed. That was impressive in itself, but what ensued immediately following this little demonstration can only be technically described as witchcraft.
After the first Docker build, Reinhold remade the container but employed the new Java SE 9 tool JLink. “Java finally has a linker,” said Reinhold. “It’s an optional step, but it’s a very important one.” Using JLink, any of the 26 modules that the JDK has been divided into that aren’t used by the application get pruned away. The resulting recompilation using JLink created a new container that impressively tipped the scales at just under 39 megs.
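For readers who want to try this themselves, the flow looks roughly like the following. The module and directory names here are invented for the sketch; the jlink flags are from the standard JDK 9 tool:

```shell
# Compile a module, then link a trimmed runtime image containing only
# the JDK modules the application actually uses (jlink ships with JDK 9+).
javac -d mods/com.example.hello \
      src/com.example.hello/module-info.java \
      src/com.example.hello/com/example/hello/Main.java

jlink --module-path "$JAVA_HOME/jmods:mods" \
      --add-modules com.example.hello \
      --output hello-runtime

# The resulting image ships its own stripped-down java launcher.
./hello-runtime/bin/java -m com.example.hello/com.example.hello.Main
```

Because jlink resolves the module graph at link time, anything your module doesn’t declare a dependency on simply never makes it into the image, which is where the dramatic size reduction comes from.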
With Java SE 9, Reinhold has not only delivered a better JDK, but he’s also delivered a system that can be configured to be faster and have a much smaller footprint as well. They haven’t just improved the functionality of the Java SE 9 platform, but at JavaOne 2017 they’ve shown how they’ve improved upon important non-functional aspects of it as well.
Prognosticating about Java SE 9 performance
Now I should be careful to draw a line between a container’s footprint and actual performance. I’ve deployed plenty of Java applications to WebSphere servers hosted on big, bare metal behemoths, and I doubt the presence of an unused Swing package sitting on the file-system inside of the JDK ever had a big impact on the performance of my e-commerce apps. But a module system does allow for a variety of tricks, such as the lazy loading of components, that developers can start taking advantage of in their code. And being able to move smaller Docker images across the network when updates happen or patches need to be applied will have a real, measurable impact on the performance of administrative and infrastructure tasks. The benefits of the Java SE 9 environment’s newfound modularity will assuredly reach far and wide.
It was an uphill battle getting Project Jigsaw finalized, ratified and packaged into the Java SE 9 platform before the Java community descended upon San Francisco for the JavaOne 2017 conference, but Reinhold and the rest of the Java language team made it happen. It’s an impressive feat, and it’s one for which they are deservedly proud.
You can follow Cameron McKenzie on Twitter: @cameronmcnz
Customers have high expectations of applications today, demanding personalized experiences, super-fast responsiveness and business value with each transaction. That pressure is one of the top drivers for businesses to initialize digital transformation projects, according to Tata Consultancy Services (TCS) experts Sunder Singh and Kallol Basu. Another is the speed of IT innovation, which has been and is enabling automation of all business processes, said the TCS consultants from Oracle OpenWorld 2017 in San Francisco this week.
In our recent conversation, the TCS duo explained the following three key reasons why businesses should start doing digital transformation projects. Basu is the TCS Oracle Practice transformation change management consultant, and Singh is the global head, Oracle Practice, Enterprise Solutions. Here at Oracle OpenWorld/JavaOne, Singh is a co-speaker in the sessions “Building Smarter Enterprises” and “Driving Speed, Scale and Efficiency.”
Three drivers for digital transformation projects
#1: Customer expectations: Rising customer expectations include demands for more personalized experiences, faster responses and the desire to always feel valued at each touch point of the journey, according to Basu. Singh noted that it is now the norm for human and machine interaction to be simple, intuitive, cheap and fast. “Companies are transforming themselves to fit into the so-called norm and or de facto standards of user experience,” Singh explained.
Basu sees digital transformation placing people back at the heart of the company. “The silo mentality is not suited to digital transformation, which relies on openness and a transversal approach.”
#2: Increasing reliance on and capabilities of data analytics: Analytics allow businesses to put their fingers on the pulse of the customer, said Basu. Singh adds: “The power of machines to manage the velocity, variety and volume of data like never imagined before opens up new art of possibilities for analytics and insights in a hyperconnected world.”
#3: Computing has put business competition in hyperdrive: IT has increased a company’s ability to compete as traditional boundaries vanish, allowing new competitors to enter the market, according to Singh. Technology advancement and the availability of storage, compute and networking at a throwaway price – better, cheaper and faster – have opened up a plethora of possibilities.
“With digital and cloud, one can start a business with practically nothing from anywhere at any time and usurp your very existence,” said Singh. “The entry barrier is gone. Heavy capex and years of lead time to start business is gone.”
What about your business?
Are there other reasons why your company has started or is planning a digital transformation project? Or is digital transformation not on your agenda? Or have your company’s automation projects already made it what is known as a Digital Business?
Last year’s JavaOne conference generated quite a bit of excitement with the discussion of many of the new Java SE 9 features. But this year’s event is already proving to be more groundbreaking. From making every aspect of Oracle’s Java EE open source to introducing Functions as a Service, each speaker in the opening keynote brought a little more excitement to the crowds gathered in San Francisco, California.
An open Java SE 9
The biggest announcement during the keynote was the intention to make the Eclipse Foundation the new steward of Java EE. All the elements of the commercial version of the Oracle JDK will become available in the Open JDK as well, giving developers unprecedented access to features that were previously available only to the enterprise elite. In addition, Oracle committed to stepping up the speed of releases. According to Mark Reinhold, Chief Architect of Java, the new timeline of releasing every six months instead of every few years accomplishes a couple of goals. “It helps us move forward and do so faster.” But speed isn’t the only focus. “Features go in only when they are ready. If a feature misses a current release, that’s OK. Because it’s only six months to the next one. It’s fast enough to deliver innovation at a regular pace, and slow enough to maintain high levels of quality.”
A nimble Java SE 9
According to Mark Cavage, VP of Product Development at Oracle, Java SE 9 offers over 100 new features and streamlines the JVM with better support for containers that will allow the platform to evolve in new ways. “You can get just enough Java and just enough JVM to right-size the JVM for a cloud world.” Niklas Gustavsson, Principal Architect at Spotify, spoke about how his organization has gradually shifted more and more of its services to Java as the need to scale its cloud-based offering has grown with its user base.
With 140 million active users and 3 billion streaming songs per day, the service had to handle 4 million requests to the backend per second. Over time, Spotify shifted more and more of its services from Python to Java. Better stability and scalability were just two benefits. But transparency was just as important. With the JVM, “We could observe what was happening in runtime in two ways: collecting runtime metrics on the platform itself or profiling the service while running in production.” Spotify deliberately used a microservices architecture to make it easier to shift to Java piece by piece as it made sense to do so. This approach allowed them to scale each service separately to meet the needs of a wide range of user behaviors and ensured that any outages were well-contained.
Containers and serverless architecture
Kubernetes was championed by Cavage as the optimal open-source container option for the Java community. Heptio CEO Craig McLuckie spoke in more detail about the ability of containers to simplify operations: “Containers are hermetically sealed, highly predictable units of deployment with high portability.” With the use of dynamic orchestration technology, much of the work of operations can be automated. McLuckie also pointed out that containers, in a sense, may spell the demise of middleware as it currently exists, separating it into two different layers, with containers on one side and application-level libraries on the other. And flexibility is inherent: although containers and the cloud work well together, McLuckie pointed out that this pairing is optional, since Kubernetes could just as easily be deployed on premises.
On the developer side, going serverless was highlighted by Mark as “a Compute abstraction that takes away all notion of infrastructure from the user/developer.” It could be applied to many different use cases from compute to DB to storage, allowing developers to focus on functions and services that meet business needs.
Functions as a Service
FaaS was showcased in the form of the Oracle FN project headed by VP of Product Development, Chad Arimura. This three-pronged technology starts with the FaaS platform, which should allow developers to build, deploy, and scale in a multi-cloud environment—while running FN locally on their laptop. The Function Development Kit (FDK) was the second part of the puzzle: “It allows developers to easily bootstrap functions and has a data binding model to bind the input to your functions to common Java objects and types.” The FDK is Lambda compatible and has Docker as its only dependency. The FN Flow system is the final piece, enabling developers to build higher-level workflows and orchestrate functions in complex environments. Arimura showed off Oracle’s commitment to open source with a few mouse clicks at the end of his presentation, providing the whole world with access to the project.
More to come…but hard to top this year
The keynote ended with a review of some of the same features discussed in 2016, with Jigsaw and Project Panama receiving substantial attention. The Amber project for right-sizing language ceremony was mentioned and will no doubt be showcased at next year’s JavaOne. Another contender is the Loom project which is still in the discussion phase. While each new conference reveals fresh features, it will be difficult to beat the excitement of having unlimited access to every aspect of Java SE 9.
What’s trending at JavaOne 2017? A simple way to tell is to search through the conference catalog and take note of the various sessions that are overbooked and no longer adding attendees to a wait-list. Taking that approach, here’s a quick look at a few of the sessions that JavaOne 2017 attendees will be missing out on if they weren’t savvy enough to register early for a seat.
Lambdas still loom large at JavaOne 2017
A few years ago, when Java 8 came around, everyone was excited about the fact that Lambdas were finally being shoehorned into a full version release. This year, it looks like everyone is getting around to actually using them, as not even an 8:30am start on a Monday morning is scaring people away from Java Champion José Paumard’s Free Your Lambdas session.
Introductory and advanced reactive design
Every time I’ve talked to my good friends at either Payara or Lightbend, they’re flogging the merits of reactive development and design. Continuing to spread the word, Lightbend’s Duncan DeVore will be joining up with IBM’s Erin Schnabel to provide an Introduction to Reactive Design, while Payara’s Ondrej Mihalyi and Mike Croft will be stepping it up a notch by tag-teaming a hands-on lab entitled Traditional Java EE to Reactive Microservice Design.
Interestingly, these hands-on labs are taking place at the Hilton by Union Square, a good ten-minute walk from the main conference grounds located in the Moscone Conference Center. In years past, the whole conference took place in a cluster of hotels next door and across the street from the Hilton. This year, everything but the hands-on labs takes place alongside Oracle OpenWorld at Moscone.
Romancing the Java 9 stone
The conference is called JavaOne, so it comes as no surprise to discover that a session entitled JDK 9 Hidden Gems would play to a packed house. Back in the Moscone West building, Oracle’s JVM Architect Mikael Vidstedt and Intel Corporation’s Senior Staff Software Engineer Sandhya Viswanathan will avoid the Java 9 hype by skipping over big ticket items like Project Jigsaw and Java 9’s multi-jar deployment capabilities and instead, according to the syllabus, “talk about JDK 9 optimizations spanning support for larger vectors with enhanced vectorization, optimized math libraries, cryptography and compression acceleration, compact strings, new APIs with associated optimized implementation, and many more features that help big data, cloud, microservices, HPC, and FSI applications.” To a Java aficionado, a session description like that is more tempting than candy is to a baby. Here’s hoping they can get through as many of those topics as possible in the time allotted.
Keeping Roy Fielding’s dream alive
Java developers continue to take a keen interest in developing RESTful web services, as e-Finance enterprise architect Mohamed Taman’s session entitled The Effective Design of RESTful APIs will be running at capacity. Speaking about more than just the development of RESTful APIs in the enterprise sphere, Taman’s session addresses how to create multi-channel RESTful web services that interact seamlessly with IoT components, embedded devices, microservices and even mobile phones. Roy Fielding would no doubt be pleased.
Boyarsky demystifies JUnit 5
And finally, it should be noted that if you want to meet with popular CodeRanch marshal Jeanne Boyarsky, you’re not going to be able to do it by walking in at the last minute on her hands-on session about solid software testing practices, because that Hilton attraction is emphatically overbooked. Co-presenting with enterprise software architect Steve Moyer, this hands-on session is entitled Starting Out with JUnit 5.
I’m actually surprised that a session on JUnit would go to max capacity. It’s hard enough to get developers to write good JUnit tests at the best of times, let alone attend a technical session on the topic at a time when the beer garden calls. I’m postulating that Boyarsky’s reputation and online persona are responsible for packing the house. Or, it could be due to the fact that the session’s syllabus reads more like a fear-inducing warning than a simple overview: “The difference between JUnit 4 and 5 is far bigger than the difference between 3 and 4. JUnit 5 is almost up to the GA release, so it is high time to learn about next-generation JUnit.”
So that’s what’s trending today at JavaOne 2017. It’s largely what you’d expect from a group of forward thinking software engineers. It’s a matter of learning about new topics like reactive design, getting the most out of the language features from both JDK 8 and Java 9, learning how to write RESTful APIs that will integrate multi-channel devices, and finally, learning how to write tests to make sure that any code that gets written is reliable and robust. As I said, it’s pretty much what you’d expect from this JavaOne 2017 crowd.
You can follow most of these speakers on Twitter, and you probably should:
- Java Champion José Paumard: @josepaumard
- Lightbend’s Duncan DeVore: @ironfish
- IBM’s Erin Schnabel: @ebullientwork
- Payara’s Ondrej Mihalyi (@omihalyi) and Mike Croft (@croft)
- Oracle’s Mikael Vidstedt: @MikaelVidstedt
- e-Finance’s Mohamed Taman: @_tamanm
- The CodeRanch’s Jeanne Boyarsky (@jeanneboyarsky)
You can follow me, Cameron McKenzie, too: @cameronmcnz
When static code analysis tools identify a bug in the production code, there are two approaches organizations can take. The sensible one is to put a software developer or two on the problem and implement an immediate bug fix. The other option is to assemble the software team, debate the relative risk of not addressing the problem, and then choose not to do anything about the issue because the reward associated with doing so isn’t commensurate with the risk. You’d be surprised how often teams choose the latter approach.
The dangers of risk assessment
“Many organizations have an effective process for identifying problems, but no process for remediation,” said Matt Rose, the global director of application security strategy at Checkmarx. “Organizations do a lot of signing off on risk. Instead of saying ‘let’s remediate that’ they say ‘what’s the likelihood of this actually happening?'”
Sadly, the trend towards cloud-native, DevOps-based development hasn’t reversed this preference for risk assessment over problem remediation. The goal of any team that is embracing DevOps and implementing a system of continuous delivery is to eliminate as many manual processes as possible. A big part of that process is integrating software quality and static code analysis tools into the continuous integration server’s build process. But simply automating the process isn’t enough. “A lot of times people just automate and don’t actually remediate,” said Rose.
The bug fix benefit
There are very compelling reasons to properly secure your applications by implementing a bug fix. The most obvious is that your code has fewer identifiable issues, giving software quality tools less to complain about. “It doesn’t matter whether a bug is critical or non-critical. A bug is a bug is a bug. If you don’t act upon it, it’s not going to go away.”
“Many organizations have an effective process for identifying problems, but no process for remediation. Organizations do a lot of signing off on risk.”
-Matt Rose, the global director of application security strategy at Checkmarx.
The other benefit is the fact that the process of addressing a problem and coding a bug fix is actually an educational experience. Developers get informed of the problem, realize how a given piece of code may have created a vulnerability, and then they are given the opportunity to re-write the given function so that the issue is eliminated. “Working on vulnerabilities that are in your application and are real-world to you is going to teach you how not to make the same mistakes over and over again.”
So skip the risk assessments. If there’s a problem in your code, implement a bug fix. That will eliminate the risk completely.
If JavaOne 2017 is your first time attending the conference, it will serve you well to follow some advice and insights from a veteran attendee of the JavaOne and OpenWorld conferences.
The first piece of advice, which it is now far too late to act on, is to make sure you’ve got your hotel booked. Barry Burd wrote a JavaOne article for TheServerSide a couple of years ago that included some insights on how to find a last-minute hotel in San Francisco that isn’t obscenely far from the venue, although given the limited availability when I did a quick search on Expedia earlier this week, I’d say you’d be lucky to find a hotel in Oakland or San Jose for a reasonable price, let alone San Francisco.
Schedule those JavaOne 2017 sessions
For those who have their accommodation booked, the next sage piece of conference advice is for attendees to log on to the JavaOne 2017 session scheduler and reserve a seat in the sessions they wish to attend. Adam Bien’s session on microservices, Java EE 8 and the cloud is already overbooked. The Java platform’s chief architect Mark Reinhold’s talks on Jigsaw and Java modules already have a wait list, and the ask-the-Java-architects session with Oracle’s Brian Goetz and John Rose is at capacity. The longer you wait to formulate your schedule, the fewer the sessions you’ll have to choose from.
When choosing sessions, I find the speaker to be a more important criterion than the topic. Most speakers have a video or two up on YouTube of them doing a presentation. Check those videos out to see if the speaker is compelling. An hour can be quite a long time to sit through a boring slide show. But an exciting speaker can make an hour go by in an instant, and if you’re engaged, you’re more likely to learn something.
Skip the Oracle keynotes
One somewhat contrarian piece of advice I’m quick to espouse is for attendees to skip the Oracle keynotes, especially the morning ones. That’s not to say the keynotes are bad. But getting to the keynotes early enough to get a seat is a hassle, and you can’t always hear everything that’s being said in the auditorium. A better alternative is to stream the keynote from your hotel room, or better yet, watch the video Oracle uploads to their YouTube channel while you’re eating lunch.
But here’s why keynotes can take away from your JavaOne 2017 conference experience. For example, if you attend Thomas Kurian’s Tuesday morning keynote on emerging technologies and intelligent cloud applications, you’d miss Josh Long and Mark Heckler’s session on reactive programming with Spring 5. Actually, there’s a bunch of other sessions going on at that time, ranging from Martijn Verburg’s talk on surviving Java 9 to Stuart Marks’ talk on Java collections. If anything interesting gets said about new trends or technologies in a keynote, it’ll be covered extensively by the tech media. The same can’t be said for the nuggets of understanding that can be panned from attending a good JavaOne session.
Enjoy the party
The other big piece of advice? Enjoy San Francisco, especially if it’s your first time in the city. It’s the smallest alpha city in the world, but it is an alpha city. There are plenty of parties, meet-ups and get-togethers you’ll find yourself invited to, and it’s worth taking up any offers you manage to get. Having said that, keep an eye on how much gas you have left in the tank at the end of the day, because you want to be able to make it to all of the morning sessions you’ve scheduled for yourself.
If it’s your first time attending, I assure you that you’ll have a great time at JavaOne 2017, and with the new layout bringing JavaOne 2017 closer to the Oracle OpenWorld conference, this event should be better than any in recent memory. San Francisco is a great city, and the greatest minds in the world of modern software development will be joining you in attendance.
There’s really nothing new under the sun when it comes to addressing security vulnerabilities in code. While there has been a great shift in terms of how server-side applications are architected, including the move to the cloud and the increased use of containers and microservices, the sad reality is that the biggest security vulnerabilities found in code are typically caused by the most common, well-known and mundane of issues, namely:
- SQL injection and other interpolation attack opportunities
- The use of outdated software libraries
- Direct exposure of back-end resources to clients
- Overly permissive security
- Plain text passwords waiting to be hacked
SQL injection and other interpolation attacks
SQL injections are the easiest way for a hacker to do the most damage.
Performing an SQL injection is simple. The hacker writes something just a tad more complicated than DROP DATABASE or DELETE FROM TABLE into an online form. If the input isn’t validated thoroughly, and the application allows the unvalidated input to become embedded in an otherwise harmless SQL statement, the results can be disastrous. With an SQL injection vulnerability, the possible outcomes are that the user will be able to read private or personal data, update existing data with erroneous information, or outright delete data, tables and even databases.
Proper input validation and checking for certain escape characters or phrases can completely eliminate this risk. Sadly, too often busy project managers push unvalidated code into production, and the opportunity for SQL injection attacks to succeed persists.
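The safest form of that validation is to keep user input out of the SQL text entirely. As a minimal sketch, assuming a hypothetical users table with illustrative column names, the contrast between concatenating input into a query and binding it as a parameter looks like this:

```java
// Hypothetical sketch: contrasting string concatenation with a bound
// parameter. The users table and column names are illustrative.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class SqlInjectionDemo {

    // UNSAFE: attacker input becomes part of the SQL text itself.
    static String unsafeQuery(String username) {
        return "SELECT * FROM users WHERE name = '" + username + "'";
    }

    // SAFE: the driver sends the input as a bound value, never as SQL,
    // so an injected payload is treated as an odd username, not a command.
    static PreparedStatement safeQuery(Connection conn, String username)
            throws SQLException {
        PreparedStatement ps =
                conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        ps.setString(1, username);
        return ps;
    }

    public static void main(String[] args) {
        String payload = "x'; DROP TABLE users; --";
        // In the concatenated version, the payload is now executable SQL.
        System.out.println(unsafeQuery(payload));
    }
}
```

Note that the prepared statement needs no escaping logic at all; the JDBC driver guarantees the bound value can never terminate the statement and start a new one.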
The use of outdated software libraries
Enterprises aren’t buying their developers laptops running Windows XP. And when updates to the modern operating systems they are using become available, normal software governance policies demand applying a given patch or fix pack as soon as one comes along. But how often do software developers check the status of the software libraries their production systems are currently using?
When a software project kicks off, a decision is made about which open source libraries and projects will be used, and which versions of those projects will be deployed with the application. But once decided, it’s rare for a project to revisit those decisions. Yet there are reasons why new versions of logging APIs or UI frameworks are released, and it’s not just about feature enhancements. Sometimes an old software library will contain a well-known bug that gets addressed in subsequent updates.
Every organization should employ a software governance policy that includes revisiting the various frameworks and libraries that production applications link to. Otherwise, they face the prospect that a hidden threat resides in their runtime systems, and the only way they’ll find out about it is if a hacker finds the vulnerability first.
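This kind of policy is easy to automate in the build itself. As a sketch, the OWASP Dependency-Check Maven plugin can scan a project's dependencies against known CVE databases and fail the build when a severe vulnerability turns up; the fragment below is a hypothetical pom.xml excerpt, and the CVSS threshold shown is an arbitrary choice:

```xml
<!-- Hypothetical pom.xml fragment: scan dependencies for known CVEs on
     every build and fail when a finding scores CVSS 7 or higher. Pin a
     current plugin release version before using this in a real build. -->
<plugin>
  <groupId>org.owasp</groupId>
  <artifactId>dependency-check-maven</artifactId>
  <configuration>
    <failBuildOnCVSS>7</failBuildOnCVSS>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Wiring a check like this into the continuous integration pipeline turns "revisit our libraries" from an annual resolution into something that happens on every commit.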
Direct exposure of back-end resources to clients
When it comes to performance, layers are bad. The more hoops a request-response cycle has to go through in order to access the underlying resource it needs, the slower the program will be. But the desire to reduce clock cycles should never bump up against the need to keep back-end resources secure.
The exposed resources problem seems to be most common when doing penetration testing against RESTful APIs. With so many RESTful APIs trying to provide clients an efficient service that accesses back-end data, the API itself is often little more than a wrapper for direct calls into a database, message queue, user registry or software container. When implementing a RESTful API that provides access to back-end resource, make sure the REST calls are only accessing and retrieving the specific data they require, and are not providing a handle to the back-end resource itself.
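One common way to enforce that separation is a data transfer object. The sketch below is hypothetical; all of the class and field names are illustrative, but the pattern is the point: the REST layer returns a DTO that carries only the fields the API contract requires, so internal columns can never leak to the client.

```java
// Hypothetical sketch: rather than serializing a back-end entity directly,
// the REST layer maps it to a DTO holding only client-facing fields.
public class UserResource {

    // Stand-in for a full back-end entity or database row.
    static class UserEntity {
        final String name;
        final String email;
        final String passwordHash;   // must never leave the server
        final String internalNotes;  // must never leave the server

        UserEntity(String name, String email,
                   String passwordHash, String internalNotes) {
            this.name = name;
            this.email = email;
            this.passwordHash = passwordHash;
            this.internalNotes = internalNotes;
        }
    }

    // The DTO is the API contract: nothing sensitive can pass through it.
    static class UserDto {
        final String name;
        final String email;

        UserDto(UserEntity e) {
            this.name = e.name;
            this.email = e.email;
        }
    }

    static UserDto toDto(UserEntity e) {
        return new UserDto(e);
    }

    public static void main(String[] args) {
        UserEntity row = new UserEntity("alice", "alice@example.com",
                                        "s3cr3t-h4sh", "priority customer");
        UserDto dto = toDto(row);
        // Only name and email cross the wire.
        System.out.println(dto.name + " <" + dto.email + ">");
    }
}
```

The same idea applies to message queues and user registries: the API hands out a projection of the resource, never a handle to the resource itself.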
Overly permissive security
Nobody ever sets out intending to lower their shields in such a way that they’re vulnerable to an attack. But there’s always some point in the management of the application’s lifecycle in which a new feature, or connectivity to a new service, doesn’t work in production like it does in pre-prod or testing environments. Thinking the problem might be access related, someone incrementally reduces security permissions until the code in production works. After a victory dance, the well-intentioned DevOps personnel who temporarily lowered the shields in order to get things working are sidetracked and never get around to figuring out how to keep things running at the originally mandated security levels. Next thing you know, ne’er-do-wells are hacking in, private data is being exposed, and the system is being breached.
Plain text passwords waiting to be hacked
Developers are still coding plain text passwords into their applications. Sometimes plain text passwords appear in the source code. Sometimes they’re stored in a property file or XML document. But regardless of their format, usernames and passwords for resources should never appear anywhere in plain text.
Some might argue that the plain-text password problem is overblown as a security threat. After all, if it’s stored on the server, and only trusted resources have server access, there’s no way it’s going to fall into the wrong hands. That argument may be valid in a perfect world, but the world isn’t perfect. A real problem arises when another common attack, such as source code exposure or a directory traversal occurs, and the hands holding the plain text passwords are no longer trusted. In such an instance, the hacker has been given an all-access-pass to the back-end resource in question.
At the very least, passwords should be encrypted when stored on the filesystem and decrypted when accessed by the application. Of course, most middleware software platforms provide tools, such as IBM WebSphere’s credential vault, for securely storing passwords, which not only simplifies the art of password management, but also relieves the developer of responsibility if source code were ever exposed or a directory traversal were to happen.
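Even without a middleware vault, the simplest improvement is to keep the secret out of the source tree entirely and resolve it at startup. The sketch below is a hypothetical example; the DB_PASSWORD variable name is illustrative, and in practice the lookup could just as easily target a secrets manager:

```java
// Hypothetical sketch: credentials are resolved from the environment (or a
// credential vault) at startup instead of being hard-coded in source code
// or a property file. The DB_PASSWORD variable name is illustrative.
public class CredentialLoader {

    // Fail fast if the secret is absent, so a misconfigured deployment can
    // never silently fall back to a password baked into the source tree.
    static String requireSecret(String envVar) {
        String value = System.getenv(envVar);
        if (value == null || value.isEmpty()) {
            throw new IllegalStateException("Missing required secret: " + envVar);
        }
        return value;
    }

    public static void main(String[] args) {
        try {
            String dbPassword = requireSecret("DB_PASSWORD");
            System.out.println("Credential loaded from the environment.");
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

With this approach, a source code exposure or directory traversal yields nothing useful, because the repository and the filesystem never contained the password in the first place.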
The truth of the matter is, a large number of vulnerabilities exist in production code not because hackers are coming up with new ways to penetrate systems, but because developers and DevOps personnel simply aren’t diligent enough about addressing well-known security vulnerabilities. If best practices were observed, and software security governance rules were properly implemented and maintained, a large number of software security violations would never happen.
You can follow Cameron McKenzie on Twitter: @cameronmcnz
Shopping for a CRM software system can be daunting. Many platforms come with bells, whistles, add-ons and integrations that you never considered — not to mention a high price tag. Adding complicated, expensive software to your business is not a decision to be made lightly.
How can you determine which CRM software system is right for you? Ask these questions, and then take your top CRM candidates for a test drive! Many systems offer a free trial period. The best way to see if a system is the right fit is to put it to work for you. Here’s where to start to make your decision simple.
What do you want to accomplish?
Get your CRM strategy in order before shopping around for a system. Take the time to be clear about what your goals are for capturing your customer relationships. Are you going to use this information for sales, marketing, customer service, or all of the above? What details will you need to consistently report to get the big-picture data you need? Understand the variety of reporting options that come standard with each CRM platform: customer data matters, but that data must drive action. How strong are the reporting capabilities of each CRM?
Reporting is just one piece of the puzzle. Think about what other processes your CRM might need to manage. What tasks do you want to automate? Many CRM systems can automate email alerts for important events, escalate uncompleted issues, and streamline workflows by directing traffic among your teams.
Additionally, consider where your business might grow in the future. Many CRM platforms offer add-ons that you may not need today, but are worth considering as you start to see your company take off. You need a customer service solution right now, but next quarter you might be ready for some online marketing and social media monitoring. Companies like HubSpot and Zoho have marketing and social media capabilities. Others, like Microsoft, offer project management tools and organizational supplements.
Who will use the system?
What teams will need access to your CRM system? How many accounts will you need? Most CRM platforms, like Salesforce, offer pricing based on the number of users. Factor in things like continuity and mobility: do you have a mobile sales force? Do you have some team members who cover multiple roles?
Some platforms will also allow you to set different features and access levels for different teams. For example, you might make certain reports available to your senior management team, or limit who has access to sales leads. Consider the existing workflows within your organization. If you plan to grow your business rapidly within the next year, make sure you get a system that can accommodate many new accounts (and ensure continuity and consistent service among your team members).
Should it be cloud-based or on-premise?
Of course, cost is a big factor in choosing whether or not your CRM is on-site or cloud-based. An on-premises CRM solution is often less expensive, but keep in mind the maintenance costs will add up. Upgrades, IT maintenance, and support costs might end up making a cloud-based system a better investment. You might also need a new server to keep your on-site system up and running.
Likewise, if you choose a cloud-based CRM solution, you’ll need the network resources to support the product. How much bandwidth will it use? Will your internet speeds be fast enough for a cloud-based system? Save yourself hours of frustration and internet down-time by running some speed tests. As you add accounts, make sure your CRM won’t crash your entire network.
Typically, cloud-based systems come with quicker installation and regular, easily accessible updates and improvements. You’ll also need to factor in data security to your decision.
Does it integrate with your existing systems?
Just because you’re getting ready to shell out some cash on a new system doesn’t mean you should have to replace your existing software. CRM software can integrate with lots of other parts of your business, including POS software, accounting tools, marketing platforms, and more. You shouldn’t have to manually export and import data between platforms — as long as your new CRM is compatible with the apps you already use. Make sure all your systems will coordinate by asking customer support and double-checking with the vendor before making a commitment.
What is your budget?
Finally, the biggest question of all: what are you willing to spend on a CRM platform? There is quite a range on what a CRM might cost, from freemium offerings to price tags in the millions for enterprise-sized corporations. Mostly, you can expect to pay on a per-user, per-month basis, though some vendors charge a flat monthly fee for a set number of users.
Factor in how many people are going to use your platform, as well as how much customization is required. More customization and more users usually lead to a higher price point and higher maintenance costs.
Realistically, a CRM system is a great investment. The ability to capture customer interactions and valuable sales leads: priceless.