Coffee Talk: Java, News, Stories and Opinions


July 3, 2017  11:26 AM

Advancing JVM performance with the LLVM compiler

Cameron McKenzie

The following is a transcript of an interview between TheServerSide’s Cameron W. McKenzie and Azul Systems’ CTO Gil Tene.

Cameron McKenzie: I always like talking to Gil Tene, the CTO of Azul Systems.

Before jumping on the phone, PR reps often send me a PowerPoint of what we’re supposed to talk about. But with Tene, I always figure that if I can jump in with a quick question before he gets into the PowerPoint presentation, I can get him to answer some interesting questions that I want the answers to. He’s a technical guy and he’s prepared to get technical about Java and the JVM.

Now, the reason for our latest talk was Azul Systems’ 17.3 release of Zing, which includes a new LLVM-based just-in-time compiler code-named Falcon. Apparently, it’s incredibly fast, like all of Azul Systems’ JVMs typically are.

But before we got into discussing Azul Systems’ Falcon just-in-time compiler, I thought I’d do a bit of bear-baiting with Gil and tell him that I was sorry that, in this new age of serverless computing, cloud and containers, where nobody actually buys hardware anymore, it must be difficult flogging a high-performance JVM when nobody’s going to need to download one and install it locally. Well, anyway, Gil wasn’t having any of it.

Gil Tene: So, the way I look at it is actually we don’t really care because we have a bunch of people running Zing on Amazon, so where the hardware comes from and whether it’s a cloud environment or a public cloud or private cloud, a hybrid cloud, or a data center, whatever you want to call it, as long as people are running Java software, we’ve got places where we can sell our JVM. And that doesn’t seem to be happening less, it seems to be happening more.

Cameron McKenzie: Now, I was really just joking around with that first question, but that brought us into a discussion about using Java and Zing in the cloud. And actually, I’m interested in that. How are people using Java and JVMs they’ve purchased in the cloud? Is it mostly EC2 instances or is there some other unique way that people are using the cloud to leverage high-performance JVMs like Zing?

Gil Tene: It is running on EC2 instances. In practical terms, most of what is being run on Amazon today is run as virtual instances on the public cloud. They end up looking like normal servers running Linux on an x86 somewhere, but they run on Amazon, and they do it very efficiently and very elastically; they are very operationally dynamic. And whether it’s Amazon or Azure or the Google Cloud, we’re seeing all of those happening.

But in many of those cases, that’s just a starting point, where instead of getting servers or running your own virtualized environment, you just do it on Amazon.

The next step is usually that you operationally adapt to using the model, so people no longer have to plan and know how much hardware they’re going to need in three months’ time, because they can turn it on anytime they want. So they can empower teams to turn on a hundred machines on the weekend because they think it’s needed, and if they were wrong, they’ll turn them off. That’s no longer some dramatic thing to do. Doing it in a company’s internal data center is a very different thing from a planning perspective.

But from our point of view, that all looks the same, right? Zing and Zulu run just fine in those environments. And whether people consume them on Amazon or Azure or in their own servers, to us it all looks the same.

Cameron McKenzie: Now, cloud computing and virtualization are all really cool, but we’re here to talk about performance. So what do you see these days in terms of bare iron or bare metal deployments? Are people actually deploying to bare metal, and if so, when are they doing it?

Gil Tene: We do see bare metal deployments. You know, we have a very wide mix of customers, so we have everything from e-commerce and analytics companies that run their own stuff, to banks, obviously, that do a lot of stuff themselves. There is more and more of a move towards virtualization in some sort of cloud, whether it’s internal or external. So I’d say that a lot of what we see today is virtualized, but we do see a bunch of bare metal in latency-sensitive environments or in dedicated server environments. So for example, a lot of people will run dedicated machines for databases or for low-latency trading or for messaging, because they don’t want to take the hit for what the virtualized infrastructure might do to them if they don’t.

But having said that, we’re seeing some really good results from people on consistency and latency and everything else running just on the higher-end Amazon instances. So for example, Cassandra is one of the workloads that fits very well with Zing, and we see a lot of turnkey deployments. If you want Cassandra, you turn Zing on and you’re happy, you don’t look back. On Amazon, that type of cookie-cutter deployment works very well. The typical pattern for Cassandra instances on Amazon, with or without us, is that people move to the latest, greatest things that Amazon offers. I think the i3 class of Amazon instances is currently the most popular for Cassandra.

Cameron McKenzie: Now, I believe that the reason we’re talking today is because there is some big news from Azul. So what is the big news?

Gil Tene: The big news for us was the latest release of Zing. We are introducing a brand-new JIT compiler to the JVM, and it is based on LLVM. The reason this is big news, we think, especially in the JVM community, is that the current JIT compiler that’s in use was first introduced 20 years ago. So it’s aging. And we’ve been working with it and within it for most of that time, so we know it very well. But a few years ago, we decided to make the long-term investment in building a brand-new JIT compiler in order to be able to go beyond what we could before. And we chose to use LLVM as the basis for that compiler.

Java had a very rapid acceleration of performance in the first few years, from the late ’90s to the early 2000s, but it’s been a very flat growth curve since then. Performance has improved year over year, but not by a lot, not in the way that we’d like it to. With LLVM, you have a very mature compiler. C and C++ compilers use it, Swift from Apple is based on it, Objective-C as well, and the Rust language from Mozilla is based on it. And you’ll see a lot of exotic things done with it as well, like database query optimizations and all kinds of interesting analytics. It’s a general compiler and optimization framework that has been built for other people to build things with.

It was built over the last decade, so we were lucky enough that it was mature by the time we were making a choice in how to build a new compiler. It incorporates a tremendous amount of work in terms of optimizations that we probably would have never been able to invest in ourselves.

To give you a concrete example of this, the latest CPUs from Intel, the ones that power most Amazon servers today, whether bare metal or virtualized, have some really cool new vector optimization capabilities. There are new vector registers and new instructions, and you can do some really nice things with them. But that’s only useful if you have an optimizer that’s able to make use of those instructions when it knows they’re there.

With Falcon, our LLVM-based compiler, you take regular Java loops that would run normally on previous hardware, and when our JVM runs on new hardware, it recognizes the capabilities and basically produces much better loops that use the vector instructions to run faster. And here, you’re talking about factors that could be 50%, 100%, or sometimes even two or three times faster, because those instructions are that much faster. The cool thing for us is not that we sat there and thought of how to use the latest Broadwell chip instructions, it’s that LLVM does that for us without us having to work hard.

Intel has put work into LLVM over the last two years to make sure that the backend optimizers know how to do this stuff. We just need to bring the code to the right form, and the rest is taken care of by other people’s work. So that’s a concrete example of extreme leverage. As the processor hits the market, we already have the optimizations for it. It’s a great demonstration of how a runtime like a JVM can run the exact same code and, when you put it on new hardware, it’s not just the better clock speed, and not just slightly faster; it can actually use the new instructions to literally run the code better, and you don’t have to change anything to do it.
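To picture the kind of loop Tene is describing, consider the plain Java below; a vectorizing JIT can compile the loop body down to SIMD instructions on hardware that has them, with no change to the source. The class and method names are illustrative, not from Azul’s materials.

public class VectorizableLoop {
    // Element-wise multiply-add over float arrays. A vectorizing JIT can
    // process several elements per instruction using the CPU's vector
    // registers, while the Java source stays exactly the same.
    static void fma(float[] a, float[] b, float[] c) {
        for (int i = 0; i < a.length; i++) {
            c[i] = a[i] * b[i] + c[i];
        }
    }

    public static void main(String[] args) {
        float[] a = new float[1024], b = new float[1024], c = new float[1024];
        java.util.Arrays.fill(a, 1.5f);
        java.util.Arrays.fill(b, 2.0f);
        // Repeated calls make the loop hot, so the JIT compiles and optimizes it.
        for (int warmup = 0; warmup < 10_000; warmup++) {
            fma(a, b, c);
        }
        System.out.println(c[0]); // keep the result live
    }
}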

Cameron McKenzie: Now, whenever I talk about high-performance JVM computing, I always feel the need to talk about potential JVM pauses and garbage collection. Is there anything new in terms of JVM garbage collection algorithms with this latest release of Zing?

Gil Tene: Garbage collection is not big news at this point, mostly because we’ve already solved it. To us, garbage collection is simply a solved problem. And I do realize that that often sounds like what marketing people would say, but I’m the CTO, and I stand behind that statement.

With our C4 collector in Zing, we’re basically eliminating all the concerns that people have with garbage collection pauses that are above, say, half a millisecond in size. That pretty much means everybody except low-latency traders simply doesn’t have to worry about it anymore.

When it comes to low-latency traders, we sometimes have to have some conversations about tuning. But with everybody else, they stop even thinking about the question. Now, that’s been the state of Zing for a while now, but the nice thing for us with Falcon and the LLVM compiler is we get to optimize better. So because we have a lot more freedom to build new optimizations and do them more rapidly, the velocity of the resulting optimizations is higher for us with LLVM.

We’re able to optimize around our garbage collection code better and get even faster code for the Java applications running on it. But from a garbage collection perspective, it’s the same as it was in our previous release and the one before that, because those were close to as perfect as we could get them.

Cameron McKenzie: Now, one of the complaints people who use JVMs often have is the startup time. So I was wondering if there’s anything new in terms of the technologies you put into your JVM to improve JVM startup. And for that matter, I was wondering what you think about Project Jigsaw and how the new modularity that’s coming in with Java 9 might impact the startup of Java applications.

Gil Tene: So those are two separate questions. You probably saw in our material that we have a feature called ReadyNow! that deals with the startup issue for Java. It’s something we’ve had for a couple of years now. But, again, with the Falcon release, we’re able to do a much better job. Basically, we get a much better vertical rise right when the JVM starts, so it gets up to speed much sooner.

The ReadyNow! feature is focused on applications that basically want to reduce the number of operations that go slow before you get to go fast. Whether it’s when you start up a new server in the cluster and you don’t want the first 10,000 database queries to go slow before they go fast, or when you roll out new code in a continuous deployment environment where you update your servers 20 times a day, so you roll out code continuously and, again, you don’t want the first 10,000 or 20,000 web requests for every instance to go slow before they get to go fast. Or the extreme example of trading, where at market open you don’t want to be running your highest-volume and most volatile trades at interpreted Java speed before they become optimized.

In all of those cases, ReadyNow! is basically focused on having the JVM hyper-optimize the code right when it starts, rather than profile and learn and only optimize after it runs. And we do it with a technique that is very simple to explain, though not that simple to implement: we save previous run profiles, and we start a run learning from the previous run’s behavior rather than having to learn from scratch again for the first thousand operations. That allows us to run fast code from the first transaction or the tenth transaction, rather than from the ten-thousandth transaction. That’s a feature in Zing we’re very proud of.
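The slow-before-fast problem ReadyNow! targets is easy to see with a small timing harness like the hypothetical one below: on a stock JVM, the earliest calls run interpreted or lightly compiled, and only the later ones hit fully optimized code.

public class WarmupDemo {
    // Stand-in for a real transaction: branchy arithmetic the JIT will
    // eventually compile and optimize aggressively.
    static long work(long seed) {
        long h = seed;
        for (int i = 0; i < 100; i++) {
            h = (h * 31) ^ (h >>> 7);
        }
        return h;
    }

    public static void main(String[] args) {
        long sink = 0;
        for (int i = 1; i <= 100_000; i++) {
            long t0 = System.nanoTime();
            sink ^= work(i);
            long t1 = System.nanoTime();
            // The early iterations typically run far slower than the late
            // ones; that early window is the cost ReadyNow! aims to remove
            // by starting from a saved profile.
            if (i == 1 || i == 10 || i == 10_000 || i == 100_000) {
                System.out.println("call " + i + ": " + (t1 - t0) + " ns");
            }
        }
        System.out.println(sink); // keep the result live so the loop isn't eliminated
    }
}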

To the other part of your question, about startup behavior, I think that Java 9 is bringing in some interesting features that could, over time, affect startup behavior. It’s not just the Jigsaw parts; it’s certainly the idea that you could perform some sort of analysis on code enclosed in modules and try to optimize some of it for startup.
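The analysis Tene alludes to is possible because every Jigsaw module declares its dependencies up front in a module-info.java file; the module and package names below are illustrative.

// module-info.java at the root of the module's source tree.
// Because the dependency graph is explicit, tools such as jlink can
// precompute at build time what would otherwise be discovered at startup.
module com.example.orders {
    requires java.sql;              // depends on a platform module
    exports com.example.orders.api; // only this package is visible to other modules
}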

Cameron McKenzie: So, anyways, if you want to find out more about high-performance JVM computing, head over to Azul’s website. And if you want to hear more of Gil’s insights, follow him on Twitter, @giltene.
You can follow Cameron McKenzie on Twitter: @cameronmckenzie

October 23, 2018  2:44 PM

Reinhold advocates adding fiber to your Java diet in Oracle Code One keynote

cameronmcnz Cameron McKenzie Profile: cameronmcnz

At last year’s JavaOne, Mark Reinhold, chief architect of the Java Platform Group, introduced a forthcoming rapid release cadence in which a new version of Java would be produced every six months. The release train would always leave the station on time. If a planned feature wasn’t ready, it’d have to catch the next ride. It was an ambitious promise, especially given the fact that three years and six months separated the releases of Java 8 and Java 9.

But despite being ambitious, the plan went off without a hitch. A year has passed since the original announcement, and in his Java language keynote at Oracle Code One, you could tell there was a bit of pride, if not subtle boasting, about the fact that the promise was kept. Java 10 and Java 11 were both released as planned, in March and September of 2018.

“A moment of silence for Java 9 please – the last, final, massive JDK release.”
-Mark Reinhold, Chief Architect, Java Platform Group

Reinhold only briefly discussed some of the new features that got baked into the 2018 releases, with special emphasis on the fact that the rapid releases even included a significant language change, namely the inclusion of the var keyword. “Java 10 shipped on time in March of 2018, it contained merely 12 JEPs, but these weren’t trivial,” said Reinhold. “And Java 11 shipped just a few weeks ago. It contained 17 JEPs along with many bug fixes and enhancements.”

But you could tell that what Reinhold wanted to talk about most, which mapped nicely to what the audience was there to hear about, was the new features and facilities that the future of Java has in store. Reinhold highlighted four projects in particular, namely Amber, Loom, Panama and Valhalla.

Project Amber

According to Reinhold, Project Amber is all about rightsizing language ceremony. In this new era of machine learning and data-driven microservices, it’s important for developers to be able to express themselves clearly and concisely through their code. Project Amber attempts to address that in a more meaningful way than simply templating boilerplate code.

In his keynote, Reinhold performed a batch of live coding, demonstrating the following three Project Amber features, sketched in code after the list:

  1. Local variable type inference
  2. Raw string literals that don’t require escape sequencing, which is available as a preview feature in the JDK 12 early access release
  3. Switch expressions with enums and type inference, which will enhance case-based conditional logic
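Taken together, and with the caveat that switch expressions were still a preview feature at the time, the three features look roughly like this:

import java.util.List;

public class AmberSketch {
    enum Size { SMALL, MEDIUM, LARGE }

    public static void main(String[] args) {
        // 1. Local variable type inference (Java 10): the compiler infers
        //    List<String> from the initializer.
        var names = List.of("amber", "loom", "panama", "valhalla");

        // 3. Switch expressions (preview in JDK 12): the switch yields a value,
        //    and the arrow-style cases need no break statements.
        for (var name : names) {
            var size = name.length() > 6 ? Size.LARGE : Size.SMALL;
            int discount = switch (size) {
                case SMALL -> 0;
                case MEDIUM -> 5;
                case LARGE -> 10;
            };
            System.out.println(name + " -> " + discount);
        }

        // 2. Raw string literals (JEP 326) were proposed as backtick-delimited
        //    strings that need no escape sequences; the feature was later
        //    withdrawn and reworked, so no example compiles on a shipped JDK.
    }
}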

Project Loom

Working with threads in Java has always been a bit of a mess. From their useless priority settings to the meaningless ThreadGroup, the age-old API leaves plenty of room for improvement. “Threads have a lot of baggage,” said Reinhold. “They have a lot of things that don’t make sense in the modern world. In fact, they have a lot of things that didn’t make sense when they were introduced.”

In a demo of a recent Project Loom build, Reinhold showed how easy it is to implement concurrent tasks with fibers, much to the delight of every software developer who has ever struggled with I/O blocking and concurrency issues. A full implementation of this new, lightweight thread construct appears to be only a release or two away.
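Loom’s API was still in flux at the time of the demo, but the pain it targets is easy to reproduce with today’s standard APIs: each blocking task below ties up a whole OS thread for its full duration, which is exactly the overhead fibers are meant to remove.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class BlockingThreadsDemo {
    public static void main(String[] args) throws InterruptedException {
        // A pool of 200 OS threads: each blocked task occupies one, so
        // 10,000 tasks queue up behind the pool. Fibers, as lightweight
        // JVM-scheduled threads, would let all 10,000 block cheaply
        // without pinning an OS thread apiece.
        ExecutorService pool = Executors.newFixedThreadPool(200);
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(100); // stand-in for blocking I/O
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
    }
}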

Project Panama

Panama is the isthmus that connects North and South America, while also providing a canal that connects the Atlantic with the Pacific. Fittingly, Project Panama is about connecting the JVM with native code written in languages like C++ and Go, improving the way Java works with foreign functions and foreign data.

“Many people know the pain of JNI, the Java Native Interface,” said Reinhold. Project Panama promises to make integrating with libraries written in other languages not only easier, but capable of significantly increased performance over the frustratingly throttled JNI bridge.
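For context, even the Java half of a JNI binding carries ceremony, and the native half (generated headers, hand-written C glue, costly data copies across the boundary) is where most of the pain lives. The library and method names below are illustrative.

public class NativeHash {
    static {
        // Loads libnativehash.so (or nativehash.dll) from java.library.path;
        // the matching C implementation must be written, compiled and
        // shipped separately.
        System.loadLibrary("nativehash");
    }

    // Declared in Java, implemented in C. Every call crosses the JNI
    // boundary, and the byte[] is typically pinned or copied each time,
    // which is the overhead Panama aims to eliminate.
    public static native long hash(byte[] data, int length);
}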

Project Valhalla

As everyone who codes Java knows, the language does not scale linearly. The JVM itself will scale, which is why languages like Scala and Kotlin are so popular. But the manner in which Java uses pointers and mutable data means that throwing twice as many resources at an application under heavy load will not result in anything near a doubling of throughput. All of that is about to change.

“Today’s processors are vastly different than they were in 1995. The cost of a cache miss has increased by a factor 200 and in some cases 1000. Processors got faster but the speed of light didn’t.”
-Mark Reinhold, Chief Architect, Java Platform Group

Project Valhalla introduces value types, a mechanism to allow the data used by Java programs to be managed much more efficiently at runtime. Value types are pure data aggregates that have no identity. At runtime, their data trees can be flattened out into a bytecode pancake. When Project Valhalla is finally incorporated into the JDK, the whole performance landscape will change.

“Chasing pointers is costly,” said Reinhold. Objects have identity, they can be attached to a sync monitor and they have internal state. When your applications create a massive number of objects, as big data systems and artificial neural networks do, the impact on performance can be significant.

Like all good magic tricks, Project Valhalla’s is deceptively simple. All it requires is a single keyword, value, added to the class declaration, and this small addition completely changes how an instance’s data is managed. There are of course small caveats that go along with the value keyword’s use, but that’s to be expected. In a live coding demonstration, Reinhold added the value keyword to a class that performed matrix calculations, and the result was almost a threefold increase in instructions executed per cycle, along with notable changes in how memory was allocated and how garbage collection routines behaved.
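The syntax was still exploratory at the time, so the sketch below uses the provisional value modifier Reinhold demonstrated rather than anything final; it does not compile on any shipped JDK.

// Provisional Valhalla syntax: the 'value' modifier declares a pure data
// aggregate with no identity, no sync monitor and no pointer of its own.
value class Complex {
    final double re;
    final double im;

    Complex(double re, double im) { this.re = re; this.im = im; }

    Complex multiply(Complex other) {
        return new Complex(re * other.re - im * other.im,
                           re * other.im + im * other.re);
    }
}

// In a Complex[], the values can be flattened into one contiguous block of
// doubles instead of an array of pointers to heap objects, so a matrix
// kernel walks memory linearly rather than chasing references.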

The future of Java holds in store a number of impressive improvements that will make programming easier and the applications we create faster. And the great thing about the new rapid release cadence is the fact that we won’t have to wait for all of these things to be finalized before we get to use them. When features are complete, they’ll trickle their way into the upcoming release cycle, with many of these features only being one or two release cycles away.


October 23, 2018  2:17 PM

Lamenting the death of JavaOne at Oracle Code One 2018

Cameron McKenzie

As Oracle announced this past April, JavaOne 2017 would be the last of its kind.  The Oracle Open World (OOW) conference will now refer to its developer-centric segment as Oracle Code One. The JavaOne branding has disappeared. And since Oracle owns the Java trademark, it’s unlikely anyone will ever get permission to use the name and allow the Java community to put on a JavaOne conference of its own. JavaOne is dead.

The re-branding is a disappointment, but it really isn’t much of a surprise. The way the JavaOne conference was always pushed off to a scattering of unaffiliated hotels a bus ride away from OOW at the Moscone Center was a fitting metaphor for Oracle’s desire to distance itself from the Java brand. In 2017, probably the best JavaOne conference since Oracle acquired Sun Microsystems, Java sessions took place in the same building as Oracle Open World, but the attempt at inclusion was too little and too late.

The future of Java and Oracle

Oracle seems pretty intent on washing its hands of its tight links with Java. Quite frankly, I can’t blame them for wanting to do so. The Java community has never trusted Oracle as a steward of the technology, and every move Oracle made with Java was treated with cynicism and skepticism. Protecting their intellectual property by suing Google over Android’s use of the language, in what seemed like a never-ending set of legal battles, was probably the single biggest example of Oracle making sensible business moves that stirred up resentment and disenchantment in the community. Oracle rightfully won the case on final appeal, but being on the right side of the law left them on the wrong side of the community.

At JavaOne 2017, Oracle made a number of announcements about handing various proprietary components, including Java Flight Recorder, over to the community-backed OpenJDK project. They also said they’d be working closely with IBM and Red Hat to move Java EE over to the Eclipse Foundation. All of these actions were well received by the community, although it always felt more like Oracle was simply ridding themselves of a burden rather than offering up any truly charitable goodwill. Being the stewards of Java has always been more of a curse than a blessing.

A bigger and better JavaOne?

And so here we are, back in San Francisco in October of 2018, as the death of JavaOne begets the birth of Oracle Code One. It certainly feels like a step back from the “Java first, Java always” pledge of former Oracle Vice President of Engineering Mark Cavage’s JavaOne 2017 keynote, but the new conference still remains the place to be to experience the biggest gathering of developers, thought leaders and Java proponents. The name has changed, but Java remains the primary focus of the developer event.

Oracle asserts that this new conference will be a “new, bigger event that’s inclusive to more languages, technologies and developer communities,” which is actually just a recognition of the direction the JavaOne conference had already been going. So in many ways, it’s the same path forward with just a different name and branding attached to it. So perhaps the name change isn’t all that big of a deal?

The problem is, I liked the old name. I bet there’s a number of JavaOne alumni out there who probably agree.



October 22, 2018  1:12 AM

Five Oracle Code One sessions you don’t want to miss

Cameron McKenzie

Going over my Oracle Code One schedule for 2018, there are five sessions that I’m particularly interested in attending:

  1. From Monoliths to Pragmatic Microservices with Java EE
  2. Automating Your CI/CD Stack with Java and Groovy
  3. Fully Reactive: Spring, Kotlin, JavaFX, and MongoDB
  4. Jakarta EE: What Does It Mean for Enterprise Java?
  5. Is Your JVM Speaking to You?

Automating Your CI/CD Stack with Java and Groovy

TheServerSide has been running a number of Jenkins tutorials and continuous integration build examples as part of our ongoing discussion on DevOps technologies. But we’ve only scratched the surface when it comes to using the Groovy programming language and implementing pipeline as code, which makes this session on continuous integration and continuous delivery with Groovy particularly interesting to me.

The speakers for this session are also noteworthy. Both Jeanne Boyarsky and Scott Selikoff are fellow moderators at the CodeRanch, have a reputation as being entertaining speakers, and have a long history of giving back to the Java community. They also wrote a popular Java certification guide together.

Fully Reactive: Spring, Kotlin, JavaFX, and MongoDB Playing Together

While Jenkins has been enjoying quite a bit of coverage at TSS, reactive programming hasn’t been getting nearly enough. The same could be said for Kotlin. I’m hoping this session will help reverse that trend and inspire a few new articles on the subject.

The speakers for this session are Java Champions Trisha Gee and Josh Long. Gee has provided insights on a variety of topics for TheServerSide in the past, while Pivotal evangelist Long has been a bit of a slippery fish, but we’re pretty sure we will be able to get an interview with him later this year or next. Long made a few Twitter posts about feeling unwell, but everyone is hoping he is feeling well enough to attend and speak.

From Monoliths to Pragmatic Microservices with Java EE – BYOL

I don’t deny the benefits of cloud native computing, but I’m still not convinced that every enterprise solution should be rewritten as a set of microservices. I also find much of the talk about going from monoliths to microservices rather disingenuous. The discussion tends to just set up various overused straw man arguments that have been used to flog everything from EJBs to SOA. A pragmatic approach to the topic interests me, although I’m waiting to see just how pragmatic a set of microservice advocates can be on the topic.

Speaking of microservices evangelists, this talk rounds up three heavy hitters: Ivar Grimstad, Principal Consultant, Cybercom Sweden; Reza Rahman, Senior Vice President, AxonIQ; and Ondro Mihalyi, Senior Engineer, Payara. There certainly won’t be any shortage of expertise on the subject of microservices, that’s for sure.

Jakarta EE: What Is It and What Does It Mean for Enterprise Java?

As the name of this site clearly implies, the central focus here is server-side Java, and no other technology is more pertinent to that focus than enterprise Java.

The session description says “Java EE has been the dominant enterprise Java standard for well over a decade. With the release of Jakarta EE, we all have a chance to collaborate and build on the good things it inherits while working to evolve those pieces that were perhaps never quite what was needed.” This session goes directly to the heart of what TSS is all about.

Is Your JVM Speaking to You?

Kirk Pepperdine is in town again, and once again, he’s talking about talking with the JVM.

The Java Virtual Machine is at the heart of everything we do in the Java world, and few people know as much about tuning it as Pepperdine does. In this session, the focus is logging, specifically the JVM’s new unified logging system. Pepperdine will demonstrate how to use it, how to understand it, and how to squeeze more out of it than usual by knowing how to configure the JVM to spill its little-known secrets.

Those are the five sessions to which I’m most looking forward, although there’s nothing on my fully packed schedule that isn’t interesting. But I think those five really reflect not only my interests, but the key interests of those who have a passion for server side Java.


October 21, 2018  3:42 AM

How to get the most out of Oracle Code One

Cameron McKenzie

If Oracle Code One 2018 is your first time attending a major software conference, it will serve you well to follow some sage advice and insights from a veteran attendee of past JavaOne and Oracle Open World (OOW) conferences.

The first piece of advice, which it is currently far too late to act upon, is to make sure you’ve got your hotel booked. Barry Burd wrote a JavaOne article for TheServerSide a couple of years ago that included some insights on how to find a last-minute hotel in San Francisco that isn’t obscenely far from the venue, although given the limited availability when I did a quick search on Expedia earlier this week, I’d say you’d be lucky to find a hotel in Oakland or San Jose for a reasonable price, let alone San Francisco.

Schedule those Oracle Code One sessions

For those who have their accommodation booked, the next sage piece of conference advice is for attendees to log on to the Oracle Code One session scheduler and reserve a seat in the sessions you wish to attend. Various sessions on the Eclipse Microprofile, microservices and reactive Java are already overbooked. The longer you wait to formulate your schedule, the fewer the sessions you’ll have to choose from.

When choosing sessions, I find the speaker to be a more important criterion for discernment than the topic. Most speakers have a video or two up on YouTube of them doing a presentation. Check those videos out to see if the speaker is compelling. An hour can be quite a long time to sit through a boring slide show. But an exciting speaker can make an hour go by in an instant, and if you’re engaged, you’re more likely to learn something.

Skip the Oracle keynotes

One somewhat contrarian piece of advice I’m quick to espouse is for attendees to skip the Oracle keynotes, especially the morning ones. That’s not to say the keynotes are bad. But getting to the keynotes early enough to get a seat is a hassle, and you can’t always hear everything that’s being said in the auditorium. A better alternative is to stream the keynote from your hotel room, or better yet, watch the video Oracle uploads to their YouTube channel while you’re eating lunch.

Enjoy the party

The other big piece of advice? Enjoy San Francisco, especially if it’s your first time in the city. It’s the smallest alpha city in the world, but it is an alpha city. There are plenty of parties, meet-ups and get-togethers you’ll find yourself invited to, and it’s worth taking up any offers you manage to get. Having said that, keep an eye on how much gas you have left in the tank at the end of the day, because you want to be able to make it to all of the next-morning sessions you’ve scheduled for yourself.

If it’s your first time attending a major conference, I assure you that you’ll have a great time at the first Oracle Code One. San Francisco is a great city, and the greatest minds in the world of modern software development will be joining you in attendance.


October 14, 2018  10:09 PM

To the brave new world of reactive systems and back

Uladzimir

Reactivity is surely an important topic, though I believe we’ve spent too much time talking about reactive programming, while only briefly mentioning the other implementation — reactive systems. It’s time to reevaluate.

Reactive systems deserve particular attention in today’s IT world, because we need complex, highly distributed applications that can handle high loads.

Before we explore reactive systems, their basic principles and practical application, let’s quickly brush up on the core idea of reactivity.

A quick warm-up on reactivity

The big idea behind reactivity is to create applications that will gracefully deal with modern data that is often fast, high volume, and highly variable.

Reactive systems are not the same thing as reactive programming. Reactive programming is used at the code level, while reactive systems deal with architecture. And they don’t imply the obligatory use of reactive programming.

In 2014, the Reactive Manifesto 2.0 boiled the basic concepts of modern reactive systems into four fundamental principles:

  • Responsiveness: Be available for users and, whatever happens (overload, failure, etc.), be ready to respond to them.
  • Resilience: Stay immune to faults, disruptions, and extremely high loads.
  • Elasticity: Use resources efficiently and balance machine performance — scaling vertically up or down — or easily adjust the number of machines involved — scaling horizontally up or down — depending on the load.
  • Message-driven character: Embrace completely non-blocking communication via immutable messages sent to addressable recipients.

Though the principles often get enumerated as a list, they are closely interrelated and can be modified to read as follows:

A reactive system is based on message-driven communication that guarantees exceptionally loose coupling of its components and thus enables the system’s elasticity and resilience, both of which contribute to its high availability (responsiveness) to users in case of overloads, disruptions, and failures.

What you need to build a reactive system

Reactive systems are about architectural decisions. There is no need to use a specific language to create a reactive application. There’s also no obligation to call on any particular framework or tool.

There are frameworks, however, that adhere to the reactive philosophy and make a system’s implementation simpler. For example, you can leverage the benefits of Akka actors, the Lagom framework, or Axon.

As we mentioned, reactive systems are based on certain design decisions, some of which are detailed in the book Reactive Design Patterns by Brian Hanafee, Jamie Allen, and Roland Kuhn. We’ll give you a taste of several popular patterns that make a system reactive.

  • Multi-level organization: Push potentially unstable or dangerous components — ones that are often overloaded, vulnerable to frequent changes or exposed to third parties — down a level. That way, if failures or disruptions occur, there will always be a higher-level component to continue the work or to tell a user that something is wrong.
  • Message queues: Separate data consumers from data producers. Back pressure mechanisms allow a reactive system — not its users — to control the speed of the data flow. Rather than letting a massive data flow shock the server, back pressure lets the server extract messages from the queue at a convenient and safe speed (see the sketch after this list).
  • Pulse patterns: An accountable server sends health-check responses to a responsible server at regular intervals. This prevents messages from going into the void in case of an unnoticed server failure.
  • Data replication: Different data consistency patterns — active-passive, consensus-based, conflict-free — maintain the system availability in case of failures and crashes in database clusters.
  • Safety locks: Continuously track the state of servers. In case of too many breakdowns or sharply increased latency, safety locks automatically remove the server from the process and let it recover.
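Back pressure, in particular, is concrete enough to show in code. Java 9’s java.util.concurrent.Flow API bakes it into the subscription contract; in this minimal sketch, the subscriber asks for one message at a time, so a fast producer can never flood it.

import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class BackPressureDemo {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override public void onSubscribe(Flow.Subscription s) {
                    subscription = s;
                    s.request(1); // back pressure: ask for exactly one message
                }

                @Override public void onNext(String message) {
                    System.out.println("processed " + message);
                    subscription.request(1); // pull the next one at our own pace
                }

                @Override public void onError(Throwable t) { t.printStackTrace(); }
                @Override public void onComplete() { System.out.println("done"); }
            });

            for (int i = 0; i < 5; i++) {
                publisher.submit("message-" + i); // blocks if the subscriber falls too far behind
            }
        }
        Thread.sleep(500); // give the asynchronous subscriber time to drain
    }
}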

When to switch to a reactive approach

Disclaimer: In the vast majority of cases, building a reactive architecture is rather costly and requires a lot of effort and time. It requires the introduction of mediator components, data replication, etc. If you choose the reactive approach, make sure your application really needs it.

Simply put, you adopt reactive architecture when you need its benefits. You turn to it when mission-critical applications can’t fail or you need to tackle extremely heavy loads. If you build an application with more than 100,000 users and want impeccable UX with increased responsiveness and high availability, then a reactive architecture may be worth it.


October 14, 2018  9:37 PM

How to choose the right virtual reality development engine

Charles Dearing

The promise of virtual 3D worlds has captivated programmers for decades. Virtual reality (VR), once a faraway fiction, is becoming a reality. Failures like Nintendo’s infamous Virtual Boy are now a distant memory, and major successes like PSVR and Google Cardboard have become the norm. In fact, Statista projects incredible growth for VR, estimating that the market will expand to $40 billion by 2020.

It’s more feasible now than ever to create your own VR applications. The cost to participate in VR, for both consumers and developers, has lowered dramatically in recent years. A plethora of tools is available for new development teams to enter the fray as well.

One of the most important elements of your VR development process is the engine you build with. Unless you have unlimited time and resources, it’s in your best interest to use a commercial engine rather than create one yourself.

Develop VR apps with premium engines

Akin to other development environments, there are many free-to-use and open source engines at your disposal. You’ll have plenty of options to choose from, but you’ll need to educate yourself about these tools to ensure you’re building the best project with regard to your specifications.

The Unreal Engine and Unity have been used for a myriad of 3D video games and VR applications. They have been classic choices for video game development and even for mobile app development.

The Unreal Engine is free-to-use and allows development teams to create their own interactive applications at no cost. The caveat, however, is that you’ll have to share a small percentage of your profits with the Unreal team.

How to build VR apps without coding

Interestingly, you can build your entire application with simple logic through Unreal Engine’s Blueprint Visual Scripting functionality. With Blueprints, you can design programmatic actions, methods, and behaviors without writing a single line of code.

You won’t find this design feature on any other major engine. If you have a design-heavy team, filled with more analytical designers and artists than programmers, you may see the appeal of Unreal Engine.

Unity is a developer favorite

Unity, a similar if underpowered engine compared to Unreal, costs a small upfront fee, but you won’t have to pay out any royalties once you’ve finished your application. In order to use Unity, though, you’ll also need to have a team with strong C# skills.

If you don’t have a strong background in C# or the funds to bring on a more experienced C# programmer, you should strongly consider using the Unreal Engine. If your team has the programming ability and design ability, Unity can be a great and relatively low-cost option that sacrifices little in terms of quality.

There are great open source options, too

If you’re looking for the lowest cost possible, you’ll want to investigate completely free engines. Godot may be serviceable, but VR compatibility is not completely assured. You’ll have to devote more time and resources to fit the engine to your needs.

Completely open source VR-ready engines are also available for use. Apertus VR is one such example. It’s a set of embeddable libraries that can easily be inserted into existing projects. Open Source Virtual Reality (OSVR) is another VR framework that can help you begin developing your own games. Both OSVR and Apertus VR are fairly new creations, however, and you may experience bugs and other issues you wouldn’t with Unity or Unreal.

VR applications are incredibly hard work, but with a bit of persistence and some help from experienced developers, you’ll get the hang of the VR development process.

While you can’t control a great deal of what happens within the development process itself, you should make absolutely sure that you select the right engine or VR framework. Take the time to weigh the pros and cons of the available tools. It’s the most important decision you’ll make in the development process.


September 28, 2018  1:40 PM

False ‘DevOps encompasses culture and collaboration’ myth destroyed

Cameron McKenzie

There are plenty of concerns organizations should have as they purchase a TripTik and chart a path across their DevOps roadmap, but hand-wringing about nurturing a DevOps culture and fostering an environment of collaboration and communication between operations and development teams isn’t one of them. If anything, it’s a time waster and a DevOps anti-pattern.

Why ‘DevOps encompasses culture and collaboration’ is false

Figuring out how DevOps encompasses culture and collaboration is a great topic of debate for witless members of the human resources team who have nothing more productive to do, but it’s poisonous if such discussions start consuming the clock cycles of developers and engineers. Those people have real jobs to do.

When anyone suggests that “DevOps encompasses culture and collaboration,” I’m always quick to point out that collaboration may indeed be part of the equation, but not in a positive way. DevOps is not about increased collaboration. DevOps is not about improving communication.

When you are doing DevOps right, there should be less collaboration, not more. The whole point of continuous delivery and continuous integration (CI/CD) — core tenets of the DevOps philosophy — is to remove as many manual processes as possible. That means communicating and collaborating less, not more.

And while I do despise all the talk about development and operations silos, the fact is that when we use DevOps tools like Chef, Jenkins, Docker and Git properly, human interactions should decrease. If I can wrap my Java app in an embedded Tomcat container, build it with Docker and deploy it to the cloud without anyone in the operations team making me fill out a form or fire off an email, the need for collaboration actually goes away. Just for this reason alone, the assertion that “DevOps encompasses culture and collaboration” is a complete and total misnomer.
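To make that concrete, here is a minimal sketch of an embedded Tomcat launcher, using the org.apache.catalina.startup.Tomcat class from the tomcat-embed-core dependency; the class and servlet names are illustrative. Once the app owns its container like this, the Docker image needs nothing from an operations team but a place to run.

import java.io.File;
import java.io.IOException;
import java.io.Writer;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.apache.catalina.Context;
import org.apache.catalina.startup.Tomcat;

public class EmbeddedApp {
    public static void main(String[] args) throws Exception {
        // The app carries its own servlet container, so there is no
        // separately installed Tomcat for an operations team to manage.
        Tomcat tomcat = new Tomcat();
        tomcat.setPort(8080);
        tomcat.getConnector(); // explicitly create the default HTTP connector

        Context ctx = tomcat.addContext("", new File(".").getAbsolutePath());
        Tomcat.addServlet(ctx, "hello", new HttpServlet() {
            @Override protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                resp.setContentType("text/plain");
                try (Writer w = resp.getWriter()) {
                    w.write("deployed with no ops hand-off\n");
                }
            }
        });
        ctx.addServletMappingDecoded("/*", "hello");

        tomcat.start();
        tomcat.getServer().await(); // block so the container keeps serving
    }
}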

It’s also worth mentioning that if your organization does have collaboration and communication issues, that’s a human resources problem, not a technical one. Don’t blame the IT department for the incompetence of the hiring manager.

Let’s debunk the false DevOps culture myth

I’ve debunked the DevOps culture myth before, but deprogramming converted DevOps cultists is a never ending battle.

Effective DevOps practices may have a long term impact on the culture of an organization, but any DevOps culture change is a result, not a prerequisite for adopting the process. Every single DevOps evangelist seems to get this simple concept wrong, and quite frankly, it’s befuddling.

Culture is defined by the various processes and procedures individuals within a group commonly perform. In Amsterdam, people tend to ride bikes. In the Ukraine, residents seem to really like cabbage. Those are cultural traits.

Why do the Dutch ride bikes? It’s because Amsterdam is as flat as a pancake. In fact, pancakes only wish they were as flat as Amsterdam. It’s that condition that makes cycling appealing to more than just a small group of spinning enthusiasts. In the Ukraine, cabbage is inexpensive and abundant, and that results in a cornucopia of tasty cabbage themed dishes. Nobody went around telling people in Amsterdam or the Ukraine that they needed to change their culture and start riding bikes and eating cabbage. Pre-existing conditions determine cultural patterns.

The same thing applies to DevOps culture. Complaining that a certain organization needs to change its culture is mindless and unproductive. If DevOps really encompasses culture and collaboration, it does so only as an output, not an input. If you want to do DevOps right, the correct approach is to focus on the inputs. If the right inputs are provided, a DevOps culture will emerge.

DevOps cultural inputs

And what are those inputs that produce a DevOps culture? Giving developers tools they like is a start. If developers are allowed to use a distributed version control system like Git, which provides facilities to work independently and in an isolated manner, they’ll be more likely to check code in multiple times a day. If you bring build tools such as Apache Maven, Jenkins CI or Atlassian Bamboo into the mix early, running continuous code quality routines will become the norm. Throw in a good collaboration tool like JIRA and the Agile sprints will go more smoothly as well.

Another big culture change happens after organizations provide on-demand cloud computing resources. Traditionally, the operations team would make developers jump through a bunch of hoops, fill out a bunch of forms and wait for an extended period of time before provisioning a prototyping environment or a sandbox server. If cloud-based resources are available on demand, then time thieving, resource-requisition processes get eliminated. That results in a culture change.

And it’s time saving, silo-destroying processes like these that drill to the root of the DevOps philosophy. After all, when developers begin using on-demand cloud computing resources to provision their own container hosting services, where exactly does the delineation between development and operations occur? The melding of development and operations is a result of using the right tools and building the right processes around them. That’s what DevOps is all about, and that’s the only manner in which “DevOps encompasses culture and collaboration.”

DevOps encompasses culture and collaboration: True or false?

Don’t buy into the false DevOps culture myth and don’t buy into the call for DevOps teams to collaborate and communicate better. Those are talking points of the uninitiated. Prognosticators dwelling on them should be dismissed. Smooth the ground and introduce tools that developers and operations teams like, while making on-demand resources more abundant than cabbage in the Ukraine. If you do that, things will start to change.

If you want to do DevOps right, focus on the inputs, not the outputs.


September 1, 2018  2:41 PM

Fix ‘could not reserve enough space for 2097152KB object heap’ JFrog Artifactory startup error

Cameron McKenzie

You’ve decided to set up a Maven repository, and you’ve settled on JFrog Artifactory as your artifact life-cycle management tool, but every time you try to run the product you’re getting a could not reserve enough space for 2097152KB object heap startup error message. Don’t fret. There’s an easy fix.

JFrog Artifactory’s 2097152KB object heap startup error

In the \bin directory of your JFrog Artifactory installation, there’s a file named artifactory.bat which contains a number of settings. The offending one is JAVA_OPTIONS, which sets the -Xmx flag to two gigs or, as the Artifactory startup error more accurately states, to a 2097152KB object heap. To fix the could not reserve enough space for 2097152KB object heap error message, just change the -Xmx setting to something more conservative, such as 512m or 1024m.

Before fixing the Artifactory object heap error:

set JAVA_OPTIONS=-server -Xms512m -Xmx2g -Xss256k -XX:+UseG1GC

After fixing the Artifactory object heap error:

set JAVA_OPTIONS=-server -Xms512m -Xmx1024m -Xss256k -XX:+UseG1GC

Change the Artifactory Xmx setting to a smaller value, such as 1024m, instead of 2g

Obviously, the artifactory.bat file is for Windows installations. There is an artifactory.sh file to edit if you are running the JFrog Maven repository on a Linux distribution.

Production Artifactory Maven repos

It should also be noted that the default JFrog Artifactory JAVA_OPTIONS sets Xmx to 2g because the people who build the product believe that is a good starting point for production systems. On a production machine, a better approach to fixing the could not reserve enough space for 2097152KB object heap startup error is to actually allocate more memory to the Docker container or the virtual machine. But for test environments, or for people who just want to learn Maven along with how an artifact-based DevOps tool works, this quick fix should be more than sufficient.
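If you do want to keep the 2g heap on a containerized install, raise the container’s memory ceiling instead of lowering -Xmx. Docker’s -m flag does this; the image name below is illustrative, since it varies by Artifactory version and installation:

docker run -m 4g -p 8081:8081 --name artifactory docker.bintray.io/jfrog/artifactory-oss:latest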

Putting out fires on JFrog Artifactory startup


August 26, 2018  8:15 PM

Is there a place for old developers on young development teams?

Bob Reselman

It’s a tale too often told. A young developer shows up bright eyed and bushy tailed to start a new career in software development. There’s code to create, documentation to write, and problems to solve. The world is a never-ending adventure.

Before you know it, twenty years go by. By then, just about all of the developer’s colleagues have moved into management.

Hands-on work is for the young. They can do the all-nighters. Their energy is boundless. The aging contributor who just wants to create has become the oldest person on the team. She wonders if it was a mistake to stick with the code. Maybe it was better to just go into management.

The story’s not unusual. In fact, I wrote about it in a previous article in which I made the argument that software development is no country for old men … or old women, for that matter.

But there are exceptions to the conventional thinking. There are people out there who’ve made the conscious decision to stay close to the code and not to go into management. They’ve created lives that are satisfying and prosperous. And they continue to create and contribute. Their stories are worth knowing. So, I’ll share. The first person that comes to mind is Charles Petzold.

Insights from Charles Petzold

If you’ve programmed Windows, chances are you’ve used one of Charles Petzold’s books. He’s written various definitive guides for programming Windows. In addition, he’s written other books about software and the history of computer programming. His book Code: The Hidden Language of Computer Hardware and Software is an engaging read.

Recently, I had the opportunity to ask Charles why he stayed with code and never went into management. His response was thought-provoking:

Between 1985 and 2014 — when I joined Xamarin as a full time employee — I was a freelancer, and I thought of myself more of a writer than a programmer — except when I needed to do consulting — so the idea of management never occurred to me.

I guess the equivalent of management for a freelancer would be starting my own consulting firm, and hiring employees. Or even starting a software company and trying to market something. But I simply don’t have the entrepreneurial gene.

My brief experience as a manager (when I worked for NY Life between ’75 and ’85) made me uncomfortable. The interpersonal relationship was awkward.

I like working. I don’t like “managing” others. It seems like a strange word to use when referring to other human beings.

Wow! You’ve got to admire the unfettered honesty. It is indeed thought-provoking.

Another person who made the conscious decision to stay close to the code is noted technology writer, Steven J. Vaughan-Nichols.

Steven J. Vaughan-Nichols

Steven J. Vaughan-Nichols, aka sjvn, is an internationally known technology writer and industry analyst. He’s been writing about technology and the business of technology for a long time, since CP/M-80, one of the early operating systems for the personal computer. Sjvn is considered a trusted authority by many because he stays close to tech. He too has steered clear of the management track. He elaborated:

In my case, I went from being an OK programmer to being a decent sysadmin and finally to being a pretty good, if I say so myself, technology journalist. I tried management every step of the way, but the bottom line is, I’m a mediocre manager. I could have faked it — I fear many IT managers do just that — but I knew I wasn’t good at it. I have zero interest in doing lousy work.

Again, another instance of unfettered honesty. The same can be said of dotNetDave.

dotNetDave

David McCarter is a noted developer and Microsoft MVP (Most Valuable Professional). Dave has written a number of books and published many videos about computer programming. His YouTube series, dotNetDave Explains, is a hit among .NET developers. Dave has been coding for a long time and has no intention of doing anything else. For him, it’s the challenge of continuous learning. He explained:

In any career I’ve done, I’ve always wanted to progress. If I’m not progressing, I’m not growing. The natural progression of a developer is beginner, intermediate, senior, and then maybe a lead and then maybe into management. Since the early 2000s I’ve known this, but have purposely avoided going into management. I like coding too much. As soon as you go into management, you stop coding. The reason I like coding — and I’ve talked about this in my books and conference sessions — is that it’s like guitar playing. I can never learn every song. There’s always something new I can learn. Programming is the same way. This world changes everyday. Coding is a way for me to learn. And, as I’ve said many times, the day I stop learning is probably the day I die.

Dave is an example of a life lived in the pursuit of continuous learning. He’s influenced many over the years. Younger developers are following his lead. Take 32-year-old developer and architect Derek Zott, for example.

Derek Zott

Derek Zott is a programmer who has been at it since he was 15 years old. He works at a well-known software company. I met him through my wandering about the Internet. As I talked to Derek, I was taken back to a time when the code was everything, when the principal ambition was to make good stuff. Derek said:

Yes, I plan on staying in development doing hands-on coding. To me, that means at least 1x a week, I’m in code. My guiding principle is that if I’m doing good work — quality of design and its impact to the customer/business/organization — my career path will work out for the best. Moving to management doesn’t give me the perspective, freedom or creativity I need to make an impact.

Derek sees a very viable career path staying close to the code. His response is refreshing. Given his dedication, it’s fair to imagine that he’s going to make some significant contribution to the profession.

Putting It All Together

There’s an old joke in coding circles that goes like this: One of the biggest fears of a true software developer is not getting old, it’s waking up one day and finding out that there’s no more code left to write.

Software development is both an art and a science. It requires a unique type of creativity. The truly gifted developers — from coders to SDETs, system architects and product designers — are rare. They’re also driven. They create because they have to. It can take a lifetime to achieve greatness.

Yet sadly for many, software development is perceived as a career with an expiration date. The conventional wisdom is that you spend your younger years slinging code and then move on to other things, maybe management, maybe even becoming a CEO. I call it the Bill Gates path to glory.

But, there is another path forward that is just as challenging, lucrative and prestigious. It’s the path taken by the likes of Steve Wozniak, Martin Fowler and Grady Booch. You stay close to the code and do the work that needs to be done in the pursuit of excellence. You spend a lifetime in the creative endeavor. But just as importantly, your contributions will serve as stepping stones that allow others to do work that matters. For those who have stayed close to the code, such reward is worth the endeavor.


August 16, 2018  1:05 AM

Cybersecurity risk management doesn’t need to be all or nothing

Daisy McCarty

Cybersecurity risk management should be a concern for organizations of all sizes. New threats and data breaches make the news every few days. But as vendors and cybersecurity risk management consulting firms can attest, far too many companies still lag behind when it comes to implementing safeguards. In part, this is due to the fragmented nature of the available products and services on the market. Even with options available to solve cybersecurity challenges, however, businesses may not know where to start.

So many options, so many gaps

Tulin Sevgin, cyber risk management specialist with InConsult, has found it difficult to come up with comprehensive protection for her company’s clients. Like most risk management consultancies, InConsult wasn’t looking to become a technology firm. But it could hardly ignore the pressing need for cybersecurity risk management as part of the total picture. The race was on to find a vendor that could best serve its clients. Sevgin took this search seriously.

“Instead of developing our own product from scratch, I went to the market to see what was out there, what our competitors were doing, and what I could do differently to give us an edge,” Sevgin said.

She discovered that there were plenty of vendors in the space, but most were aiming at solving the same handful of problems.

“There are a lot of companies out there that do penetration testing,” Sevgin said. “But there aren’t that many doing things like vulnerability management, cloud scanning, external APIs and website scanning, and then also scanning the internal environment to see where your weaknesses are.”

Cybersecurity risk management choices

Instead of finding three or four vendors that each specialized in one of these areas, her goal was to get it all in one place. Eventually, InConsult found a company that did it all and teamed up with it.

The selected vendor provided security across all the following areas:

  • Third-party vendors
  • Externally-facing websites and APIs
  • Networks and applications
  • Servers and clouds
  • Personally Identifiable Information (PII) and sensitive business data

That’s quite a lineup. Of course, not every business needs to pay for every possible type of security. However, there could be an advantage to working with a vendor or consulting firm that understands the full scope of what’s available to help determine the right direction. It all begins with an accurate assessment.

Where to start

Start with a plan. Determine the potential risks, the possible fallout, the budget available to shore up security, and the risk tolerance of the organization. For example, a public utility responsible for critical infrastructure requires a high level of cybersecurity, whereas a local business has much more modest needs.

According to Sevgin, companies don’t have to look far—or even pay anything—to get started. Free resources are readily available.

“For best practice purposes, the NIST framework is good to look at,” Sevgin said. “These are great guidelines, not the kind that you need to implement from beginning to end. You can choose what’s most effective to address your weaknesses in a way that fits your organization.”

The National Institute of Standards and Technology (NIST) espouses the well-known five-factor approach to cybersecurity:

  1. Identify: Understand the business context, resources tied to critical functions, and potential scenarios.
  2. Protect: Develop and implement safeguards to ensure delivery of critical services, limiting the impact of a potential incident.
  3. Detect: Ensure the ability to identify cybersecurity events in a timely manner through activities such as continuous monitoring and anomaly detection.
  4. Respond: Determine what will happen in the event of a detected cybersecurity incident, including appropriate technological, business activity, and PR responses.
  5. Recover: Put plans in place for resilience and restoration of any capabilities or services impaired by a cybersecurity incident.

NIST recommends mapping the security requirements uncovered by this assessment process with answers already on the market. Interestingly, the institute also recognizes the common difficulty of finding it all in one place. “The objective should be to make the best buying decision among multiple suppliers, given a carefully determined list of cybersecurity requirements. Often, this means some degree of trade-off, comparing multiple products or services with known gaps to the Target Profile.”

Shift attitudes toward cyber risk management

In Sevgin’s experience, there are several misconceptions that hold businesses back from taking adequate steps toward a more secure cyber environment. Companies that have not yet been breached may feel invulnerable.

“They say, ‘Why do we need it? We’re fine, we’ve never been breached,’” Sevgin said. “They see cybersecurity risk management as something complex and technical, like the money spent on it is just going into a black hole. Or they just assume that IT has it all covered.”

But that complacent attitude is beginning to change. Folks from senior management down to the operational level are starting to appreciate cybersecurity.

“When these compliance obligations came in like GDPR, it pushed them to find out what’s going on with their cybersecurity,” Sevgin said. “I think we’ll see a cultural shift in the next year or two causing the business to think about cybersecurity as part of their everyday job rather than just relying on IT to do it.”

An exercise in assessing risk

Sevgin offered key advice for the first cybersecurity exercise companies should go through. It’s an approach that entails exploring the worst-case scenario by putting together a data breach response plan.

“How you deal with a breach is very important because getting it wrong leads to reputation damage internally and externally,” Sevgin said.

So what does the process entail?

“It’s easy to do and doesn’t require a lot of money. Once you start writing that plan — and you can get a template from a consulting company or government website — you see how it fits into your existing business continuity and crisis management plan. It really forces you to think about decisions that need to be made on the spot if there is a data breach. The next step is to do a tabletop exercise to put that plan to the test.”

The data breach response plan determines how you manage the incident, the potential reputation damage, and regulatory compliance. Once businesses start writing a plan, they see how it fits with business continuity as a whole. They may also realize their current precarious risk status and recognize they probably don’t have a handle on all their data.

“They start asking questions,” Sevgin said. “‘What data do we have? How much of it is critical or sensitive?’ That’s the time to do a data mapping project to figure this out and lock it down.”

The greatest obstacle to cybersecurity risk management is still a lack of awareness.

“Stay open-minded and don’t be afraid to educate yourself and ask questions so you can understand,” Sevgin said.

That’s a small price to pay when the risk of doing nothing is so high.

