The new JDK™ 9 early access release contains JEP 143 (Improve Contended Locking), a JDK Enhancement Proposal aimed at improving the performance of contended monitors.
Monitors are used by the Java synchronized statement to lock access to a code block. If a synchronized block is entered by many threads, the monitor becomes contended, which can degrade performance dramatically.
We therefore benchmarked a synchronized method with JMH across different JDK versions. Our benchmark showed that with 16 threads, JDK 8 needed 2580 ns for our method while JDK 9 needed only 1655 ns, making JDK 9 roughly 56 percent faster on this workload. In JDK 9, contended monitors are almost as performant as contended ReentrantLocks.
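The numbers above came from JMH; as a plain-Java illustration of the kind of workload such a benchmark measures (not the original benchmark code), the sketch below runs 16 threads that hammer two shared counters, one guarded by a monitor via synchronized and one by a ReentrantLock:

```java
import java.util.concurrent.locks.ReentrantLock;

public class ContendedLockDemo {
    // Runs `threads` threads that each increment two shared counters,
    // one guarded by a synchronized block (a monitor) and one by a
    // ReentrantLock, and returns the final totals.
    static long[] run(int threads, int perThread) throws InterruptedException {
        final Object monitor = new Object();
        final ReentrantLock lock = new ReentrantLock();
        final long[] counts = new long[2]; // [0] = monitor, [1] = ReentrantLock
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < perThread; j++) {
                    synchronized (monitor) { counts[0]++; } // contended monitor
                    lock.lock();
                    try { counts[1]++; } finally { lock.unlock(); } // contended lock
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        return counts;
    }

    public static void main(String[] args) throws InterruptedException {
        long[] totals = run(16, 100_000);
        System.out.println(totals[0] + " " + totals[1]); // prints 1600000 1600000
    }
}
```

For real measurements you would put each increment in its own JMH @Benchmark method rather than timing by hand, since JIT warmup and dead-code elimination easily distort naive timings.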
Earlier this summer, Lightbend CTO & Co-founder (and creator of Akka) Jonas Bonér and Enterprise Advocate Kevin Webber were seeing their vacation time more as a far-off dream than a plausible reality.
Jonas had just published the free O’Reilly book Reactive Microservices Architecture, which John Nestor called “the best single source of the key ideas needed to design modern Reactive systems”, and we wanted to make it even easier to get the principles of microservices architectures into the hands of Enterprise Architects, CTOs, App Dev Managers, and other decision makers. With the story of going Reactive with microservices in place, Jonas and Kevin set to work breaking down this concise, valuable book into a three-part series.
Part 1 – Microservices, Monoliths, SOA and How We Got Here
As microservices-based architectures continue to rise against traditional monolithic systems, it’s useful to take a look back to the past to understand how we got to where we are today with heritage applications, and the challenges they pose to productivity, agility, and performance.
Part 2 – The 6 Traits of Reactive Microservices: Isolated, Asynchronous, Autonomous and more
It’s a different world, and when it comes to computing, everything from multi-core processors to the price of RAM and disk have changed remarkably compared to 10 years ago…
Part 3 – Exploiting Reality with Microservices in Production Systems
One microservice is no microservice: they come in systems. To get the benefits of truly distributed systems (i.e., not “distributed monoliths”), a new set of challenges presents itself to architects and development teams.
In this article we will try to understand the Java memory model and how garbage collection works. I have used the JDK 8 Oracle HotSpot 64-bit JVM. First, let me depict the different memory areas available to the Java process. Once we launch the JVM, the operating system allocates memory for the process. The JVM itself is a process, and the memory allocated to it includes the heap, Metaspace, the JIT code cache, thread stacks, and shared libraries. This is called native memory: the memory provided to the process by the operating system. How much memory the operating system allocates to the Java process depends on the operating system, the processor, and the JRE. Let me explain the different memory blocks available to the JVM.
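Two of these areas, the heap and the non-heap pools (Metaspace and the JIT code cache), can be inspected from inside a running JVM with the standard MemoryMXBean; a minimal sketch:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryUsage;

// Inspecting the heap and non-heap (Metaspace, JIT code cache, ...)
// areas of a running HotSpot JVM via the standard MemoryMXBean.
public class MemoryAreas {
    static MemoryUsage heap() {
        return ManagementFactory.getMemoryMXBean().getHeapMemoryUsage();
    }

    static MemoryUsage nonHeap() {
        return ManagementFactory.getMemoryMXBean().getNonHeapMemoryUsage();
    }

    public static void main(String[] args) {
        // The heap holds Java objects; non-heap covers Metaspace and the
        // JIT code cache. Thread stacks and shared libraries are native
        // memory that appears in neither figure.
        System.out.printf("heap: used=%d committed=%d max=%d%n",
                heap().getUsed(), heap().getCommitted(), heap().getMax());
        System.out.printf("non-heap: used=%d committed=%d%n",
                nonHeap().getUsed(), nonHeap().getCommitted());
    }
}
```

Note that these figures only cover memory managed by the JVM itself; the full native footprint of the process, as seen by the operating system, is larger.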
As DevOps starts to mature with the help of CI and CD tools, roles and responsibilities continue to shift. For developers, this evolution into DevOps culture provides an opportunity to gain more perspective on how every decision impacts the software lifecycle. Along with that greater knowledge comes the power to produce better software at a higher velocity than ever before.
The left shift in DevOps is real
Recent innovations in technologies and tools may relieve pressure on Ops by giving developers a better way to approach everything from testing to deployment. For example, containers promise to give Devs a way to Package Once, Deploy Anywhere, ensuring that Ops receives fully portable images that can run in virtually any environment.
Arun Gupta, Java Champion and VP of Developer Advocacy at Couchbase, pointed out that simplifying the workflow helps teams take joint responsibility for success instead of making every issue someone else’s problem. “Containerization reduces the disconnect between development and staging. Right now, you may have something that works on a laptop but fails in Ops. It’s not uncommon for Development to place the burden on Ops to fix the problem.” With solutions like Docker, it is possible to preempt issues by incorporating a solution that works from development through testing, staging, and production.
On the tooling side of the equation, performance is also becoming a part of the Dev purview. Alon Girmonsky, CEO of Blazemeter, believes Dev should have the ability to address issues like scalability and load testing head on. “We want to democratize performance testing. Metering has traditionally been on the Operations side. Today’s tooling enables everyone to test for performance. This way, everyone is empowered to improve performance.”
Rather than being something that is considered at the end of a release cycle, performance testing gets baked in. Developers can easily create unit tests for performance because they know the desired parameters before they even start coding. More visibility in the early stages prevents later bottlenecks since developers don’t have to wait for the performance team to script tests.
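As a hedged sketch of what such a baked-in performance unit test might look like (the latency budget and the operation under test below are illustrative assumptions, not taken from the article), the idea is that the budget is fixed before coding, so the test can fail the build as soon as the code exceeds it:

```java
// A "performance unit test" sketch: fail if the average call time
// exceeds a budget chosen before the code was written.
public class LatencyBudgetTest {
    // Average nanoseconds per call over `iterations` runs of `op`.
    static long avgNanos(Runnable op, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) op.run();
        return (System.nanoTime() - start) / iterations;
    }

    public static void main(String[] args) {
        long budgetNanos = 1_000_000; // assumed budget: 1 ms per call
        long avg = avgNanos(() -> String.valueOf(Math.sqrt(42.0)), 10_000);
        if (avg > budgetNanos) {
            throw new AssertionError("budget exceeded: " + avg + " ns/call");
        }
        System.out.println("within budget: " + avg + " ns/call");
    }
}
```

In practice this assertion would live in a JUnit test or a JMH-backed check in the CI pipeline rather than a main method, but the principle is the same: the performance expectation is expressed in code alongside the functional tests.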
With the left shift, where is the Ops in DevOps?
Even with the emergence of tools and services that make operations such as scaling fairly simple, Chad Arimura, Co-Founder & CEO at Iron.io, provided assurance that DevOps is still just as much about Ops as Dev. “With higher level cloud services on AWS like DynamoDB, you can scale up without worrying about physical infrastructure. But Operations is always an important part of the team. We don’t want developers to create something and then just throw it over the wall to deploy. They need to work together to build scalable applications regardless of the underlying architecture.”
Ops also retains a great deal of responsibility for processes such as build and deployment automation, monitoring, and configuration. While tools like Jenkins Pipeline and Amazon CloudWatch can help make some tasks easier, Operations teams face fresh challenges in managing multi-cloud environments and an ever-expanding application portfolio. As organizations get more done faster with a wider variety of resources, Ops will have the opportunity to innovate in the areas of infrastructure and lifecycle management, bringing even more value to the DevOps team.
Security seeks to join DevOps at the table
Late stage stakeholders are seeking to have a voice earlier in the design and development of software as well. Joshua Corman, Director of the Atlantic Council’s Cyber Statecraft Initiative, spoke his mind at the DevOps Enterprise Summit. “The IT security community has realized that security is dead. We can’t keep doing it late in the market, at end stages, bolted on.” That wasn’t a message of despair. Instead it opened up a much needed conversation about what comes next. “Maybe this is a good thing. DevOps was so transformational and delivered so much value that we knew we had to get in front of it. More than that, we had to become a third participant that was driving value with them.”
In Corman’s view, CI and CD should not have the goal of simply increasing the speed of delivery. “It’s not about going faster and doing more deployments a day. It’s about delivering value. This takes into account speed, quality, compliance, sustainability, and maintainability. We need to look at net innovation rather than raw innovation.”
For teams that rely heavily on open source, which includes basically every software development team on the planet, code hygiene and smart sourcing will make it possible to finish more projects on time and on budget to capture market advantage. DevOps will directly benefit by spending less time on unplanned, unscheduled work.
Corman recommends that software teams select fewer and better open source suppliers, ensure that only the most recent code versions are used, and keep track of which code ends up where. With as much as 90% of the code in a typical project coming from third parties, it is evident that Security must have a voice at the DevOps table to institute best practices that speed development while protecting against threats. Prepare to see the evolution continue as the DevOps of tomorrow is sure to be DevOpSec.
What’s new with Jenkins 2.0? Well, with the 2.0 release, one of the most notable changes is the installation wizard that walks new users through the basic configuration steps. Making it easier for newbies, Jenkins will now suggest a bundle of plugins for users to install, including popular and helpful tools like GitHub and Mercurial.
Speaking about the new 2.0 release, Jenkins community leader R. Tyler Croy says the batch of plugins recommended by the wizard meets the needs of about 80% of use cases. Of course, Jenkins has made this plugin bundle optional to preserve the flexibility that the CI community likes. Developers can still choose to decline the suggested package and venture out on their own, choosing whatever plugins they prefer.
Croy revealed that one of the most noticeable updates to Jenkins 2.0 is the out of the box experience for new users. The austere screen that greeted users upon initial installation is a thing of the past. “It’s a fairly big departure from what we’ve had before. When Jenkins started out, there wasn’t a whole lot of stuff in the software industry that there is today. Now there are more than 1000 plugins. It’s always been easy to get started. If you have the Java Runtime, you can run a Jenkins instance. But people would download it and run it locally and get presented with a fairly blank screen. For users who don’t know much—or don’t know what they don’t know—this was a pretty empty experience. They had to go looking for plugins and patterns to get started.”
With version 2.0, that empty first-run experience is replaced by the setup wizard and its suggested plugin bundle.
Atlassian adds scale and connectivity
While Jenkins addressed the new user experience, Atlassian brought attention to the other end of the CI evolution—when organizations start to outgrow their platform. The Bamboo team has added a new agent or slave tier for enterprises that need to scale CI or CD past the 100 mark all the way up to 250 agents. Sepideh Setayeshfar, Product Marketing Manager at Bamboo, said this upgrade was made in response to the challenges organizations face when their code base starts to grow. “Development teams need to be able to keep up with productivity and speed without losing quality. They need more agents to run the deployments by scaling up within the tool that they are using.”
Allison Huselid, Head of the Bamboo project, pointed to another improvement in the recent release. “Because we integrate heavily with other tools like Bitbucket Server and JIRA Software, we’ve created an improved integration UI that offers more insight into what it looks like when various tools are connected together. It helps users troubleshoot any issues that are happening from a connectivity perspective.”
Deployment is another area of refinement for Bamboo. According to Allison, “We’ve introduced a REST API that can be called from external systems like Chef and Puppet so you can create much more sophisticated triggers to point at projects from there. You get more refinement in how you orchestrate your deployment process.”
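To make the shape of such a trigger concrete, here is a hedged sketch of firing a build over REST from an external system. The endpoint path, server URL, and plan key below are placeholder assumptions for illustration, not Bamboo's documented API; consult the vendor's REST reference for the real resource names.

```java
import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch: triggering a CI build plan over REST from a tool like Chef or
// Puppet. The path below is an assumed placeholder, not a documented API.
public class RestTrigger {
    static String buildTriggerUrl(String baseUrl, String planKey) {
        return baseUrl + "/rest/api/latest/queue/" + planKey; // assumed path
    }

    // Issues the POST and returns the HTTP status code (2xx on success).
    static int trigger(String baseUrl, String planKey) throws IOException {
        URL url = new URL(buildTriggerUrl(baseUrl, planKey));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        return conn.getResponseCode();
    }

    public static void main(String[] args) {
        // Build the URL only; no network call is made here.
        System.out.println(buildTriggerUrl("https://bamboo.example.com", "PROJ-PLAN"));
    }
}
```

A configuration-management tool would typically call such an endpoint (with authentication added) at the end of a provisioning run, so that a fresh environment immediately receives a deployment.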
Both continuous integration systems are now more secure
Croy was excited to report that Jenkins is now enabling stronger security out of the box at the insistence of the community’s new security officer. “The 2.0 version is set up to require authentication as the default configuration.” This is a smart measure for today’s cloud-based development and deployment environments. “If you don’t set up authentication, running Jenkins on a public cloud can be dangerous. It has a specific look as far as an API profile that makes it easy for hackers to identify when scanning an EC2 or Azure subnet.” This type of security hole puts organizations at risk for cross-site scripting attacks and other threats.
Why has it taken so long to implement this change? Croy pointed out that the organization has been trying to keep older users happy as long as possible. “We had a fervent desire to keep backward compatibility with previous versions. That’s why new security features defaulted to ‘off’.” Upgraded users will still not have these features switched on automatically, but the Jenkins community is strongly encouraging existing users to accept the security features. Other than that, Croy said making the switch to 2.0 should be smooth sailing. “It’s like a normal upgrade. There’s no data migration that has to happen. You don’t have to worry about settings being changed or updating configurations. We didn’t rip things out that people are depending on.”
Allison pointed out that Atlassian’s CI has always required authentication out of the box. However, the Bamboo team is also taking control to a more refined level with additional administrative permissions. “We know that things can go crazy with big teams, so now you can lock down who can have access to add more repositories or projects. That way the configuration doesn’t get out of hand, but assigned permissions also prevent requests from becoming bottlenecks.”
In the coming year, we can expect both popular Continuous Integration platforms to continue adding new features that allow the enterprise to scale, streamline, and secure their CI processes.
Apache Storm is a proven solution for real-time stream processing, handling data with very low latency, high reliability, and fault tolerance. But developing applications on Storm is complex, and there are very limited learning resources available for it. Moreover, Storm solves only one type of problem (i.e. stream processing), while the industry needs a generalized solution that can handle all types of workloads: batch processing, stream processing, interactive processing, and iterative processing. This is where Apache Spark comes into the limelight: it is a general-purpose computation engine that can handle any of these problems. On top of that, Apache Spark is much easier for developers and integrates very well with Hadoop.
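The classic word count illustrates that generality. Written as a plain function it looks like the sketch below; in Spark the same logic runs unchanged in shape over a batch RDD or, with Spark Streaming, over a live stream (the Spark calls are noted in comments rather than imported, so the snippet stays self-contained).

```java
import java.util.Arrays;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Word count as a plain function. The equivalent Spark pipeline
// (requires the spark-core dependency) would be roughly:
//   lines.flatMap(l -> Arrays.asList(l.split(" ")).iterator())
//        .mapToPair(w -> new Tuple2<>(w, 1))
//        .reduceByKey(Integer::sum);
public class WordCount {
    static Map<String, Long> count(String text) {
        return Arrays.stream(text.split("\\s+"))
                     .collect(Collectors.groupingBy(Function.identity(),
                                                    Collectors.counting()));
    }

    public static void main(String[] args) {
        System.out.println(count("spark storm spark"));
    }
}
```

The point is that the same map/reduce shape serves batch, streaming, and interactive use with one engine, instead of maintaining one codebase for Storm topologies and another for batch jobs.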