The hype around DevOps can make it sound like the real value comes from faster time to deployment. But that misses the deeper benefit: maneuverability, argued Michael Nygard, an enterprise architect with Cognitect. “We talk a lot about velocity, but not so much about acceleration, which is the ability to move faster and slower as required,” he said.
Enterprises that can speed up or slow down their pace of development in response to changing conditions are more maneuverable than the competition. The cloud makes infrastructure disposable, and code repositories make code disposable. “Maybe even the teams need to be disposable,” quipped Nygard. This is different from making people disposable, which kills morale.
Getting projects off the ground
Real maneuverability comes from making it easy for teams to break down and start up projects quickly. The value of the individual comes from the team processes involved in completing and starting projects rather than someone’s role in a particular project. Nygard pointed out that some army units can break down and set up a new camp in a few hours, while others take days. The difference comes from the collective experience of navigating thousands of tiny decisions, like how to move the trucks in the right order or where to put the latrines. In the enterprise, this means developing a shared understanding around things like version control and build pipelines.
Team members also need to become adept at intuiting the kinds of decisions others are likely to make in response to shifting conditions. A small-unit commander in the military has a good idea of how other commanders will make a decision. This is something lacking in modern enterprise teams dispersed by function and geography. “Tempo is an emergent property that comes from some characteristics of your organization, and has to be built at every level,” said Nygard.
HDFS handles faults through replica creation. Replicas of user data are created on different machines in the HDFS cluster, so if any machine in the cluster goes down, the data can still be accessed from another machine that holds a copy. HDFS also maintains the replication factor by creating new replicas on other available machines in the cluster if a machine suddenly fails. To learn more about this highly reliable storage layer, follow this HDFS introductory guide.
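The re-replication behavior described above can be illustrated with a toy model. This is a deliberately simplified sketch, not Hadoop code: the `Cluster` class, its crude least-loaded placement policy, and the block and node names are all invented for illustration; real HDFS placement and recovery logic (rack awareness, heartbeats, the NameNode's block map) is far more involved.

```python
# Toy model of HDFS-style replication: place N replicas of each block,
# and restore the replication factor when a machine is lost.

class Cluster:
    def __init__(self, machines, replication_factor=3):
        self.replication_factor = replication_factor
        # machine name -> set of block ids stored on that machine
        self.machines = {m: set() for m in machines}

    def write_block(self, block_id):
        # Place one replica on each of the `replication_factor`
        # least-loaded machines (a crude placement policy).
        targets = sorted(self.machines, key=lambda m: len(self.machines[m]))
        for m in targets[: self.replication_factor]:
            self.machines[m].add(block_id)

    def fail_machine(self, name):
        # Lose the machine, then re-replicate every block it held
        # onto other machines until the replication factor is restored.
        lost = self.machines.pop(name)
        for block_id in lost:
            holders = [m for m in self.machines if block_id in self.machines[m]]
            needed = self.replication_factor - len(holders)
            spares = [m for m in self.machines if block_id not in self.machines[m]]
            for m in spares[:needed]:
                self.machines[m].add(block_id)

    def replica_count(self, block_id):
        return sum(block_id in blocks for blocks in self.machines.values())

cluster = Cluster(["node1", "node2", "node3", "node4"], replication_factor=3)
cluster.write_block("blk_001")
cluster.fail_machine("node1")
print(cluster.replica_count("blk_001"))  # still 3: a new replica was made
```

Even after `node1` fails, the block ends up with three live replicas again, which is the essence of how HDFS keeps data available through machine failures.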
Looking for HDFS hands-on practice? Follow these tutorials: Top 10 Useful HDFS Commands, Part I
At the beginning of each year at TheServerSide, we map out a list of topics we expect to be hot, and plan to focus on them from month to month. It’s a dynamic list, and a monthly theme by no means indicates an exclusive focus for TSS on a given subject, but during a theme month we’ll be concentrating on that specific topic a bit more than the others.
So what does the 2017 calendar look like? Here’s what’s been proposed:
January – Container-based application development and deployment with Docker and Kubernetes
As container-based technologies become more prevalent and companies move away from fully virtualized OSes, organizations are trying to learn how to leverage containers in their application development and deployment.
February – Microservices development and deployment
So many organizations that formerly invested in SOA-based solutions are switching to microservices. Here we’ll take a look at why people are making the move and what’s involved in making the change happen.
March – Application lifecycle management tools for Enterprise Java
This month explores the latest tools in the ALM space. We will look at tools being used to ensure applications are developed and delivered on time and on budget. One of the challenges in the ALM space is how various development methodologies mesh with various ALM tools and approaches, so we will look at how new tools work with both Agile and Waterfall styles of development.
April – Application Lifecycle Management
Managing the application lifecycle from inception to decommissioning is always a challenge. Here we’ll take a look at the people, processes and tools that make ALM work.
May – Modern mobile development
In May we’ll examine how mobile development has changed over the years, and the strategies organizations are using to maintain their mobile presence.
June – Modern APIs
Getting into the code and how developers develop, we’ll look at the best ways to develop public APIs, the best ways to consume them, and new APIs being introduced as JSRs or being revised.
July – Big Data and Analytics
How are organizations acquiring analytics, and how are they managing all of the data that is coming in? In July we’ll take a look at analytics consumption and the big data solutions that can crunch those numbers.
August – PaaS based Big Data: Modern persistence strategies for the cloud
So much press has been given to both Big Data and the cloud, but few realize the complexities of bringing the two together. This theme will look at how Big Data technologies are being used in the industry today, while also examining the way cloud based PaaS solutions are simplifying the big data problem.
September – IoT development with containers
More and more IoT products are coming on the market. This month will look at how IoT developers have leveraged container-based development, and why IoT development shops are so quick to adopt the technology.
October – Peripheral languages, DSLs and Mobile
This month will look at the new languages such as Scala, Clojure and Haskell that are now running alongside regular Java programs on the JVM. Specifically, this month will look at how these languages have changed the mobile development space.
November – IDEs and software development tooling
How are developers doing what they’re doing, and what are the best tools for software development? Here we’ll look at IDEs, plugins and other tools that help make software developers productive.
And finally, December is the traditional Year in Review.
Of course, this is all subject to change, but it provides a bit of a charting of the course for the next twelve months. Is there something missing? Something you’d prefer to hear more about? Let us know.
For the most part, a New Year’s resolution is a commitment to do something that, assuming you were an upstanding person, you should already be doing. People commit to losing weight because they shouldn’t have gotten fat in the first place. People commit to quitting smoking because smoking is a bad habit they should never have adopted in the first place. People commit to exercising more because being lazy and lacking energy isn’t a great way to live your life. And while many New Year’s resolutions are personal, it’s not out of line to make work-related resolutions as well. So, what type of New Year’s resolutions should software developers be making?
Improving code quality
From the standpoint of code quality, here are the commitments I’m making:
- Improve the commenting of my code
- Use good variable names
- Make my code readable to even junior developers
- Don’t write code that can’t be tested, and more to the point, make sure test cases are written before writing the actual code
- Regularly run performance profilers on the code I have written
Of course, these are just software development best practices that I should already be following, but as I said, that’s what New Year’s resolutions are all about: committing to doing things the way they should be done.
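The test-first commitment in that list is easy to state and easy to skip, so here it is in miniature. This is a toy sketch, and the `slugify` helper is entirely hypothetical, invented just to show the test being written before the code it specifies.

```python
# Test-first in miniature: the test for a hypothetical slugify helper
# is written first, as the specification for code that doesn't exist yet.

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Already-clean  ") == "already-clean"

# Only now is the implementation written, driven by the failing test.
import re

def slugify(text):
    # Lowercase, collapse runs of non-alphanumerics into single hyphens,
    # and strip any hyphens left at the ends.
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

test_slugify()  # passes once the implementation satisfies the spec
```

The order matters more than the example: the test exists before the function does, which also guarantees the code is testable, covering two of the resolutions above at once.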
From a personal development standpoint, there are two technologies that I can’t boast about having as part of my software development repertoire, and I believe that needs to change.
Docker and Big Data
I actually know very little about Big Data processing. I’ve worked quite heavily with JSON and other data formats that lend themselves nicely to document stores, but I’ve never actually used a NoSQL database like Cassandra or MongoDB, and I’ve certainly never used any of the Big Data processing technologies like Hadoop or YARN. I’m going to see if there’s some type of project I can take on that might get me more familiar with those technologies, and potentially write some tutorials on the topic.
The other technology I’m anxious to ramp up on is software containers like Docker, along with the various technologies that complement it, like Kubernetes and Swarm. We’ve done an incredible amount of coverage on the topic of containers, and I even attended the Docker conference last year in Seattle, so there’s a certain degree of shame associated with the fact that I’ve never actually containerized an application and run it with Docker.
Writing about Java 9
From the standpoint of TheServerSide, I’d actually like to do a bit more writing about Java 9 and the various JSR APIs that are either being released or significantly updated. We’ve spent a great deal of time on TSS talking about Agile and DevOps and how technologies that sit on different levels of the software development stack are changing the software development game, but we’ve gotten a bit away from the task of writing code. Learning more about Java 9 and writing about what has been learned is another big goal.
What are your software development resolutions for 2017?
So what are your goals as a software developer? Is it to start working with a new language, to stop saying ‘yes’ whenever management asks for something with an unrealistic deadline, or just to start replying to your emails faster? Let me know what your New Year plans are in terms of your software development career.
EDIT – By the way, another one of my New Year resolutions is to use social media more often, and to get more followers on Twitter, so if you’d like to help make that resolution come true, here’s my handle: @cameronmcnz
Apache Flink is a powerful open source platform that can efficiently address the following types of requirements:
- Batch Processing
- Interactive Processing
- Real-time Stream Processing
- Graph Processing
- Iterative Processing
- In-memory Processing
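The first and third items on that list are the most commonly confused, so here is the distinction in miniature. This is not Flink code (Flink jobs are typically written in Java or Scala against its DataStream and DataSet APIs); it is a toy sketch of the two processing models, with made-up word-count functions standing in for real operators.

```python
# Batch vs. streaming word count, in miniature. The point is only the
# difference in processing model, not how Flink's APIs actually look.

from collections import Counter

def batch_word_count(lines):
    # Batch: the whole bounded data set is available up front,
    # and a single final result is produced at the end.
    return Counter(word for line in lines for word in line.split())

def streaming_word_count(lines):
    # Streaming: records arrive one at a time from a (conceptually
    # unbounded) source, and results are updated incrementally.
    counts = Counter()
    for line in lines:  # in a real streaming job this loop never ends
        for word in line.split():
            counts[word] += 1
        yield dict(counts)  # emit the running result downstream

lines = ["to be or not", "to be"]
print(batch_word_count(lines)["to"])  # 2, computed once over all input
final = None
for snapshot in streaming_word_count(lines):
    final = snapshot  # downstream sees a result after every record
print(final["be"])  # 2, reached incrementally
```

Both models arrive at the same counts; what differs is when results become visible, and a platform like Flink is built to handle both styles (plus the iterative and graph workloads listed above) within one engine.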