Coffee Talk: Java, News, Stories and Opinions


December 15, 2016  4:34 PM

Reactive streams and the weird case of back pressure

Profile: OverOps
Uncategorized
There are a lot of initiatives that aim to improve workflows for developers, but only a few actually make it to become a living, breathing concept. One of them is Reactive Streams.

Its goal is to provide a standard for asynchronous stream processing with non-blocking back pressure. The specification allows for many implementations, each of which can preserve the benefits and characteristics of asynchronous programming across the whole processing graph of a stream application.
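To make the back pressure idea concrete, here is a minimal sketch of a Subscriber written against the org.reactivestreams interfaces (Publisher, Subscriber, Subscription). The class name and the one-element-at-a-time demand policy are just illustrative, not part of the specification itself; the key point is that the subscriber pulls data by calling request(n), so a fast publisher can never overwhelm a slow consumer.

    import org.reactivestreams.Publisher;
    import org.reactivestreams.Subscriber;
    import org.reactivestreams.Subscription;

    // A subscriber that signals demand explicitly: the publisher may never push
    // more items than have been requested, which is what "non-blocking back
    // pressure" means in the Reactive Streams specification.
    public class OneAtATimeSubscriber implements Subscriber<String> {

        private Subscription subscription;

        @Override
        public void onSubscribe(Subscription s) {
            this.subscription = s;
            s.request(1); // ask for exactly one element to start with
        }

        @Override
        public void onNext(String item) {
            process(item);            // do the (possibly slow) work first
            subscription.request(1);  // only then signal demand for the next item
        }

        @Override
        public void onError(Throwable t) {
            t.printStackTrace();
        }

        @Override
        public void onComplete() {
            System.out.println("Stream finished");
        }

        private void process(String item) {
            System.out.println("Processed: " + item);
        }

        // somePublisher is a stand-in for any spec-compliant Publisher<String>.
        public static void subscribeTo(Publisher<String> somePublisher) {
            somePublisher.subscribe(new OneAtATimeSubscriber());
        }
    }

Any library that implements the specification (Akka Streams, Reactor, RxJava 2 and so on) can hand such a subscriber one of its publishers, and the demand protocol stays exactly the same.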

December 15, 2016  4:33 PM

Big data processing catches the eye of income tax enforcement

Profile: anishp1313
Uncategorized

On 8th November 2016, when the honourable Prime Minister of India, Narendra Modi, began his first-ever televised address to the nation, there was great curiosity among the people about what it would be about. The shock came when the PM announced that, from that very day, Rs.500 and Rs.1000 currency notes would be discontinued in order to track black marketers and the black money they hold. He also announced that anyone holding these denominations could exchange the notes or deposit them into their accounts, subject to certain limits that were also declared.


Now the question arises: how is the government or the Income Tax (IT) department going to track the black money that has been deposited, and how will it separate black money holders from genuine taxpayers? With a population of more than 1.25 billion and hundreds of millions of bank accounts, how the IT department will find the discrepancies is a big question. Like the software industry, the Income Tax department is turning to the latest and hottest technology: Big Data.


December 15, 2016  4:31 PM

The surprising truth of Java exceptions: What is really going on under the hood?

Profile: OverOps
Uncategorized
Unlike finding out how a sausage gets made, a deeper understanding of Java exceptions is knowledge you won’t regret acquiring.

In this post, we’ll go one step deeper into the JVM and see what happens under the hood when an exception is thrown, and how the JVM stores the information on how to handle it. If you’re interested in the inner workings of exceptions beyond the behavior they display on the surface, this is a good place to get started.
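As a small taste of what “under the hood” means here: the Java compiler turns every try/catch into entries in a per-method exception table in the class file, and the JVM consults that table when an exception is thrown. The snippet below is a minimal sketch you can compile and inspect yourself with javap -c; the class and method names are purely illustrative.

    public class ExceptionTableDemo {

        // javac compiles this try/catch into bytecode plus an "Exception table"
        // entry of the form: from, to, target, type. When a NumberFormatException
        // is thrown inside the [from, to) range, the JVM jumps to the handler at
        // the target offset instead of unwinding the current frame.
        static int parseOrDefault(String raw) {
            try {
                return Integer.parseInt(raw);
            } catch (NumberFormatException e) {
                return -1;
            }
        }

        public static void main(String[] args) {
            System.out.println(parseOrDefault("42"));    // prints 42
            System.out.println(parseOrDefault("boom"));  // prints -1

            // To see the exception table, run:
            //   javac ExceptionTableDemo.java && javap -c ExceptionTableDemo
        }
    }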


December 12, 2016  5:07 PM

Hate your job? Improving the ALM process might help

Profile: news

If you ask Damon Edwards, founder of SimplifyOps, and IT Skeptic Rob England, there’s a dirty little secret in the IT industry: a very high percentage of IT professionals hate their jobs. In truth, it’s more of an open secret, one that is well known throughout the technology field.

A woeful history of personal, professional, and organizational suffering

According to Edwards, “Life is not good for everyone in the IT industry.” Yet people continue working in this sector for a couple of reasons. First, it often pays better than whatever they would otherwise be doing for a career. Or, they are in it because they love the technology and want to be able to tinker with it. If getting a steady paycheck for doing intriguing work were the simple reality, IT professionals would, by and large, be a happy lot. “Unfortunately, it’s everything else that’s layered on top of that which makes things miserable.” IT professionals are overworked, given insufficient resources, and expected to keep up an insane level of context switching. They tough it out year after year, hoping to put away enough money for the kids’ college funds and maybe a decent retirement. In the meantime, they suffer from burnout, resentment, frustration, and fatigue.

While the impact on a personal level is troubling, an entire organization suffers from loss of productivity, effectiveness, and innovation when IT workers are stressed. Fortunately, some businesses are starting to take this matter seriously. Rob said, “I’ve noticed in a number of organizations, one of the KPIs for IT is sustainability. They’re not talking ecology, they’re talking about whether they are working in a sustainable way in terms of technical and cultural debt. Can they keep up the pace?” If not, the organization pays the price in competitive advantage, customer satisfaction, revenue, and market share. That’s the stark business reality.

Focusing on people comes first

Both DevOps speakers agreed that, as corny as it might sound, people are still the number one resource in any company. Developing people, creating an environment where they want to work, and retaining them over the long term is essential for success. England pointed out that, in the move to the information age, many businesses have kept a manufacturing mindset from the industrial era, seeing people as clerical cogs. “When you move to a knowledge worker model, you have to respect and empower them.” IT resources may be replaceable, but the cost of having to start over due to poor retention is far too high.

Of course, there are lessons IT can learn from modern manufacturing when it comes to best practices. Edwards noted that at a company like Toyota, executives are almost never on the list of most successful people in their field—yet this business is one of the most efficient and effective in its industry. Instead of giving glory to those at the top, the focus is on excellence throughout the organization, helping people and systems flourish. IT organizations could take a lesson from this model, determining whether executives are operating in a servant/leader capacity or simply as slash-and-burn artists in search of short-term wins that they can add to their resume before heading off to the next cushy job.

Improved processes are the pathway to a better professional experience

Above all, improvement in IT culture requires a clear understanding of the ALM process and how things get done. Again, a manufacturing model can be handy, in Edwards’ view. “When you make the work visible, you can turn it into supply chain management. You don’t have to understand what goes on inside the boxes, if you can just see how things go from idea to cash, how long it takes, and how painful it is.”

One doesn’t even need to be a technology specialist to see ways to improve the ALM process; it just takes the ability to visualize the workflow and make intelligent adjustments. Damon mentioned several choices that can make the system more tolerable for IT professionals: working in small batches, building slack into the system, and avoiding overloading people or any piece of the system.

DevOps provides a peek at a brighter ALM process

What kind of impetus does it take to shift culture for the better? Rob mentioned crises as frequent instigators of change. When upper tier executives finally realize that there is a serious problem and step in to make changes, it’s good to be able to point to at least some small pockets of a company where things are being done in a better way. “That’s the time to pull out DevOps.”

The grassroots movement taking shape in small teams can be used to demonstrate that the method works. “You can use those quick wins to drive change.” DevOps teams that are pioneering this new way of working should take heart, even if they are the minority in their organization. They may well be leading the way to a more humane IT culture once enough pressure builds up to force a full system overhaul.


December 2, 2016  8:45 PM

DevOps tooling only a small part of the enterprise security puzzle

Profile: news
Uncategorized

Tools, culture, and other popular topics were the focus of much attention at the DevOps Enterprise Summit this year. Yet security was still the undercurrent of concern running just below the surface. Fortunately, a number of speakers addressed this issue and offered insights and best practices for large-scale organizations that want to mobilize DevOps across their teams without losing sight of security and risk management objectives.

One massive organization takes two big leaps—securely

Phil Lerner, Senior IT Executive of UnitedHealth Group’s Optum arm, offered a unique perspective on continuous security monitoring from down in the trenches. UHG recently made the decision to adopt cloud and DevOps simultaneously, a bold move that made sense because of the synergy between the platform and the methodology. As part of a highly regulated, compliance-conscious industry, the organization put security first in both initiatives.

“We’re bringing good, solid security infrastructure practices into the cloud and surrounding the pipeline with as few tools as possible to make management easy. That brings security to the forefront where it’s transparent for DevOps folks. But we’re constantly looking for risks, determining what the threat levels are, logging, and monitoring. We have gates we’ve built between zones and really took a network security approach to what surrounds the pipeline.”

In Lerner’s view, the tendency to think about DevOps as a set of tools is not necessarily the best approach. Instead of trying to completely retool the enterprise, the UHG approach focuses on optimizing processes and adding specific capabilities as needed. “To me, it’s more about the culture and using the tools we know in the enterprise and leveraging them end to end. We know how to manage them very well. We innovate around them and push our vendors to build APIs to do things we would like to do to innovate in the virtual security space.” With a staff of about a thousand IT security specialists in a team of about ten thousand total IT professionals at UHG, it certainly makes sense to use DevOps with the tools that Dev, Ops, and Sec already know.

Some standards persist, but fresh challenges have appeared

Akamai’s Director of Engineering Matthew Barr alluded to some typical best practices that organizations of all sizes should adhere to. Architecting applications to prevent unauthorized access is a no-brainer. “We don’t send a password to anything that is not one of the active directory servers. You don’t want to use LDAP on the application side, because then you have to worry about having credentials that might be reused.” He spoke further about Atlassian’s newest options for SSO and how they enable greater security for the enterprise across the application stack.

But with the increasing popularity of virtual and remote teams across the enterprise, there are new concerns that sometimes fly under the radar. “Some people may not realize, when you look at the Git logs, you see the committer username and email which are actually set on the laptop. You can change that any time you like. The server doesn’t authenticate that information. Without using GPG keys to sign your commits, there’s no proof who actually wrote something.” This represents a change from svn or Perforce where it would be reasonably accurate to assume that the person committing the code is, indeed, the committer listed. Matthew painted a scenario in which a backdoor might be discovered in code. When security goes looking for the culprit, they will find a name in the Git repository—but they have no way to determine if that was actually the person who inserted the malicious code. It would be far too easy to set up a patsy to take the fall. This is just one of the ways risk management is changing as DevOps teams become more distributed.
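To see just how self-reported that metadata is, here is a minimal sketch using the JGit library (this assumes org.eclipse.jgit on the classpath, and the repository path is hypothetical). It simply prints the committer name and email stored in each commit object; nothing on the server verifies those values unless commits are signed.

    import java.io.File;

    import org.eclipse.jgit.api.Git;
    import org.eclipse.jgit.lib.PersonIdent;
    import org.eclipse.jgit.revwalk.RevCommit;

    public class CommitterAudit {

        public static void main(String[] args) throws Exception {
            // Open an existing local clone (the path is just an example).
            try (Git git = Git.open(new File("/path/to/repo"))) {
                for (RevCommit commit : git.log().call()) {
                    // These values come straight from the commit object, i.e. from
                    // whatever user.name / user.email was configured on the laptop
                    // that created the commit. The server never authenticates them.
                    PersonIdent committer = commit.getCommitterIdent();
                    System.out.printf("%s  %s <%s>%n",
                            commit.abbreviate(8).name(),
                            committer.getName(),
                            committer.getEmailAddress());
                }
            }
        }
    }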

Open source continues to pose problems for enterprise security

The Heartbleed incident will likely go down in history as one of the greatest open source debacles of all time. This massive security hole in the OpenSSL cryptographic software library went unnoticed for a couple of years, putting the lie to the idea that having many eyes on open source effectively alleviates the risk of serious vulnerabilities. This is one reason that Scott Wilson, Automic’s Product Marketing Director for Release Automation, argues that enterprises should not overuse open source.

“You have to ask yourself what you are really in business to do.” For most companies outside the technology space, from banks to healthcare, transportation, and insurance, the goal is not to create software. It is to generate revenue by selling other products and services. Open source should only be used insofar as it enables that objective. This decision entails weighing the risk of undetected vulnerabilities as well as all the ongoing maintenance and customization that open source brings along with it.

What’s the solution? According to Wilson, in many cases it’s good to bring on third party vendors to balance things out. These vendors are devoted full-time to maintaining and tweaking software for their clients, providing support on a continual basis. It’s simple: “They make money supporting you.” And they may be able to do it more cost effectively than taking a DIY approach. Even though it might be true that ‘every company is a software company’, not every company needs to do it all in-house. It takes internal teams, the open source community, and the vendor ecosystem working together for a more secure enterprise IT. Perhaps DOES itself will one day morph into the DevSecOps Enterprise Summit to take things one step further.


November 29, 2016  2:41 AM

Java Bullshifier – Generate Massive Random Code Bases

Profile: OverOps
Uncategorized
The command line tool you’ve been waiting for. Or not. After all, it’s pretty esoteric. Either way, it’s quite useful to some and an amusing utility to others. Bullshifier is an internal OverOps tool developed by David Levanon and Hodaya Gamliel. It’s used to test some of our monitoring capabilities over ridiculously large code bases, with transactions that go thousands of calls deep, over thousands of classes, and end up with exceptions.


November 29, 2016  2:41 AM

Java Performance Monitoring: 5 Open Source Tools You Should Know

Profile: OverOps
Uncategorized
One of the most important things for any application is performance. We want to make sure users are getting the best experience they can, and we want to know that our app is up and running. That’s why most of us use at least one monitoring tool.

If you’re looking for something a little different in the performance monitoring market, one option is to go with an open source tool. In the following post we’ve gathered some open source APM tools that are available today as an alternative to the paid tools, so you’ll be able to see whether one of them is the right choice for you.


November 28, 2016  8:45 PM

New eBook: The Complete Guide to Application Performance Monitoring Tools

Profile: OverOps
Uncategorized

How can you be sure a certain APM tool is the right one for your production environment?

That’s why we’ve decided to combine everything there is to know about these tools, big and small, and help you see which one is the best option for your monitoring needs.


November 28, 2016  3:52 PM

Chipmaker Intel reasserts its longstanding commitment to the Java platform

Profile: news
Uncategorized

As has been discussed here many times, Intel has a great fondness for Java.

This affection was again eloquently expressed by Michael Greene, VP of Intel’s Software and Services Group, while discussing the collaboration between one of the world’s best known technology companies and the most popular programming language on the planet.

Understanding the JDK made easy

“With so many contributions to the open JDK, we need to understand what things are being contributed to the codebase day after day,” said Greene at his JavaOne 2016 keynote address. “This will enable Java developers to make informed decisions about changes we’re making to the codebase.” Michael took this opportunity to announce the new performance portal for the JDK, an Intel website where performance metrics and training information are published for each build. Intel engineers will be actively tracking regressions and working with community members to resolve them.

Some of Intel’s own recent contributions include APIs for performant code, lexicographical array comparison, and checksum support. Making multithreaded applications, including those using MapReduce, more scalable is one way Intel hopes to ease the challenges of Java developers. IoT also featured prominently in Greene’s discussion of ways Intel is working hard on behalf of the programming community. “We’ve added more support for sensors and low energy Bluetooth for healthcare, security, and wearables.” Intel is also serving the enterprise space by extending beyond maker platforms into commercial platforms. Robots, drones, and industrial machines are all in line for innovation with these new features.
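For context, the lexicographical array comparison and checksum support mentioned here most plausibly map to the Arrays.compare methods and the java.util.zip.CRC32C class that arrived with JDK 9; the following is a minimal sketch, assuming a JDK 9 runtime.

    import java.nio.charset.StandardCharsets;
    import java.util.Arrays;
    import java.util.zip.CRC32C;

    public class Jdk9AdditionsDemo {

        public static void main(String[] args) {
            // Lexicographic array comparison (Arrays.compare, new in JDK 9).
            // Returns a negative value, zero, or a positive value, like compareTo.
            int[] a = {1, 2, 3};
            int[] b = {1, 2, 4};
            System.out.println(Arrays.compare(a, b)); // negative: a sorts before b

            // CRC32C checksum (java.util.zip.CRC32C, also new in JDK 9).
            byte[] payload = "hello, checksums".getBytes(StandardCharsets.UTF_8);
            CRC32C crc = new CRC32C();
            crc.update(payload, 0, payload.length);
            System.out.printf("CRC32C = 0x%08X%n", crc.getValue());
        }
    }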

Java makes science and social more efficient

Special guests from the ultra-serious field of particle physics and the ephemeral field of social media made an appearance to talk with Greene about their own experiences with Java in terms of efficiency. Ben Wolff, a CERN software engineer, spoke about how Java keeps the wheels turning, metaphorically speaking, at the world’s largest particle accelerator—the Large Hadron Collider. Java has been used at CERN since the mid-nineties and is involved in the control system which features about 100,000 devices and 2 million endpoints. That’s not even counting the world-class ERP and BI data warehouse. It takes a lot of code to keep up.

Maintaining, managing, and updating all the code is quite a feat. Ben suggested that the next version of Java could well make this easier. “I’m personally looking forward to seeing the new modularization feature in Java 9.” The institute works with a large system that includes internal and public APIs. Managing all the dependencies and inter-dependencies has been a true journey into classpath hell. Jigsaw may prove to ease this pain.
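To illustrate what Jigsaw brings to a codebase split into internal and public APIs, here is a minimal, hypothetical module descriptor (the module and package names are invented for this example). Only the exported package is visible to consuming modules, so internal packages can no longer leak onto someone else’s classpath.

    // module-info.java for a hypothetical accelerator-controls library.
    // Only packages listed under "exports" are visible to other modules;
    // everything else stays internal, a boundary that is hard to enforce
    // on a flat classpath.
    module controls.api {
        // The public API other teams are allowed to depend on.
        exports controls.api.devices;

        // Dependencies are declared explicitly instead of being discovered
        // (or silently missed) at runtime on the classpath.
        requires java.logging;
    }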

Nandini Ramani, VP of Engineering at Twitter, talked about the difference it has made for the social media giant to transition from its monolithic Ruby on Rails stack to the JVM. The company now builds largely in Scala and Java, deploying OpenJDK 8 on Linux. Back in 2010 during the World Cup, traffic caused the Twitter site to fail over and over. In contrast, Ellen DeGeneres’ viral tweet about Bradley Cooper in 2014 proved the JVM’s ability to let Twitter activity scale to 3.3 million events over a period of eleven seconds. Ramani is looking forward to additional OpenJDK improvements that focus on low latency garbage collection and a pause time of less than 10ms.

Java and Intel will help connect the world

Expansion was the final frontier covered in Greene’s keynote. He shared a short, futuristic film that helped audience members visually comprehend the implications of “The Connection Effect”. By 2020, Intel estimates that there will be 50 billion devices and 200 billion sensors deployed throughout the world. This fact has profound implications for software programming. Compute, analytics, and storage will become even more important, and 5G connectivity must become ubiquitous.

Intel and Micron Technology have been collaborating on 3D XPoint, a revolution in non-volatile memory that is intended to support large datasets. Intel predicts that XPoint will be up to 1,000 times faster than NAND, with 1,000 times more endurance and ten times the density. Along with next-generation field-programmable gate arrays, Intel certainly appears to be gearing up for the expansion that will support a fully connected human ecosystem.


November 17, 2016  5:48 PM

Three things you didn’t know about Golang, NoSQL, and the cloud

Profile: news
Uncategorized

With so much happening across the enterprise IT community on multiple fronts, it’s impossible to keep up. It’s also easy to make assumptions about the state of any particular technology based on a few web forum comments or a couple of blog posts. But the tech evolution is occurring so rapidly that yesterday’s opinions are often invalid today. Several knowledgeable professionals shared their updates about a few topics that have generated a lot of hype, controversy, and confusion over the past few years. As of mid-2016, this is where things stand.

Golang is prepared to go long

When Google first started toying with the Go language, many developers thought it would be a passing fancy. Would the internet giant really go the distance and provide lasting support, or would the language fizzle out leaving just a few die-hard programmers clutching at the remnants?

One company that bet on Golang being around for the long term was Iron.io. This microservices architecture vendor was already coding in Go in 2012, before version 1.0. According to CEO Chad Arimura, it came down to choosing the right tool for the job. “We made a list of what we needed and started comparing languages. We wanted to find one that was fast, nice to write, and didn’t make us use the JVM or write JavaScript. There were only a few options. We said, ‘let’s go for it’ with Golang.”

The Iron.io team was instrumental in sparking the Golang community. “We started with a very small group for support, but it is an elegant, usable, readable language. Since then, almost five years later, we’ve seen it grow in support. Now, there’s a bustling community growing around it.”

The language, which is popular at Docker and in the microservices community at large, is now well supported and likely to stick around. But it will remain a language that is only the right fit in very particular circumstances. “We use it when it makes sense. It is creeping its way in as a standard language at almost every company you can name. It is best suited to fast, distributed systems and computer- and system-level programming rather than to a web app or a Rails stack. You use it when you need fast, high performance, parallel processing such as for a load balancer, network server, API server, or queue service.”

NoSQL is still a valid solution

NoSQL has certainly passed the ‘Peak of Inflated Expectations’ in the Gartner Hype Cycle model. But it’s time to crawl out of the ‘Trough of Disillusionment’ and start climbing the ‘Slope of Enlightenment’. NoSQL does not solve all of SQL’s problems, and it will never replace relational databases. But it was never intended to. If enterprise users want to start realizing full productivity out of NoSQL, they have to accept that it is part of a multi-pronged approach to handling Big Data.

Trisha Gee, Developer Advocate at JetBrains, pointed out that NoSQL does have a valuable place in today’s app development. “It’s not a flash in the pan, but another tool that you can use at the right time. If I’m starting to work on a new application and it’s likely to evolve rapidly, like mobile, I’m going to choose a NoSQL database because it’s more flexible and able to change.”

She acknowledged that working with NoSQL does require an adjustment since it can’t be queried in the same way. “You need a different way of thinking about non-relational. It works best when you’re not doing massive transformations on your data.” For data that is constantly being transformed, a well-designed relational schema and traditional SQL may work better. With fairly stable historical data, MapReduce and similar functions can work wonders with NoSQL, deriving value from massive data stores.

The public cloud isn’t as popular as it seems

From the amount of press big cloud players like Amazon and Azure are receiving, it might be reasonable to assume that companies are doing eighty percent of everything in the cloud. That’s an impression Jordon Jacobs, VP of Products at SingleHop, has tried diligently to dispel. All the attention on public cloud is making CIOs question their own decisions. “It’s a funny place where everyone thinks they are doing it wrong, but everyone is doing the same thing. In fact, Amazon, Azure, and Google make up only one third of the managed services market. When CIOs realize this, they feel better that they are doing the right thing.”

Should everything in the enterprise portfolio really be in the cloud? According to Jacobs, that isn’t necessary. “Some applications belong there and some don’t. There’s this assumption that every single application needs to live in the cloud. That’s simply not the case. CIOs are embracing it for appropriate use cases, but there are other things that have no place there.” For “paper applications” that do boring stuff like email, accounting, and HR, there’s no need for auto-scaling or provisioning and de-provisioning. Jacobs pointed out that these boring apps with their consistent workload don’t need to go in the cloud. “Applications that are very consistent are better suited to a standard environment with an ISO process.”

It’s important to keep in mind that the cloud has expanded to encompass a very broad range of services. Providers don’t just offer virtual machines. From messaging queues to mobile software, Amazon alone has more than 50 different products. Choosing a public cloud solution simply because it offers virtualization doesn’t necessarily make sense.

According to Jacobs, “The problem with the business model in the cloud is that they use instance based billing. For every app or VM, you incur an additional cost. That’s the same model that we had before virtualization existed. You had to buy and deploy more bare metal servers. In the cloud, you deploy five VMs, and pay for that many VMs. But with managed services, you have pool based pricing. You can buy a certain number of servers, then spin up unlimited VMs until you reach the max for those server resources. That’s significantly more efficient. VMware’s business exists only because of that efficiency. Amazon is a step back in efficiency for consistent workloads. We believe a hybrid makes a lot of sense.”

Of course, managed service providers are still buying much of their own cloud and data center resources from Amazon, Azure, and Google to provide the hybrid model their customers need. The real price efficiency of managed services is tied to reducing costs for personnel within each client organization. That’s an offer that the average CIO is finding it difficult to refuse. For this reason, even though public cloud providers may have captured the majority of mindshare in the market, it will likely be a tougher battle to capture the heart of the market.


