You hear the phrase big data a lot. But what does it really mean?
Big data is defined as “any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information.” That’s great; it means that all businesses need to do, in a nutshell, is turn these large collections of data into something actionable.
Businesses today are looking to extract value from this overwhelming amount of information. At the end of the day, the data is meaningless if it doesn’t say something, right? But part of the big data challenge is knowing what to use, and it’s equally important to know what not to use. Companies need to be selective about what they analyze so they don’t drown.
In a survey conducted by Capgemini Consulting in November 2014, 79% of participants reported that they had yet to fully integrate all of their data sources. Other implementation issues included data silos, disconnection between groups and ineffective data governance. That said, every challenge has a solution, such as investing in tools built to tackle big data problems.
Big data applications are useful too, but it’s just as easy to get lost in the hype around them. So before you go shopping for them, it’s important to be able to identify the most common use cases. According to Cloudera CEO Tom Reilly, who spoke at the Structure Data conference earlier this year, big data apps fall into three distinct categories: customer insight, product insight and business risk.
Algorithms can be equally helpful in cleaning up the big data mess, but the challenge is identifying which algorithms. Thankfully, a new class of deep learning algorithms can help overcome this challenge.
“In essence, this approach makes it possible to identify hidden patterns buried in large troves of data,” expert George Lawton said. “Although the basic deep learning techniques have been around for decades, they were constrained to work on a single computer. Promising new architectures are now making it possible to scale these deep learning systems to work in the cloud.”
One interesting thing big data can bring is improved business processes – a benefit that probably isn’t as obvious. According to George Lawton, McLaren Applied Technologies explored just that, with an approach that is “somewhat akin to bringing a sim-city like view to the enterprise.” It allows analysts to “tinker with different approaches to optimize important metrics.”
Big data is growing, and the rise of mobile, IoT and Web applications is driving that growth. What will your enterprise do to capitalize on this trend?
So, they’ve moved the Google I/O conference away from downtown San Francisco out to Mountain View, where Google has a big stake in the Shoreline Amphitheater. I’ve got mixed emotions about the move. San Francisco is a beautiful city, but it’s also not much larger than my backyard, and when they try to shoehorn ten thousand people into a week-long conference at the Moscone, as they do with JavaOne, the hotels start implementing ridiculous surge pricing that nobody should be forced to pay. Anyone who doesn’t book a hotel a month in advance, or isn’t willing to pay $500 a night for a cardboard-box-quality room in the Tenderloin district, is going to have to bus into the city every day from the Burlingame area by the airport.
So maybe a nice, big venue with an open-air amphitheater for the keynotes and access to cheaper hotels in San Jose and Palo Alto isn’t a horrible idea. Still, there’s something to be said for being able to walk back and forth to the venue instead of the discomfort of a crowded bus, so the Mountain View venue isn’t without its drawbacks.
Digging into the meat of the conference
TheServerSide has Barry Burd, Android advocate and prolific author of various Java and Ruby Dummies books, on site reporting back about what’s new at this tenth Google I/O conference. Take a look at his full account of the big things that happened on the first day of the conference, including a variety of product announcements and feature improvements for new products like Google Duo and Google Allo, and for stalwarts like Android Studio and the Play Store.
Follow Barry too: @allmycode
We live in an information society where new advancements have consistently made the technological landscape an exciting one. And programming obviously plays a crucial role in making these new technologies possible.
Programming, as contributor Joseph Ottinger points out, is about function, not form. He argues that it’s about accomplishing something, “not about conforming with a model of the real world.” You should be clear about what you want your code to do. If you were to explain how your favorite sports team is performing, you wouldn’t detail every play made in every game. You’d provide the highlights in a clear, succinct manner.
But the thing is, in order for consumers to enjoy all these shiny new toys, businesses really need to step up their game when it comes to application performance management.
The importance of the end-user experience cannot be stressed enough, yet it seems to be an area many organizations need to improve on. As TheServerSide.com contributor Jason Tee puts it, “if a company builds its success on technology, software failure and application downtime can have far reaching consequences.” To be more specific, there are at least seven ways businesses suffer from application failure, ranging from brand damage to a stock dip to loss of usage.
Oh, and of course we can’t forget about social media. In a world of immediate feedback, bad reviews in an app store can scare away potential new customers. Besides, I’m sure we can all imagine how relentless social media can be.
It really comes down to an urgent need to develop a strategy for both improving and maintaining performance. Sometimes, I wonder why it isn’t as big a priority for organizations as it should be. And I get it, this is one of those things where it’s easier said than done, but it doesn’t necessarily have to be hard. For example, in the case of Web app performance, it can be as simple as embracing HTTP/2.
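As an illustration, here’s a minimal sketch of what turning on HTTP/2 can look like in an nginx server block (nginx 1.9.5 or later; the hostname and certificate paths are hypothetical):

```nginx
server {
    # the http2 flag on the listen directive is the whole switch
    listen 443 ssl http2;
    server_name www.example.com;                        # hypothetical host

    # TLS is effectively required, since browsers only speak HTTP/2 over TLS
    ssl_certificate     /etc/nginx/certs/example.crt;   # hypothetical paths
    ssl_certificate_key /etc/nginx/certs/example.key;
}
```

After a reload, clients that support HTTP/2 negotiate it automatically, while older clients fall back to HTTP/1.1 on the same port.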
Maybe it’s just me, but application performance is something I think needs to constantly be front and center. One concept that has been gaining traction here is containerization. Docker, in particular, has received a great deal of attention, but could it be the next step beyond the VM? Tee argues that it could “make the VM less of a virtual monster to wrestle with when it comes to resource management.”
What does it boil down to? Know what you want your application to do and code accordingly. From there, measure if it’s accomplishing what you intended for it to accomplish. Simple enough, right?
A full increment release is usually a big deal. I mean, just recall for a moment all of the hoopla surrounding Java’s last full increment release… Actually, there’s never been a full increment release of Java since version 1.0 was unleashed upon the world twenty years ago. So yeah, full increment releases are kind of a big deal.
I don’t get the impression that the Jenkins jump to two-point-oh is by any means an Armstrongesque leap from a software standpoint. There’s no big migration planning that needs to be done to move from whatever version you’re currently on to the butler’s latest and greatest. All of the underlying metadata is the same, all of your settings get preserved, and there’s no notable data migration required, so the underpinnings of the technology are still fundamentally the same.
They’ve introduced the new Pipeline as Code feature out of the box, something you couldn’t previously get without doing some custom configuration and installing the popular workflow plugin, so that’s a bit of a big deal. Out of the box support for workflow is a big steer, and with that shift in their sails, the enterprise community should now have a better idea on both how to use Jenkins, and how Jenkins wants users to use Jenkins.
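To get a feel for the feature, here’s a minimal sketch of a Jenkinsfile checked into a project’s root, which is what Pipeline as Code amounts to in practice (the repository URL and build commands are hypothetical):

```groovy
// A minimal scripted-pipeline sketch. Because this file lives in source
// control alongside the project, the build definition is versioned too.
node {
    stage('Checkout') {
        git url: 'https://github.com/example/app.git'   // hypothetical repo
    }
    stage('Build') {
        sh './gradlew clean build'                      // hypothetical build step
    }
    stage('Archive') {
        archiveArtifacts artifacts: 'build/libs/*.jar'
    }
}
```

Point a pipeline job at the repository and Jenkins picks the file up on its own, with no per-job configuration in the UI.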
They’ve also bundled a bunch of plugins with the installation package so that new users will have a full-featured product after unpacking the binaries. That’s an improvement, because before there was a need to download and install things like the Git or Gradle plugins, and with about fifty different plugins available with the word Git in their name, it could get confusing for continuous integration neophytes. So they’ve eliminated a few barriers to entry for new users, which will hopefully transition those thousands of downloads into real, live implementations.
Security takes center stage with Jenkins 2.0
Security has come front and center as well. Previously, the Jenkins WAR had security turned off by default, but that’s become a bit of a worry as naive customers have deployed their all-access, continuous build tool into the cloud without so much as an authentication challenge. Now a basic installation requires at least some due diligence in the form of providing a username and a password.
My take? Jenkins 2.0 really does seem to me like a drama free release. From what I can see, the full increment is as much a mental and emotional move as it is technology driven. I think that the Jenkins team looks back at the 1.0 release and wishes they could have done a few things differently, but the community didn’t want to tighten any bolts or drain any pipes in a minor release for fear of upsetting their users. Now, with Jenkins 2.0, we have a mature product that has evolved to support the common workflows that most organizations wish to use, while providing simple and sensible defaults that will make the art of continuous deployment more user friendly. There don’t appear to be any serious migration issues, backwards compatibility problems or deprecated APIs that are going to send the enterprise community all aflutter. It’s just a nice, simple, drama-free release.
By the way, TheServerSide spoke with “unethical blogger” and Jenkins community leader R. Tyler Croy about the release; that interview will soon be made available as both a feature article and a podcast, so stay tuned.
How to become a Jenkins expert
Struggling to learn Jenkins? Check out these great, step-by-step Jenkins CI tutorials. They’ll make you a Jenkins CI expert in no time.
Step 1 — Download Jenkins and install the CI tool
Step 2 — Create your first Jenkins build job tutorial
Step 3 — Inject Jenkins environment variables into your scripts
Step 4 — Fix annoying Jenkins plugin errors
Step 5 — Put the Jenkins vs Maven debate behind you
Step 6 — Learn to use Boolean and String Jenkins parameters
Step 7 — Do a Jenkins Git plugin GitHub pull
Step 8 — Add knowledge of basic Git commands to your DevOps skillset
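For anyone getting to step 8, the basic Git workflow boils down to a handful of commands. Here’s a scratch run-through in a throwaway directory (the directory, file and identity are just for the demo):

```shell
set -e
repo=$(mktemp -d)                            # throwaway directory for the demo
cd "$repo"
git init -q .                                # create a fresh repository
git config user.email "demo@example.com"     # local identity, this repo only
git config user.name  "Demo User"
echo "hello jenkins" > README.md             # create a file to track
git add README.md                            # stage it
git commit -q -m "Initial commit"            # record it in history
git log --oneline                            # shows the new commit
```

From there, `git push` and `git pull` against a remote are what a Jenkins Git plugin job automates for you on every build.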