Coffee Talk: Java, News, Stories and Opinions


July 12, 2016  2:23 PM

Choose your own adventure: Ottinger’s advice about simple code and complex models

Profile: Cameron McKenzie
Java

What could be a better way to launch TheServerSide with its new look and enhanced functionality than by reaching back into the past and highlighting some of the latest insights of the former site editor, Joseph B. Ottinger?

It’s actually not that big a jump back into the past. Ottinger submitted a couple of articles earlier this year, replete with his sharp tongue and biting opinions. But the old site’s format made it difficult for readers to see where they were published, so given the evergreen nature of his advice, it seems only right to profile them here, in my first-ever post to our aptly named Coffee Talk blog.

Opening one’s mind to new approaches

So what’s Ottinger getting on about? First of all, there’s some sound advice for stodgy old programmers (like him) who are resistant to change. Ottinger discusses the value of learning to appreciate this golden age of programming, in which cool libraries and open source code can easily address the age-old question, “Why can’t we do that a different way in Java?” It’s always tempting to answer such questions with a self-important ‘because,’ but ‘because’ no longer has to be the answer, no matter how stodgy one is.

“After thinking that the appropriate response was to shake my cane while shouting “Get off my lawn!,” it hit me that my response was outdated. There is absolutely no reason not to push the envelope further – and people do it all the time, in marvelously different ways.”

Article: Java’s lambda syntax rigidity exposes spoiled programmer’s frailties
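To make the point concrete, here’s a quick sketch of my own (not from Joe’s article) showing how Java 8 lets you express the very same iteration three different ways, from the old anonymous inner class to a method reference:

import java.util.Arrays;
import java.util.List;
import java.util.function.Consumer;

public class LambdaFlexibility {
    public static void main(String[] args) {
        List<String> names = Arrays.asList("Joe", "Cameron", "Barry");

        // The pre-Java 8 way: an anonymous inner class
        names.forEach(new Consumer<String>() {
            @Override
            public void accept(String name) {
                System.out.println(name);
            }
        });

        // The same behavior as a lambda expression
        names.forEach(name -> System.out.println(name));

        // And again as a method reference
        names.forEach(System.out::println);
    }
}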

Joe’s second submission is simply a call for simplicity. Of course, Joe’s call for simplicity in code is contrasted by an article penned with a high-ranking Flesch-Kincaid score and esoteric references to the Monte Carlo method and Choose Your Own Adventure books. But of course, that type of eclectic writing style is why we always enjoy Joe’s contributions.

“Programming is about actually accomplishing something, not about conforming with a model of the real world; you’re writing code for a CPU to execute, so you want to write instructions, not a choose-your-own-adventure novel.”

Article: Excellent programming is about function, not form
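For a sense of what that means in practice, here’s a contrived sketch of my own, not Joe’s: a one-line tax calculation written first as a choose-your-own-adventure object model, and then as plain instructions.

// The "choose-your-own-adventure" version models the world first
interface TaxPolicy {
    double apply(double subtotal);
}

class StandardTaxPolicy implements TaxPolicy {
    @Override
    public double apply(double subtotal) {
        return subtotal * 1.13;
    }
}

// The "write instructions" version just does the arithmetic
public class OrderTotal {
    static final double TAX_RATE = 0.13; // an assumed 13% rate, for illustration

    static double totalWithTax(double subtotal) {
        return subtotal * (1 + TAX_RATE);
    }

    public static void main(String[] args) {
        System.out.println(totalWithTax(100.0)); // roughly 113.0
    }
}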

You can follow Joe on Twitter: @josephbottinger

You can also follow me, Cameron McKenzie: @cameronmcnz

June 14, 2016  1:47 PM

Future-proofing your applications: Fix it now, not later

Profile: Jodie Ng

There’s a saying that goes, “Why do it tomorrow when you can do it today?” That is exactly the kind of mindset designers and developers should have. The demand to rush software to market has developers adopting more of a “we’ll fix it later” mentality, with less emphasis on good, high-quality software engineering. This is why it’s time to future-proof your applications.

Future-proofing is a concept that means having the right architecture in place so applications can be modified and updated as needed. And it is exactly this kind of thinking that developers should apply. It’s an important element with respect to performance, as organizations are then better able to embrace new technologies, trends and protocols. Instead of trying to get your product to market as soon as possible and fixing the bugs later, Theresa Lanowitz, founder of Voke, argues professionals should invest a little more time up front to avoid having to fix those bugs down the road.

Of course there will always be bugs, and there will always be updates. Future-proofing doesn’t eliminate that. It’s about making your product just that much better before it’s out there in the open.

As we enter an era of the Internet of Things and wearables, developers should commit more to performance monitoring and testing. No one can deny the constantly evolving technological landscape, and as a result, monitoring and testing should include preparing for the future. At the end of the day, if your application or product doesn’t perform as intended, what’s the point, right?

Lanowitz believes that where we are today, time to market trumps quality.

The solution seems simple enough: take the time to ensure accurate performance. It’s about abandoning the belief that there’s no way to create an accurate end-to-end testing environment that produces realistic results; Lanowitz argues that simply isn’t true.

“The advent of tools such as service virtualization and virtual and cognitive-based labs gives you an environment as close to production as possible,” Lanowitz argued.

According to Lanowitz, there are tools that can help future-proof your applications. After all, it’s all about delighting the customer, right? But can virtualization really stand in for and simulate your applications?

“Applications should always have an actual end-to-end test conducted,” she said. “However, service virtualization makes it possible for incomplete or unavailable systems or components to be simulated in an end-to-end fashion.”

She adds that it helps remove the constraints of cost, quality and schedule.
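Lanowitz doesn’t name a specific product, but as a rough illustration of the idea, here’s a minimal sketch that uses the open source WireMock library to simulate an unavailable downstream component during an end-to-end test:

import com.github.tomakehurst.wiremock.WireMockServer;

import static com.github.tomakehurst.wiremock.client.WireMock.*;

public class VirtualizedDependency {
    public static void main(String[] args) {
        // Stand in for an incomplete or unavailable downstream system
        WireMockServer server = new WireMockServer(8089);
        server.start();

        // Simulate the missing component's endpoint
        server.stubFor(get(urlEqualTo("/inventory/42"))
                .willReturn(aResponse()
                        .withStatus(200)
                        .withHeader("Content-Type", "application/json")
                        .withBody("{\"sku\": 42, \"inStock\": true}")));

        // End-to-end tests can now hit http://localhost:8089/inventory/42
        // as if the real service were there; call server.stop() when done.
    }
}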

Future-proofing is important because the cost of rework can skyrocket if the right provisions aren’t in place. IT professionals may be racing to meet time-to-market pressures, but performance should no longer be an afterthought.


June 9, 2016  3:11 PM

The first steps of understanding mobile security

Profile: Jodie Ng

We live in a very mobile-heavy society, and many enterprises have mobility at the center of their IT strategy. As such, it can’t hurt to understand mobile security better and to equip your organization with the security tool kit it needs to defend itself from harm.

Before setting guards in place, it’s important to first know what you’re fighting against. In a post by Craig Mathias, a principal with Farpoint Group, he explains how mobile security threats appear in many different forms and what steps you can take against them.

A few common risks include mobile malware and viruses, eavesdropping, unauthorized access and physical security. But there are tools that can help prevent becoming a victim of the threats that plague the mobile world.

Cameron McKenzie, editor-in-chief of TheServerSide.com, highlights five ways to boost security and reduce mobile risks. One of those methods is authenticating at the application level.

McKenzie writes, “It is sometimes assumed that since the mobile device itself is protected by a four-digit password, and because the user of the device is a trusted employee, the mobile app itself need not employ a subsequent authentication mechanism.”

He argues it’s about more than just securing the application on the phone. Organizations should authenticate everyone coming in and ensure they are, in fact, employees authorized to be on the network.
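As a rough illustration of what application-level authentication can look like, here’s a minimal servlet filter sketch; the token-validation logic is a hypothetical placeholder, not something taken from McKenzie’s article:

import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;

// Rejects any request that doesn't carry a valid token, regardless of
// whether the device itself is protected by a four-digit passcode.
public class AppAuthenticationFilter implements Filter {

    @Override
    public void init(FilterConfig config) { }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String token = request.getHeader("Authorization");
        if (token == null || !isValidEmployeeToken(token)) {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }
        chain.doFilter(req, res);
    }

    @Override
    public void destroy() { }

    // Hypothetical hook: verify the token against your identity provider
    private boolean isValidEmployeeToken(String token) {
        return token.startsWith("Bearer "); // real signature/expiry checks go here
    }
}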

According to McKenzie, another way to improve mobile security is by not making your data accessible. In other words, encrypt your local device, as no data should ever be stored in an unencrypted format. Other ways to decrease mobile risk include securing all communications over the public network, containing threats and simply not providing mobile functionality at all.
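On the encryption point, here’s a minimal sketch of encrypting data at rest with the standard Java crypto APIs. It illustrates the principle only; key management, which is the hard part, is glossed over here:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class LocalDataEncryption {
    public static void main(String[] args) throws Exception {
        // Generate a 256-bit AES key; in practice it would live in a keystore
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // AES-GCM requires a fresh 12-byte initialization vector per message
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));

        byte[] ciphertext =
                cipher.doFinal("sensitive mobile data".getBytes(StandardCharsets.UTF_8));

        // Persist the ciphertext and the IV together; never the plaintext
        System.out.println("Encrypted " + ciphertext.length + " bytes");
    }
}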

The goal of mobile security is to protect data from unwanted visitors, but just as technology advances, mobile threats grow and evolve alongside it. So what do we do about it? TechTarget senior reporter Michael Heller argues that the answer lies in understanding the users.

No matter how strong you believe your company’s mobile security is, it never hurts to re-evaluate and adapt accordingly.


May 24, 2016  12:21 PM

How would you define the term IDE?

Profile: Cameron McKenzie

A popular player in the TechTarget family of sites is WhatIs, a place where people can go to find easy to comprehend technical definitions for acronyms, terms and new technical catchphrases. The site’s been around for quite a while, and while that tends to earn it a great deal of respect with both search engines and the overall IT community, one of the drawbacks to its lineage is the fact that occasionally, definitions become a bit stale.

One of the terms I’ve recently been asked to update is IDE, which, according to the WhatIs definition currently hosted on the sister site SearchSoftwareQuality, stands for Integrated Development Environment. Every experienced software developer knows what an IDE is, but how would one define the term for someone just learning about IT?

What is an IDE?

Fundamentally, an IDE is a tool for assisting in the development of software, or more specifically, writing code. We all know certain developers who praise tools like vi or Emacs and insist that any truly righteous software developer needs nothing more than a text editor and a compiler to develop software, but for the less worthy members of the congregation, a tool like NetBeans, Eclipse or JetBrains’ IntelliJ certainly makes life a lot easier. I guess that really is the most important point: an IDE makes writing code easier.

I used to teach a Java programming course. We’d force students to use nothing but Notepad++ and the Java compiler for the first week. In the second week, we’d introduce Eclipse, and their jaws would drop. Simple things like color-coded keywords, automatic syntax formatting, incremental compilation that identified syntax errors as soon as they occurred, IntelliSense, auto-complete features and suggested solutions to existing problems were just some of the features that made those new students wide-eyed. Of course, the aforementioned items don’t differentiate an IDE much from text-processing software like Microsoft Word or OpenOffice Writer, so any explanation of the term IDE must explain what differentiates the development tool from a modern word processing program.

With tools like IntelliJ and Eclipse, the editor is the central piece, but the IDE also includes windows or views that act as portals into the project as a whole. While editing a piece of software, there might also be a visual element displayed in the IDE that shows all the external files the code currently being edited references. Another window might show warnings about existing code snippets a hacker might be able to exploit, while yet another might show the current status of the latest CI run and how many unit tests are failing on the current build.

The core element of any IDE is of course the editor, but the beauty of an IDE is that it can provide a variety of other mechanisms that allow a developer to look beyond the specific slice of code they are currently working on and instead get a broader picture of the project as a whole. Most IDEs allow a developer to view the structure of the database, search various feature and bug-fix branches of the repository, edit build scripts that might be written in Gradle or Ant, and even submit bug reports or update project tracking tools like JIRA.

Putting the ‘I’ in IDE

An IDE’s main focus is to help software developers write code, be it Java, HTML, C# or Scala. But any non-trivial software project has an intimidating number of dependencies and integration points. Perhaps that’s why the “I” in IDE stands for integrated: a development environment must not only assist in the writing of code, but also assist in integrating that code into other parts of the system. In fact, one of the most exciting advances to happen to IDEs in recent years is the introduction of plugins, which are almost like power-pills the tool can consume to gain new capabilities. A basic installation of a popular IDE like Eclipse or NetBeans can easily be given the capability to interact with a Git repository, run Gradle scripts or execute a suite of unit or performance tests simply by downloading and installing the required plugin. As a result, an IDE can not only assist in the writing of code, but also grow into a tool that interacts with a variety of different systems that participate in the management of the overall application lifecycle.

I’m not too sure how well this definition works in terms of explaining succinctly what an IDE is or what it does, but I do believe it covers most of the pertinent points. If any of the readers have insights on what I might have missed, let me know in the comments.

 


May 24, 2016  12:05 PM

The DevOps train has arrived and it’s time to hop on

Profile: Jodie Ng

DevOps has been an exciting shift in the technological landscape, and if you haven’t already jumped on the bandwagon, perhaps it’s time your organization did.

Understandably, you can’t transition to DevOps if you don’t know what it is, so start at the beginning. The goal, in a nutshell, is to enhance collaboration and cooperation between software developers and IT operations staff, which is probably how the name was born. One of the most important sentiments to remember is that DevOps is neither a tool nor a technique. It’s a cultural change, one that encourages IT professionals to adopt a new mindset as we collectively drift away from Waterfall methodologies.

The movement is becoming more and more common among IT businesses, and understanding its best practices can only bring more value to your company. Among the practices cited: continue to refine the process and keep fueling the momentum, focus on performance, create tight feedback loops and redefine your skill sets.

Documentation is also important when discussing the DevOps process. In fact, expert Chris Riley argues that it’s a necessity.

“The fact that documentation and governance have been synonymous with slow is a matter of historical baggage,” he writes. “The same automation we bring to software releases also can be applied to documentation. And DevOps documentation is not going away: it’s transforming.”

According to Riley, the sources of DevOps documentation include code, configuration management scripts and application performance logs, as well as infrastructure logs, alerting tools and component monitoring.

Continuous integration and delivery have emerged as the next chapters of the Agile and DevOps book. But why should that matter? Why should enterprises keep tabs on solutions that take CI further? Key benefits include shorter delivery times, better quality and a higher level of adaptability to deal with security, compliance and availability challenges.

CloudBees, a provider of continuous delivery solutions powered by Jenkins CI, said, “When you enable cheap, low-risk experimentation through continuous delivery, you can direct business investments with more information and uncover opportunities you would otherwise completely miss.”

But maybe the real extension of DevOps is DevSecOps. According to Gartner, what teams need to make way for is security professionals. Either way, DevOps or DevSecOps, these methodologies are here, and it’s time to buy a ticket and board the train.


May 19, 2016  3:51 PM

Big data isn’t going to go mainstream, it’s already there

Profile: Jodie Ng

According to IDC Directions, big data has already gone mainstream.

Big data is undeniably becoming more widespread as it continues to grow. Part of the reason is that many things are fueling big data, but what’s interesting is that big data, in turn, is driving cognitive computing growth.

Carl Olofson, IDC research vice president for databases and data tools, reiterated at the IDC Directions 2016 conference that NoSQL technology is just as influential in big data trends. TechTarget reporter Jack Vaughan writes “they are not replacing existing relational apps. Instead they are ushering in new apps with a new class of functionality – one very much aligned with emerging Web and mobile operations.”

A post last week touched upon big data and what that really means. It briefly explained that companies are collecting and analyzing an overwhelming amount of data on a massive scale, but what it did not discuss is storage. Organizations are hoarding and mining all this data, but where does it go? Where can it go?

Paul Turner, chief marketing officer at Cloudian, talks about the need for IT to implement a cloud capacity storage layer, as the long-standing industry standard, RAID storage, no longer suffices. He encourages professionals to ask two questions: First, how do they plan to consume storage? And second, is there a need to move to a cloud-based, capacity-oriented object storage model?

Storage isn’t the only thing riding the big data wave. In recent years, machine learning methods have hopped onto the big data train as well. In fact, according to a podcast by TechTarget reporters Jack Vaughan and Ed Burns, growing evidence suggests many professionals turn to machine learning once big data has accumulated. Use cases cited include risk estimation in insurance, credit scoring and digital ad placement.

It’s time we all board the big data train and understand how it’s growing, what its storage needs are and how methods like machine learning can benefit from big data.


May 11, 2016  3:55 PM

How big data can help your enterprise

Profile: Jodie Ng

You hear the phrase big data a lot. But what does it really mean?

Big data is defined as “any voluminous amount of structured, semi-structured and unstructured data that has the potential to be mined for information.” That’s great; it means all businesses need to do, in a nutshell, is put these large collections of data to work in an actionable way.

Businesses today are looking to extract value from this overwhelming amount of information. At the end of the day, the data is meaningless if it doesn’t say something, right? Part of the big data challenge is knowing what to use, but it’s equally important to know what not to use. Companies need to be selective about what they analyze so they don’t drown.

In a survey conducted by Capgemini Consulting in November 2014, 79% of participants reported they had yet to fully integrate all of their data sources. Other implementation issues included data silos, disconnects between groups and ineffective data governance. That said, every challenge has a solution, such as investing in tools built to tackle big data problems.

Big data applications are of use too, but it’s just as easy to get lost in the hype around them. So before you go shopping for them, it’s important to be able to identify the most common use cases. According to Cloudera CEO Tom Reilly, who spoke at the Structure Data conference earlier this year, big data apps fall into three distinct categories: customer insight, product insight and business risk.

Algorithms can be equally helpful in cleaning up the big data mess, but the challenge is identifying which algorithms to use. Thankfully, as expert George Lawton reports, a new class of deep learning algorithms can help overcome this challenge.

“In essence, this approach makes it possible to identify hidden patterns buried in large troves of data,” Lawton said. “Although the basic deep learning techniques have been around for decades, they were constrained to work on a single computer. Promising new architectures are now making it possible to scale these deep learning systems to work in the cloud.”

One benefit big data can bring that probably isn’t as obvious is improved business processes. According to Lawton, McLaren Applied Technologies explored just that, taking an approach that is “somewhat akin to bringing a sim-city like view to the enterprise.” It allows analysts to “tinker with different approaches to optimize important metrics.”

Big data is growing, and the rise of mobile, IoT and Web applications is driving that growth. What will your enterprise do to capitalize on the trend?


May 10, 2016  10:29 AM

What does Android look like from a Mountain View? Insights on Google I/O

Profile: Cameron McKenzie

So, they’ve moved the Google I/O conference away from downtown San Francisco out to Mountain View, where Google has a big stake in the Shoreline Amphitheater. I’ve got mixed emotions about the move. San Francisco is a beautiful city, but it’s also not much larger than my backyard. When they try to shoehorn ten thousand people into a week-long conference at the Moscone, as they do with JavaOne, the hotels start implementing ridiculous surge pricing that nobody should be forced to pay, and anyone who doesn’t book a hotel a month in advance, or isn’t willing to pay $500 a night to sleep in a cardboard-box-quality room in the Tenderloin district, is going to have to bus into the city every day from the Burlingame area by the airport.

So maybe a nice, big venue with an open-air amphitheater for the keynotes and access to cheaper hotels in San Jose and Palo Alto isn’t a horrible idea. Still, there’s something to be said for being able to walk back and forth to the venue instead of the discomfort of a crowded bus, so the Mountain View venue isn’t without its drawbacks.

Digging into the meat of the conference

TheServerSide has Barry Burd, Android advocate and prolific author of various Java and Ruby For Dummies books, on site reporting back about what’s new at this tenth Google I/O conference. Take a look at his full account of the big things that happened on the first day of the conference, including a variety of product announcements and feature improvements for new products like Google Duo and Google Allo, and stalwarts like Android Studio and the Play Store.

Google Allo and Google Duo shine bright at Google I/O 2016

 

Follow Barry too: @allmycode

Books penned by Barry Burd:

Java For Dummies
Android Application Development All-in-One For Dummies 
Beginning Programming with Java For Dummies
Java Programming for Android Developers For Dummies

 


May 5, 2016  3:59 PM

APM and programming do not need to be that hard

Profile: Jodie Ng

We live in an information society where new advancements have consistently made the technological landscape an exciting one. And programming obviously plays a crucial role in making these new technologies possible.

Programming, as contributor Joseph Ottinger points out, is about function and not form. He argues that it’s about accomplishing something, “not about conforming with a model of the real world.” It should be clear what you want your code to execute. If you were to explain how your favorite sports team is performing, you wouldn’t detail every play made in every game. You’d provide the highlights in a clear, succinct manner.

But the thing is, in order for consumers to enjoy all these shiny new toys, businesses really need to step up their game when it comes to application performance management.

The importance of the end-user experience cannot be stressed enough, yet it seems to be an area a lot of organizations need to improve on. As TheServerSide.com contributor Jason Tee puts it, “if a company builds its success on technology, software failure and application downtime can have far reaching consequences.” To be more specific, there are at least seven ways businesses suffer from application failure, ranging from brand damage to a stock dip to loss of usage.

Oh, and of course we can’t forget about social media. In a world where there is immediate feedback, bad reviews in an app store can scare away new, potential customers. Besides, I’m sure we can all imagine how relentless social media can be.

It really comes down to an urgent need to develop a strategy for both improving and maintaining performance. Sometimes I wonder why it isn’t as big a priority for organizations as it should be. And I get it: this is one of those things that’s easier said than done, but it doesn’t necessarily have to be hard. For example, in the case of Web app performance, it can be as simple as embracing HTTP/2.
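As a rough illustration of how low that bar can be, here’s a minimal sketch using the java.net.http client (which arrived in later JDKs, so treat it as an illustration rather than a period-accurate recipe) to request HTTP/2 and report which protocol version was actually negotiated:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class Http2Check {
    public static void main(String[] args) throws Exception {
        // Prefer HTTP/2, falling back to HTTP/1.1 if the server can't negotiate it
        HttpClient client = HttpClient.newBuilder()
                .version(HttpClient.Version.HTTP_2)
                .build();

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        // Reports the protocol version the server actually spoke
        System.out.println(response.version() + " " + response.statusCode());
    }
}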

Maybe application performance is just something I think needs to be constantly front and center. But one concept that has been gaining traction is containerization. Docker, in particular, has received a great deal of attention, but could it be the next step beyond the VM? Tee argues that it could “make the VM less of a virtual monster to wrestle with when it comes to resource management.”

What does it boil down to? Know what you want your application to do and code accordingly. From there, measure if it’s accomplishing what you intended for it to accomplish. Simple enough, right?


April 29, 2016  2:23 PM

Jenkins 2.0: A drama free, full increment release of the popular CI tool

Profile: Cameron McKenzie

A full increment release is usually a big deal. I mean, just recall for a moment all of the hoopla surrounding Java’s last full increment release… Actually, there’s never been a full increment release of Java since version 1.0 was unleashed upon the world twenty years ago. So yeah, full increment releases are kind of a big deal.

I don’t get the impression that the Jenkins jump to two-point-oh is by any means an Armstrongesque leap from a software standpoint. There’s no big migration planning that needs to be done to move from whatever version you’re currently on to the butler’s latest and greatest. All of the underlying metadata is the same, all of your settings get preserved, and there’s no notable data migration required, so the underpinnings of the technology are still fundamentally the same.

They’ve introduced the new Pipeline as Code feature out of the box, something you previously couldn’t get without doing some custom configuration and installing the popular workflow plugin, so that’s a bit of a big deal. Out-of-the-box support for workflow is a big steer, and with that shift in their sails, the enterprise community should now have a better idea of both how to use Jenkins and how Jenkins wants users to use Jenkins.

They’ve also bundled a bunch of plugins with the installation package, so new users will have a full-featured product right after unpacking the binaries. That’s an improvement, because before, there was a need to download and install things like the Git or Gradle plugins, and with about fifty different plugins available with the word Git in their name, it could get confusing for continuous integration neophytes. So they’ve eliminated a few barriers to entry for new users, which will hopefully transition those thousands of downloads into real, live implementations.

Security takes center stage with Jenkins 2.0

Security has come front and center as well. Previously, the Jenkins WAR had security turned off by default, but that became a bit of a worry as naive customers deployed their all-access, continuous build tool into the cloud without so much as an authentication challenge. Now a basic installation requires at least some due diligence in the form of providing a username and a password.

My take? Jenkins 2.0 really does seem to me like a drama-free release. From what I can see, the full increment is as much a mental and emotional move as it is technology driven. I think the Jenkins team looks back at the 1.0 release and wishes they could have done a few things differently, but the community didn’t want to tighten any bolts or drain any pipes in a minor release for fear of upsetting their users. Now, with Jenkins 2.0, we have a mature product that has evolved to support the common workflows most organizations wish to use, while providing simple and sensible defaults that will make the art of continuous deployment more user friendly. There don’t appear to be any serious migration issues, backwards compatibility problems or deprecated APIs that are going to send the enterprise community all aflutter. It’s just a nice, simple, drama-free release.

By the way, TheServerSide spoke with unethical blogger and Jenkins community leader R. Tyler Croy about the release, with that interview soon to be made available as both a feature article and a podcast, so stay tuned.


