Coffee Talk: Java, News, Stories and Opinions


October 4, 2017  10:01 AM

Java SE 9 approaches Atari-like performance at JavaOne 2017

Cameron McKenzie

As things get better, they often get slower, making better things worse. Far too often, that’s how things work in the tech sector, which is why I’m glad to see the architects of Java SE 9 bucking this trend in the latest full version release.

When I think about application performance, I think back to the days when I played my Atari 2600 as a kid. I’d shove in the Space Invaders cartridge and the only delay between clicking the on switch and engaging with the aliens was the speed of light traveling between the television set and my eyes. I stopped playing video games when Sony improved the gaming console so much that any time my Dig Dug died, I’d have to wait a minute and a half while the CD-ROM spun and the Fygars reloaded. Video games got better, and because of that they got slower, which made them worse than they were before. Even today, I long for the performance of an Atari 2600.

It’s a demoralizing cycle that asserts itself in a variety of areas of the tech sector. My 8 gig Android phone is unusable after the latest OS update. Windows 10 won’t even install on my old Lenovo laptops, which run just fine with XP. And even if I bought a new phone, a new desktop and a new laptop with the most expensive hardware Fry’s Electronics is willing to sell me, none of it would boot up as fast as my old Atari.

I doubt that an Atari 2600 was the inspiration as Oracle’s language architects worked on Java SE 9, but it may as well have been, because Java SE 9’s new module system is making Atari-like performance a real possibility.

Atari-esque performance and Java SE 9

The highlight of every JavaOne keynote happens when Oracle’s chief language architect Mark Reinhold takes the stage. Reinhold doesn’t talk in superlatives as do most other keynote speakers. Reinhold talks Java and always shoots straight about where we are in the evolution of the language. At JavaOne 2017, Reinhold demonstrated Java SE 9’s evolution beyond the simple classpath model and into an age of module isolation. It’s easy to tell that the Java language team is proud of this achievement.

The evolution of Java SE 9

Now there’s a plethora of reasons to be excited about modularity’s introduction, but in my opinion, Project Jigsaw’s greatest contribution to Java SE 9 is the fact that it not only makes software development with the JDK better, but it makes the applications we develop faster as well.

During his JavaOne 2017 keynote, Reinhold engaged in a little live coding in which a simple, module-based Java SE 9 application was created. The whole thing was deployed to Docker, and when the 261 meg container was run, the compulsory Hello World message was displayed. That was impressive in itself, but what ensued immediately after this little demonstration can only be technically described as witchcraft.

After the first Docker build, Reinhold remade the container but employed the new Java SE 9 tool JLink. “Java finally has a linker,” said Reinhold. “It’s an optional step, but it’s a very important one.” With JLink, any of the 26 modules into which the JDK has been divided that aren’t used by the application get pruned away. The resulting rebuild with JLink created a new container that impressively tipped the scales at just under 39 megs.
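For anyone who wants to try this at home, the workflow looks something like the sketch below. The module name com.example.hello and the paths are my own inventions, but the jlink flags are the standard ones that ship with JDK 9.

// module-info.java -- a minimal module declaration for a Hello World app.
// Declaring dependencies explicitly is what tells JLink which of the
// JDK's modules can be pruned away.
module com.example.hello {
    requires java.base; // implicit in every module, shown here for clarity
}

// Once compiled, a trimmed runtime image can be linked with something like:
//
//   jlink --module-path $JAVA_HOME/jmods:mods \
//         --add-modules com.example.hello \
//         --output hello-runtime
//
// The hello-runtime directory contains only the modules the application
// actually requires, which is what shrinks the resulting Docker image.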

With Java SE 9, Reinhold has not only delivered a better JDK, but he’s also delivered a system that can be configured to be faster and to have a much smaller footprint. The Java team hasn’t just improved the functionality of the Java SE 9 platform; at JavaOne 2017 they’ve shown how they’ve improved upon important non-functional aspects of it as well.

Prognosticating about Java SE 9 performance

Now I should be careful to draw a line between a container’s footprint and actual performance. I’ve deployed plenty of Java applications to WebSphere servers hosted on big, bare metal behemoths, and I doubt the presence of an unused Swing package sitting on the file-system inside of the JDK ever had a big impact on the performance of my e-commerce apps. But a module system does allow for a variety of tricks, such as the lazy loading of components, that developers can start taking advantage of in their code. And being able to move smaller Docker images across the network when updates happen or patches need to be applied will have a real, measurable impact on the performance of administrative and infrastructure tasks. The benefits of the Java SE 9 environment’s newfound modularity will assuredly reach far and wide.
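To make the lazy loading point concrete, the module system dovetails with the ServiceLoader API, where provider modules declare ‘provides’ clauses and consumers declare ‘uses’ clauses in their module-info files. Here’s a minimal sketch; the PaymentProcessor interface and the class names are hypothetical.

import java.util.ServiceLoader;

// A hypothetical service interface; provider modules would declare
// "provides PaymentProcessor with SomeImpl" in their module-info,
// and the consuming module would declare "uses PaymentProcessor".
public interface PaymentProcessor {
    void process(String orderId);
}

// ServiceLoader is lazy: providers are located and instantiated only as
// the iteration proceeds, not when the application starts up.
class PaymentDispatcher {
    void dispatch(String orderId) {
        for (PaymentProcessor p : ServiceLoader.load(PaymentProcessor.class)) {
            p.process(orderId); // each provider is loaded on demand
        }
    }
}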

It was an uphill battle getting Project Jigsaw finalized, ratified and packaged into the Java SE 9 platform before the Java community descended upon San Francisco for the JavaOne 2017 conference, but Reinhold and the rest of the Java language team made it happen. It’s an impressive feat, and it’s one for which they are deservedly proud.

You can follow Cameron McKenzie on Twitter: @cameronmcnz

October 3, 2017  5:29 PM

Three reasons to start digital transformation projects

Jan Stafford

Customers have high expectations of applications today, demanding personalized experiences, super-fast responsiveness and business value with each transaction. That pressure is one of the top drivers for businesses to initiate digital transformation projects, according to Tata Consultancy Services (TCS) experts Sunder Singh and Kallol Basu. Another is the speed of IT innovation, which is enabling the automation of all business processes, said the TCS consultants at Oracle OpenWorld 2017 in San Francisco this week.

In our recent conversation, the TCS duo explained the following three key reasons why businesses should start digital transformation projects. Basu is the TCS Oracle Practice transformation change management consultant, and Singh is the global head, Oracle Practice, Enterprise Solutions. Here at Oracle OpenWorld/JavaOne, Singh is a co-speaker in the sessions “Building Smarter Enterprises” and “Driving Speed, Scale and Efficiency.”

Three drivers for digital transformation projects

#1: Customer expectations: Rising customer expectations include demands for a more personalized experience, faster responses and the desire to feel valued at each touch point of the customer journey, according to Basu. Singh noted that it is now the norm for human and machine interaction to be simple, intuitive, cheap and fast. “Companies are transforming themselves to fit into the so-called norms and de facto standards of user experience,” Singh explained.

Basu sees digital transformation placing people back at the heart of the company. “The silo mentality is not suited to digital transformation, which relies on openness and a transversal approach.”

#2: Increasing reliance on and capabilities of data analytics: Analytics allow businesses to put their fingers on the pulse of the customer, said Basu. Singh added: “The power of machines to manage the velocity, variety and volume of data like never imagined before opens up a new art of possibilities for analytics and insights in a hyperconnected world.”

#3: Computing has put business competition in hyperdrive: IT has increased a company’s ability to compete as traditional boundaries vanish, allowing new competitors to enter the market, according to Singh. Technology advancements and the availability of storage, compute and network capacity at throwaway prices – better, cheaper and faster – have opened up a plethora of possibilities.

“With digital and cloud, one can start a business with practically nothing from anywhere at any time and usurp your very existence,” said Singh. “The entry barrier is gone. Heavy capex and years of lead time to start business is gone.”

What about your business?

Are there other reasons why your company has started or is planning a digital transformation project? Or is digital transformation not on your agenda? Or have your company’s automation projects already made it what is known as a digital business?


October 3, 2017  5:04 PM

Java SE 9 a perfect fit for a nimble, scalable and serverless future

Daisy McCarty

Last year’s JavaOne conference generated quite a bit of excitement with the discussion of many of the new Java SE 9 features. But this year’s event is already proving to be more groundbreaking. From making every aspect of Oracle’s Java EE open source to introducing Functions as a Service, each speaker in the opening keynote brought a little more excitement to the crowds gathered in San Francisco, California.

An open Java SE 9

The biggest announcement during the keynote was the intention to make the Eclipse Foundation the new steward of Java EE. All the elements of the commercial version of the Oracle JDK will become available in the OpenJDK as well, giving developers unprecedented access to features that were previously available only to the enterprise elite. In addition, Oracle committed to stepping up the speed of releases. According to Mark Reinhold, Chief Architect of Java, the new timeline of releasing every six months instead of every few years accomplishes a couple of goals. “It helps us move forward and do so faster.” But speed isn’t the only focus. “Features go in only when they are ready. If a feature misses a current release, that’s OK. Because it’s only six months to the next one. It’s fast enough to deliver innovation at a regular pace, and slow enough to maintain high levels of quality.”

A nimble Java SE 9

According to Mark Cavage, VP of Product Development at Oracle, Java SE 9 offers over 100 new features and streamlines the JVM with better support for containers that will allow the platform to evolve in new ways. “You can get just enough Java and just enough JVM to right-size the JVM for a cloud world.” Niklas Gustavsson, Principal Architect at Spotify, spoke about how his organization has gradually shifted more and more of its services to Java as the need to scale its cloud-based offering has grown with its user base.

With 140 million active users and 3 billion songs streamed per day, the service had to handle 4 million requests to the backend per second. Over time, Spotify shifted more and more of its services from Python to Java. Better stability and scalability were just two benefits, but transparency was just as important. With the JVM, “We could observe what was happening in runtime in two ways: collecting runtime metrics on the platform itself or profiling the service while running in production.” Spotify deliberately used a microservices architecture to make it easier to shift to Java piece by piece as it made sense to do so. This approach allowed them to scale each service separately to meet the needs of a wide range of user behaviors and ensured that any outages were well-contained.
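Gustavsson didn’t share code, but the kind of runtime metrics collection he described is available out of the box through the JVM’s JMX MXBeans. A minimal sketch:

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.ThreadMXBean;

// Reads a few of the JVM's built-in runtime metrics via JMX MXBeans.
// In a real service these values would be exported to a metrics backend
// on a timer rather than printed to the console.
public class JvmMetrics {
    public static void main(String[] args) {
        MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();

        System.out.println("Heap used (bytes): "
                + memory.getHeapMemoryUsage().getUsed());
        System.out.println("Live threads: " + threads.getThreadCount());
        System.out.println("Total started threads: "
                + threads.getTotalStartedThreadCount());
    }
}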

Containers and serverless architecture

Kubernetes was championed by Cavage as the optimal open-source container orchestration option for the Java community. Heptio CEO Craig McLuckie spoke in more detail about the ability of containers to simplify operations: “Containers are hermetically sealed, highly predictable units of deployment with high portability.” With the use of dynamic orchestration technology, much of the work of operations can be automated. McLuckie also pointed out that containers, in a sense, may spell the demise of middleware as it currently exists, separating it into two different layers, with containers on one side and application-level libraries on the other. And flexibility is inherent: as well as containers and the cloud work together, McLuckie pointed out that the pairing is optional, since Kubernetes can just as easily be deployed on premises.

On the developer side, going serverless was highlighted by Mark as “a Compute abstraction that takes away all notion of infrastructure from the user/developer.” It could be applied to many different use cases from compute to DB to storage, allowing developers to focus on functions and services that meet business needs.

Functions as a Service

FaaS was showcased in the form of the Oracle FN project headed by VP of Product Development Chad Arimura. This three-pronged technology starts with the FaaS platform, which should allow developers to build, deploy, and scale in a multi-cloud environment—while running FN locally on their laptops. The Function Development Kit (FDK) is the second part of the puzzle: “It allows developers to easily bootstrap functions and has a data binding model to bind the input to your functions to common Java objects and types.” The FDK is Lambda compatible and has Docker as its only dependency. The FN Flow system is the final piece, enabling developers to build higher-level workflows and orchestrate functions in complex environments. Arimura showed off Oracle’s commitment to open source with a few mouse clicks at the end of his presentation, providing the whole world with access to the project.
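To give a flavor of the programming model, a function in the FDK is just a plain Java method, with the platform handling the input binding and HTTP plumbing. The class and method names below are my own; treat this as a sketch rather than official FN project sample code.

// A plain Java class; no framework imports are required for a simple
// string-in, string-out function.
public class HelloFunction {

    // The FDK binds the incoming request body to the String parameter
    // and writes the return value back as the response.
    public String handleRequest(String input) {
        String name = (input == null || input.isEmpty()) ? "world" : input;
        return "Hello, " + name + "!";
    }
}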

More to come…but hard to top this year

The keynote ended with a review of some of the same features discussed in 2016, with Jigsaw and Project Panama receiving substantial attention. The Amber project for right-sizing language ceremony was mentioned and will no doubt be showcased at next year’s JavaOne. Another contender is the Loom project which is still in the discussion phase. While each new conference reveals fresh features, it will be difficult to beat the excitement of having unlimited access to every aspect of Java SE 9.


October 2, 2017  2:40 PM

From reactive design to JUnit 5, here’s what’s hot at JavaOne 2017

Cameron McKenzie

What’s trending at JavaOne 2017? A simple way to tell is to search through the conference catalog and take note of the various sessions that are overbooked and no longer adding attendees to a wait-list. Taking that approach, here’s a quick look at a few of the sessions that JavaOne 2017 attendees will be missing out on if they weren’t savvy enough to register early for a seat.

Lambdas still loom large at JavaOne 2017

A few years ago, when Java 8 came around, everyone was excited about the fact that Lambdas were finally being shoehorned into a full version release. This year, it looks like everyone is getting around to actually using them, as not even an 8:30am start on a Monday morning is scaring people away from Java Champion José Paumard’s Free Your Lambdas session.
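If you need a refresher before sitting in Paumard’s session, the appeal of Lambdas boils down to sketches like the one below, where an anonymous Comparator class collapses into a one-liner or a method reference. The class name is invented for illustration.

import java.util.Arrays;
import java.util.List;

// Replacing pre-Java 8 anonymous inner classes with lambdas and
// method references.
public class FreeYourLambdas {
    public static void main(String[] args) {
        List<String> speakers = Arrays.asList("Paumard", "Reinhold", "Goetz");

        // Before Java 8 this sort required an anonymous Comparator class.
        // Now it's a concise lambda, or an even terser method reference.
        speakers.sort((a, b) -> a.compareTo(b));
        speakers.sort(String::compareTo);

        speakers.forEach(System.out::println);
    }
}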

Introductory and advanced reactive design

Every time I talk to my good friends at either Payara or Lightbend, they’re flogging the merits of reactive development and design. Continuing to spread the word, Lightbend’s Duncan DeVore will be joining up with IBM’s Erin Schnabel to provide an Introduction to Reactive Design, while Payara’s Ondrej Mihalyi and Mike Croft will be stepping it up a notch by tag-teaming a hands-on lab entitled Traditional Java EE to Reactive Microservice Design.

Interestingly, these hands-on labs are taking place at the Hilton by Union Square, a good ten-minute walk from the main conference grounds at the Moscone Center. In years past, the whole conference took place in a cluster of hotels next door to and across the street from the Hilton. This year, everything but the hands-on labs takes place alongside Oracle OpenWorld at Moscone.

Romancing the Java 9 stone

The conference is called JavaOne, so it comes as no surprise to discover that a session entitled JDK 9 Hidden Gems would play to a packed house. Back in the Moscone West building, Oracle’s JVM Architect Mikael Vidstedt and Intel Corporation’s Senior Staff Software Engineer Sandhya Viswanathan will avoid the Java 9 hype by skipping over big ticket items like Project Jigsaw and Java 9’s multi-jar deployment capabilities and instead, according to the syllabus, “talk about JDK 9 optimizations spanning support for larger vectors with enhanced vectorization, optimized math libraries, cryptography and compression acceleration, compact strings, new APIs with associated optimized implementation, and many more features that help big data, cloud, microservices, HPC, and FSI applications.” To a Java aficionado, a session description like that is more tempting than candy is to a baby. Here’s hoping they can get through as many of those topics as possible in the time allotted.

Keeping Roy Fielding’s dream alive

Java developers continue to take a keen interest in developing RESTful web services, as the session from e-Finance enterprise architect Mohamed Taman, entitled The Effective Design of RESTful APIs, will be running at capacity. Speaking about more than just the development of RESTful APIs in the enterprise sphere, Taman’s session addresses how to create multi-channel RESTful web services that interact seamlessly with IoT components, embedded devices, microservices and even mobile phones. Roy Fielding would no doubt be pleased.

Boyarsky demystifies JUnit 5

And finally, it should be noted that if you want to meet popular CodeRanch marshal Jeanne Boyarsky, you’re not going to be able to do it by walking in at the last minute on her hands-on session about solid software testing practices, because that Hilton attraction is emphatically overbooked. Co-presented with enterprise software architect Steve Moyer, the hands-on session is entitled Starting Out with JUnit 5.

I’m actually surprised that a session on JUnit would go to max capacity. It’s hard enough to get developers to write good JUnit tests at the best of times, let alone attend a technical session on the topic at a time when the beer garden calls. I’m postulating that Boyarsky’s reputation and online persona are responsible for packing the house. Or it could be due to the fact that the session’s syllabus reads more like a fear-inducing warning than a simple overview: “The difference between JUnit 4 and 5 is far bigger than the difference between 3 and 4. JUnit 5 is almost up to the GA release, so it is high time to learn about next-generation JUnit.”
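For those who can’t get a seat, here’s a minimal sketch of what the new generation looks like. The test class is invented, but the annotations and assertions come straight from the JUnit 5 (Jupiter) API.

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

import org.junit.jupiter.api.DisplayName;
import org.junit.jupiter.api.Test;

// JUnit 5 in brief: new org.junit.jupiter packages, test classes and
// methods no longer need to be public, @DisplayName is new, and
// assertThrows replaces JUnit 4's @Test(expected = ...) attribute.
class CalculatorTest {

    private int divide(int a, int b) {
        return a / b;
    }

    @Test
    @DisplayName("division works for ordinary operands")
    void divides() {
        assertEquals(2, divide(10, 5));
    }

    @Test
    @DisplayName("dividing by zero throws")
    void divideByZero() {
        assertThrows(ArithmeticException.class, () -> divide(10, 0));
    }
}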

So that’s what’s trending today at JavaOne 2017. It’s largely what you’d expect from a group of forward-thinking software engineers: learning about new topics like reactive design, getting the most out of the language features in both JDK 8 and Java 9, learning how to write RESTful APIs that integrate multi-channel devices and, finally, learning how to write tests to make sure that any code that gets written is reliable and robust. As I said, it’s pretty much what you’d expect from this JavaOne 2017 crowd.

You can follow most of these speakers on Twitter, and you probably should.

You can follow me, Cameron McKenzie, too: @cameronmcnz


September 26, 2017  8:05 PM

A bug fix always beats a round of risk assessments

Cameron McKenzie

When static code analysis tools identify a bug in the production code, there are two approaches organizations can take. The sensible one is to put a software developer or two on the problem and implement an immediate bug fix. The other option is to assemble the software team, debate the relative risk of not addressing the problem, and then choose not to do anything about the issue because the reward associated with doing so isn’t commensurate with the risk. You’d be surprised how often teams choose the latter approach.

The dangers of risk assessment

“Many organizations have an effective process for identifying problems, but no process for remediation,” said Matt Rose, the global director of application security strategy at Checkmarx. “Organizations do a lot of signing off on risk. Instead of saying ‘let’s remediate that’ they say ‘what’s the likelihood of this actually happening?'”

Sadly, the trend towards cloud-native, DevOps-based development hasn’t reversed this preference for risk assessment over problem remediation. The goal of any team that is embracing DevOps and implementing a system of continuous delivery is to eliminate as many manual processes as possible. A big part of that process is integrating software quality and static code analysis tools into the continuous integration server’s build process. But simply automating the process isn’t enough. “A lot of times people just automate and don’t actually remediate,” said Rose.

The bug fix benefit

There are very compelling reasons to properly secure your applications by implementing a bug fix. The most obvious is that your code has fewer identifiable issues, giving software quality tools less to complain about. “It doesn’t matter whether a bug is critical or non-critical. A bug is a bug is a bug. If you don’t act upon it, it’s not going to go away.”

“Many organizations have an effective process for identifying problems, but no process for remediation. Organizations do a lot of signing off on risk.”
-Matt Rose, the global director of application security strategy at Checkmarx.

The other benefit is the fact that the process of addressing a problem and coding a bug fix is actually an educational experience. Developers get informed of the problem, realize how a given piece of code may have created a vulnerability, and then they are given the opportunity to re-write the given function so that the issue is eliminated. “Working on vulnerabilities that are in your application and are real-world to you is going to teach you how not to make the same mistakes over and over again.”

So skip the risk assessments. If there’s a problem in your code, implement a bug fix. That will eliminate the risk completely.

You can follow Checkmarx on Twitter: @checkmarx
You can follow Cameron McKenzie too: @cameronmcnz


September 19, 2017  3:42 AM

Expert advice for JavaOne 2017 first-timers

Cameron McKenzie

If JavaOne 2017 is your first time attending the conference, it will serve you well to follow some advice and insights from a veteran attendee of the JavaOne and OpenWorld conferences.

The first piece of advice, which it is currently far too late to act upon, is to make sure you’ve got your hotel booked. Barry Burd wrote a JavaOne article for TheServerSide a couple of years ago that included some insights on how to find a last-minute hotel in San Francisco that isn’t obscenely far from the venue, although given the limited availability when I did a quick search on Expedia earlier this week, I’d say you’d be lucky to find a hotel in Oakland or San Jose for a reasonable price, let alone San Francisco.

Schedule those JavaOne 2017 sessions

For those who have their accommodation booked, the next sage piece of conference advice is to log on to the JavaOne 2017 session scheduler and reserve a seat in the sessions you wish to attend. Adam Bien’s session on microservices, Java EE 8 and the cloud is already overbooked. The Java platform’s chief architect Mark Reinhold’s talks on Jigsaw and Java modules already have a wait list, and the ask-the-architects session with Oracle’s Brian Goetz and John Rose is at capacity. The longer you wait to formulate your schedule, the fewer sessions you’ll have to choose from.

When choosing sessions, I find the speaker to be a more important criterion than the topic. Most speakers have a video or two up on YouTube of them doing a presentation. Check those videos out to see if the speaker is compelling. An hour can be quite a long time to sit through a boring slide show. But an exciting speaker can make an hour go by in an instant, and if you’re engaged, you’re more likely to learn something.

Skip the Oracle keynotes

One somewhat contrarian piece of advice I’m quick to espouse is for attendees to skip the Oracle keynotes, especially the morning ones. That’s not to say the keynotes are bad. But getting to the keynotes early enough to get a seat is a hassle, and you can’t always hear everything that’s being said in the auditorium. A better alternative is to stream the keynote from your hotel room, or better yet, watch the video Oracle uploads to their YouTube channel while you’re eating lunch.

But here’s why keynotes can take away from your JavaOne 2017 conference experience. For example, if you attend Thomas Kurian’s Tuesday morning keynote on emerging technologies and intelligent cloud applications, you’d miss Josh Long and Mark Heckler’s session on reactive programming with Spring 5. Actually, there’s a bunch of other sessions going on at that time, ranging from Martijn Verburg’s talk on surviving Java 9 to Stuart Marks’ talk on Java collections. If anything interesting gets said about new trends or technologies in a keynote, it’ll be covered extensively by the tech media. The same can’t be said for the nuggets of understanding that can be panned from attending a good JavaOne session.

Enjoy the party

The other big piece of advice? Enjoy San Francisco, especially if it’s your first time in the city. It’s the smallest alpha city in the world, but it is an alpha city. There are plenty of parties, meet-ups and get-togethers you’ll find yourself invited to, and it’s worth taking up any offers you manage to get. Having said that, keep an eye on how much gas you have left in the tank at the end of the day, because you want to be able to make it to all of the morning sessions you’ve scheduled for yourself.

If it’s your first time attending, I assure you that you’ll have a great time at JavaOne 2017, and with the new layout bringing JavaOne 2017 closer to the Oracle OpenWorld conference, this event should be better than any in recent memory. San Francisco is a great city, and the greatest minds in the world of modern software development will be joining you in attendance.


September 1, 2017  9:24 PM

Implementing cloud-native security means going back to your secure coding basics

Cameron McKenzie

There’s really nothing new under the sun when it comes to addressing security vulnerabilities in code. While there has been a great shift in terms of how server-side applications are architected, including the move to the cloud and the increased use of containers and microservices, the sad reality is that the biggest security vulnerabilities found in code are typically caused by the most common, well-known and mundane of issues, namely:

  1. SQL injection and other interpolation attack opportunities
  2. The use of outdated software libraries
  3. Direct exposure of back-end resources to clients
  4. Overly permissive security
  5. Plain text passwords waiting to be hacked

SQL injection and other interpolation attacks

SQL injections are the easiest way for a hacker to do the most damage.

Performing an SQL injection is simple. The hacker simply writes something just a tad more complicated than DROP DATABASE or DELETE * FROM TABLE into an online form. If the input isn’t validated thoroughly, and the application allows the unvalidated input to become embedded in an otherwise harmless SQL statement, the results can be disastrous. With an SQL injection vulnerability, the possible outcomes are that the user will be able to read private or personal data, update existing data with erroneous information, or outright delete data, tables and even databases.

Proper input validation and checking for certain escape characters or phrases can completely eliminate this risk. Sadly, busy project managers too often push unvalidated code into production, and the opportunity for SQL injection attacks to succeed persists.
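In Java, the standard remediation is to never concatenate user input into SQL text and to bind it through a PreparedStatement instead, which treats the input strictly as data. A minimal sketch, with the table and column names invented for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class UserLookup {
    // Vulnerable alternative: concatenating "name" directly into the SQL
    // text means input like  x' OR '1'='1  changes the meaning of the query:
    // String sql = "SELECT id FROM users WHERE username = '" + name + "'";

    // Safe: the ? placeholder is bound as a parameter, never parsed as SQL.
    public int findUserId(Connection conn, String name) throws SQLException {
        String sql = "SELECT id FROM users WHERE username = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getInt("id") : -1;
            }
        }
    }
}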

The use of outdated software libraries

Enterprises aren’t buying their developers laptops running Windows XP. And when updates to the modern operating systems they are using do become available, normal software governance policies demand applying a given patch or fix pack as soon as one comes along. But how often do software developers check the status of the software libraries their production systems are currently using?

When a software project kicks off, a decision is made about which open source libraries and projects will be used, and which versions of those projects will be deployed with the application. But once decided, it’s rare for a project to revisit those decisions. Yet there are reasons why new versions of logging APIs or UI frameworks are released, and it’s not just about feature enhancements. Sometimes an old software library contains a well-known bug that gets addressed in subsequent updates.

Every organization should employ a software governance policy that includes revisiting the various frameworks and libraries that production applications link to. Otherwise they face the prospect that a hidden threat resides in their runtime systems, and the only way they’ll find out about it is if a hacker finds the vulnerability first.

Direct exposure of back-end resources to clients

When it comes to performance, layers are bad. The more hoops a request-response cycle has to go through in order to access the underlying resource it needs, the slower the program will be. But the desire to reduce clock cycles should never bump up against the need to keep back-end resources secure.

The exposed-resources problem seems to be most common when doing penetration testing against RESTful APIs. With so many RESTful APIs trying to provide clients an efficient service that accesses back-end data, the API itself is often little more than a wrapper for direct calls into a database, message queue, user registry or software container. When implementing a RESTful API that provides access to back-end resources, make sure the REST calls are only accessing and retrieving the specific data they require, and are not providing a handle to the back-end resource itself.
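One way to honor that rule is to copy only the required fields into a purpose-built view object before anything leaves the API layer. Here’s a minimal JAX-RS sketch with invented entity and field names; the data access call is stubbed out, since the point is only what crosses the wire:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/customers")
public class CustomerResource {

    // Stand-in for a full database entity with keys, audit columns, etc.
    static class CustomerEntity {
        long id;
        String name;
        String city;
        String passwordHash; // must never leave the server
    }

    // A narrow view object exposing only what clients actually need.
    public static class CustomerView {
        public String name;
        public String city;
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public CustomerView getCustomer(@PathParam("id") long id) {
        CustomerEntity entity = lookup(id); // hypothetical data access call
        CustomerView view = new CustomerView();
        view.name = entity.name;
        view.city = entity.city;
        return view; // the entity itself never crosses the API boundary
    }

    private CustomerEntity lookup(long id) {
        CustomerEntity e = new CustomerEntity(); // stubbed for the sketch
        e.id = id;
        e.name = "Example";
        e.city = "Toronto";
        return e;
    }
}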

Overly permissive security

Nobody ever sets out intending to lower their shields in such a way that they’re vulnerable to an attack. But there’s always some point in the management of the application’s lifecycle in which a new feature, or connectivity to a new service, doesn’t work in production like it does in pre-prod or testing environments. Thinking the problem might be access related, security permissions are incrementally reduced until the code in production works. After a victory dance, the well-intentioned DevOps personnel who temporarily lowered the shields in order to get things working are sidetracked and never get around to figuring out how to restore the originally mandated security levels. Next thing you know, ne’er-do-wells are hacking in, private data is being exposed, and the system is being breached.

Plain text passwords waiting to be hacked

Developers are still coding plain text passwords into their applications. Sometimes plain text passwords appear in the source code. Sometimes they’re stored in a property file or XML document. But regardless of their format, usernames and passwords for resources should never appear anywhere in plain text.

Some might argue that the plain-text password problem is overblown as a security threat. After all, if it’s stored on the server, and only trusted resources have server access, there’s no way it’s going to fall into the wrong hands. That argument may be valid in a perfect world, but the world isn’t perfect. A real problem arises when another common attack, such as source code exposure or a directory traversal, occurs, and the hands holding the plain text passwords are no longer trusted. In such an instance, the hacker has been given an all-access pass to the back-end resource in question.

At the very least, passwords should be encrypted when stored on the filesystem and decrypted when accessed by the application. Of course, most middleware platforms provide tools, such as IBM WebSphere’s credential vault, for securely storing passwords, which not only simplifies the art of password management but also relieves the developer of responsibility if source code is ever exposed or a directory traversal does happen.
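For the do-it-yourself case, here’s a minimal sketch of the encrypt-and-decrypt mechanics using the JDK’s own javax.crypto APIs. In real life the key would be held in a KeyStore or a credential vault, never next to the data it protects:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

// Encrypts a password with AES-GCM rather than leaving it in plain text.
// This only illustrates the mechanics; key management is the hard part.
public class PasswordCrypto {
    public static void main(String[] args) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey key = keyGen.generateKey();

        byte[] iv = new byte[12]; // GCM-recommended 96-bit nonce
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] encrypted = cipher.doFinal(
                "s3cret-db-password".getBytes(StandardCharsets.UTF_8));

        // What the application does at startup: decrypt and use the value.
        cipher.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(128, iv));
        String decrypted = new String(cipher.doFinal(encrypted),
                StandardCharsets.UTF_8);
        System.out.println(decrypted);
    }
}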

The truth of the matter is, a large number of vulnerabilities exist in production code not because hackers are coming up with new ways to penetrate systems, but because developers and DevOps personnel simply aren’t diligent enough about addressing well-known security vulnerabilities. If best practices were observed, and software security governance rules were properly implemented and maintained, a large number of software security violations would never happen.

You can follow Cameron McKenzie on Twitter: @cameronmcnz


August 14, 2017  8:17 PM

Implementing a custom user registry to consolidate LDAP servers and Active Directory instances?

Cameron McKenzie

Should you implement a custom user registry to help mediate access to your various LDAP servers in order to simplify security tasks such as authentication and group association? The answer to that question is a resounding ‘no.’

The simple beauty of the custom user registry

On the surface, implementing a custom user registry is simple. While it differs slightly from one application server to the next, to implement a custom user registry you typically only have to write a Java class or two providing implementations for a set of methods that do things like validate a password or indicate whether a user is part of a given group. It’s easy peasy.

For example, to create a custom user registry for WebSphere, here is the IBM WebSphere UserRegistry interface that needs to be implemented, along with the 18 methods you need to code:

com.ibm.websphere.security.UserRegistry

1. initialize(java.util.Properties)
2. checkPassword(String,String)
3. mapCertificate(X509Certificate[])
4. getRealm
5. getUsers(String,int)
6. getUserDisplayName(String)
7. getUniqueUserId(String)
8. getUserSecurityName(String)
9. isValidUser(String)
10. getGroups(String,int)
11. getGroupDisplayName(String)
12. getUniqueGroupId(String)
13. getUniqueGroupIds(String)
14. getGroupSecurityName(String)
15. isValidGroup(String)
16. getGroupsForUser(String)
17. getUsersForGroup(String,int)
18. createCredential(String)

Now remember, the goal here is not to invent a system for storing users. When implementing a custom user registry, there is typically an underlying data store to which the application is connecting. So perhaps the purpose of the custom user registry is to combine access to an LDAP server and a database system that holds user information. Or perhaps there are three different LDAP servers that need consolidated access. Each of those systems will already have mechanisms to update a password or check if a user is part of a given group. Code for a custom user registry simply taps into the APIs of those underlying systems. There’s no re-inventing the wheel with a custom user registry. Instead, you just leverage the wheels that the underlying user repository already provides.
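To make that concrete, here’s a sketch of the kind of delegation a method like checkPassword might perform when the underlying store is an LDAP server reached through plain JNDI. The hostname and DN layout are invented, and a real implementation would escape the user name before building the DN:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

// Validates credentials by attempting an LDAP bind as the user. A real
// WebSphere UserRegistry implementation would wrap a call like this
// inside its checkPassword() method.
public class LdapPasswordChecker {

    public boolean checkPassword(String userSecurityName, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, "ldap://ldap.example.com:389");
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL,
                "uid=" + userSecurityName + ",ou=people,dc=example,dc=com");
        env.put(Context.SECURITY_CREDENTIALS, password);

        try {
            // If the bind succeeds, the credentials are valid.
            DirContext ctx = new InitialDirContext(env);
            ctx.close();
            return true;
        } catch (NamingException e) {
            return false; // bad credentials, or the LDAP server is down
        }
    }
}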

So it all sounds simple enough, doesn’t it? Well, it’s not. And there are several reasons why.

Ongoing connectivity concerns

First of all, just connecting to various disparate systems can be a pain. There’s the up-front headache of obtaining credentials and bypassing, or at least authenticating through, the firewalls and security systems already in place, and there’s the long-term headache of maintaining connectivity as SSL certificates expire or changes are made to the network topology. Connectivity is both an up-front and a long-term pain.

LDAP server optimization

And then there’s the job of optimization. Authenticating against a single user repository is time consuming enough, especially at peak login times. Now imagine there were three or four underlying systems against which user checks were daisy-chained through if…then…else statements. It’d be a long enough lag to trigger a user revolt. So even after achieving the consolidation of different LDAP servers and databases, time needs to be invested in figuring out how to optimize access. Sometimes having a look-aside NoSQL database where user IDs are mapped to the system in which they are registered can speed things up, although a failed login would likely still require querying each subsystem. Performance optimization becomes an important part of building the user registry, as every user notices when logging into the system takes an extra second or two.
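A minimal sketch of that look-aside idea appears below. The names are my own, and a production version would back the map with a shared store and handle eviction, but it shows the fast path a cache buys you and the full scan a failed login still costs:

import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.BiPredicate;

// Remembers which subsystem owns each user so that subsequent logins
// skip the daisy chain of registry checks.
public class RegistryRouter {

    private final Map<String, String> userToRegistry = new ConcurrentHashMap<>();
    private final List<String> registries;
    private final BiPredicate<String, String> authenticator; // (registry, user)

    public RegistryRouter(List<String> registries,
                          BiPredicate<String, String> authenticator) {
        this.registries = registries;
        this.authenticator = authenticator;
    }

    public boolean authenticate(String user) {
        String known = userToRegistry.get(user);
        if (known != null) {
            return authenticator.test(known, user); // fast path: one lookup
        }
        for (String registry : registries) { // slow path: scan them all
            if (authenticator.test(registry, user)) {
                userToRegistry.put(user, registry); // remember for next time
                return true;
            }
        }
        return false; // a failed login still had to query every subsystem
    }
}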

Data quality issues

And if there are separate subsystems, ensuring data quality becomes a top priority as well. For example, if the same username, such as cmckenzie, exists in two subsystems, which one is the record of truth? Data integrity problems can cause bizarre behavior that is difficult to troubleshoot. For example, cmckenzie might be able to log in during low usage times, but not during peak usage times, because during peak usage times, overflow requests get routed to a different subsystem. And even though the problems may stem from data quality issues in the LDAP subsystems, it’s the developers maintaining the custom user registry code who will be expected to troubleshoot the problem and identify it.

LDAP failure and user registry redundancy

Failover and redundancy are another important piece of the puzzle. It’s good to keep in mind that if the custom user registry fails, nobody can log into anything from anywhere. That’s a massive amount of responsibility for anyone developing software to shoulder. Testing how the code behaves when a given user registry is down, and figuring out how to make the custom user registry resilient when weird corner cases happen, is pivotally important when access to everything is on the line.

Ownership of the custom user registry

From a management standpoint, a custom user registry is a stressful piece of technology to own. Any time the login process is slow, or problems occur after a user logs into the system, the first place fingers will point is the custom user registry. When login, authentication, authorization or registration problems occur, the owner of the custom user registry typically first has to prove that it is not their piece that is the problem. And of course, there certainly are times when the custom user registry component is to blame. Perhaps a certificate has been updated on a server and nothing has been synchronized with the registry, or perhaps someone has updated a column in the home-grown user registry database, or maybe an update was made to Active Directory. The custom user registry depends on the stability of the underlying infrastructure to which it connects, and that is a difficult contract to guarantee at the best of times.

So yes, on the surface, a custom user registry seems like a fairly easy piece of software to implement, but it is fraught with danger and hardship at every turn, so it is never recommended. A better option is to invest time into consolidating all user registries into a single, high-performance LDAP server or Active Directory, and allow the authentication piece of your Oracle or WebSphere application servers to connect to that. For small to medium-sized enterprises, that is always the preferred option. That way, the software and hardware that host the user records can be optimized and tuned for redundancy and failover, rather than such problems being handled in code that was written in-house. It also allows you to point your finger at the LDAP server or Active Directory vendor, rather than at the in-house development team, when things go wrong.

Inevitably, there will be times when a custom user registry is required, and it has to be written, despite all of the given reservations. If that’s the case, I wish you the best of luck, and I hope your problems are few. But if it can be avoided, the right choice is to avoid, at all costs, the need to implement a custom user registry of your own.


August 14, 2017  3:43 PM

Gender and ethnic parity is not equivalent to workplace diversity

Cameron McKenzie

Former Google employee James Damore’s recently leaked memo about his old employer’s hiring practices has brought the discussion about IT hiring to the fore. After reading a vast number of articles written on the topic, it would appear that many believe the terms workplace diversity and gender representation are interchangeable. They of course are not, and conflating them is not only intellectually dishonest, but incendiarily disingenuous, to the point that doing so actually hinders progress toward the important goal of balanced gender and ethnic representation in the workforce.

How do you define diversity?

I ran for president of my University Student Council twenty-five years ago. One of the other candidates was an enlightened progressive whose main platform plank was to promote and improve diversity in all areas of the university. It was a message that was well received in the social sciences, law and humanities buildings, but it ran into a brick wall when it was trucked into engineering.

In compliance with all preconceived stereotypes, gender parity in the engineering department was a little lacking back then, but a few of those future train conductors were getting a bit tired of constantly being beaten with the ‘lack of diversity’ stick. A student stepped up to the microphone during question period and asked the candidate if she felt the engineering department lacked diversity. After the candidate stumbled in her effort to provide a diplomatic answer, the student followed up with something more rhetorical.

“The leader of the school’s Gay and Lesbian committee is an engineer. Our representative to the student council is from India. Three of the five students who are on full scholarships are second generation Chinese, and even my friends with paler complexions, who you believe lack diversity, are here on Visas from countries like Australia, Russia, Israel and eastern Europe. So how can you possibly stand there and tell me we are not diverse?” The student was mad, and he had every right to be.

The engineering faculty was indeed diverse in a variety of beautiful, even inspirational, ways. Gender parity was certainly lacking, and I can think of a few minority groups that were under-represented, but for someone to stand in front of that group of students and tell them they weren’t diverse was an undeserved and unmitigated insult.

Confronting intellectual dishonesty

Even twenty-five years later, that exchange still resonates with me. Not just because it was so enjoyable to see a social justice warrior be so thoroughly destroyed intellectually, but because the student wasn’t wrong. He had every right to stand up and object to the insults and the derision that were constantly being thrown at the faculty to which he was proud to be a part.

With a history of participating in medium-term consulting engagements, I can say that I have worked on an admirable number of projects in a wide array of cities. I can’t remember any engagement in which the project room looked like a scene out of the 1960s-set TV series Mad Men, where every programmer was a white male, and every developer was a product of a privileged background. In fact, I was on a Toronto-based project a number of years ago where my nickname on a team of over thirty individuals was ‘the white guy.’

I’m proud of all of those projects I’ve worked on over the years, and I’ve made friends with people who come from a more diverse set of backgrounds than I could possibly have ever imagined. And the friends I’ve made include a number of incredible female programmers, although I will admit that all of those project teams on which I worked lacked in terms of gender parity. But it would be an insult to me and to everyone I’ve worked with to tell me that the teams I’ve worked on weren’t made up of a diverse set of people, because they were. I have seen great diversity in the workforce. I have not seen great gender parity. There is a difference.

There is certainly an issue in the technology field in terms of an under-representation of both women and certain visible minorities. But gender and ethnic parity is not the same thing as workplace diversity. Arguing that they are is disingenuous, and perpetuating this type of insulting intellectual dishonesty will do more to hinder the goal of achieving balanced gender and ethnic representation in the workplace than it ever will to enhance it.


August 8, 2017  7:15 PM

Big data is helping in wildlife conservation

shwati12

Objective

Big data is booming these days, and it is helping in every field. Let us look at a few projects in wildlife conservation that have used big data and machine learning as their key components.

Big Data in Wildlife Conservation

The projects discussed below show how big data is aiding wildlife conservation.

The Great Elephant Census

In Africa alone, more than 12,000 elephants have been killed each year since 2006, and if this continues, the day is not far off when there will be no elephants left on the planet. Protecting the ecosystem is vital not only to wildlife but also to the communities around it, and big data is helping with exactly that. In 2014, a survey called the Great Elephant Census was launched by Microsoft co-founder Paul Allen to achieve a greater understanding of elephant numbers in Africa. Ninety researchers traversed over 285,000 miles of the African continent, across 21 countries, to conduct the research.

One of the largest raw data sets ever was created in this survey. The survey showed that the African elephant population is down to just 352,271 across 18 countries, a 30% decline in seven years. This highlighted the need for ongoing monitoring to ensure better response times in emergency situations. Big data is having a huge impact on the conservation efforts that will help protect the elephant population of Africa.

eBird

This project was launched in 2002. It is an app that helps users record bird sightings as they happen and input that data into the app. The app was created to help build usable big data sets that could be of value to both professional and recreational bird watchers. These data sets are shared with teachers, land managers, ornithologists, biologists and conservation workers, who have used the data to create BirdCast, a regional migration forecast giving real-time predictions of bird migration for the first time ever. BirdCast uses machine learning to predict the migration and roosting patterns of different species of birds, providing more accurate intelligence for land planning and management and allowing necessary preparations in areas prone to roosting bird gatherings.


