Coffee Talk: Java, News, Stories and Opinions


November 13, 2017  1:39 PM

Five features to make Java even better

yegor256 Profile: yegor256
Uncategorized

I stumbled upon this proposal by Brian Goetz for data classes in Java, and immediately realized that I too have a few ideas about how to make Java better as a language. I actually have many of them, but this is a short list of the five most important.

November 13, 2017  2:27 AM

Shortcomings of DevOps cause security bug detection to suffer

cameronmcnz Cameron McKenzie Profile: cameronmcnz

Earlier this year we spoke with Jim Manico of Manicode Security. It was immediately prior to Oracle OpenWorld 2017, where Manico was delivering a JavaOne session on Java SE 9 security.

There are plenty of new tools and technologies in the latest version of the JDK to help minimize the number of Java security bugs that developers might encounter. Of course, it’s not enough just to have technologies like JEP-273 (DRBG-Based SecureRandom Implementations), JEP-290 (Filtering of Incoming Serialization Data) and the new unlimited-strength JCE (Java Cryptography Extension) in the Java 9 specification. What matters in terms of minimizing the number of Java security bugs that get into production is having developers who know what these various security controls do and how to use them.
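As a concrete illustration (this is an editorial sketch, not code from the interview), here is roughly what two of those controls look like in practice: a JEP-290 serialization filter that whitelists the classes an application expects to deserialize, and a JEP-273 DRBG-based SecureRandom requested by name. The com.example.Invoice class and the filter pattern are hypothetical placeholders.

import java.io.ByteArrayInputStream;
import java.io.ObjectInputFilter;
import java.io.ObjectInputStream;
import java.security.SecureRandom;

public class Java9SecurityControls {

    // JEP-290: accept only the classes we expect, cap the object-graph depth,
    // and reject everything else with "!*".
    private static final ObjectInputFilter FILTER =
            ObjectInputFilter.Config.createFilter("maxdepth=5;com.example.Invoice;java.util.*;!*");

    public static Object readTrusted(byte[] bytes) throws Exception {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            in.setObjectInputFilter(FILTER); // the filter is consulted before objects are instantiated
            return in.readObject();
        }
    }

    // JEP-273: request one of the new DRBG-based SecureRandom implementations by name.
    public static SecureRandom drbg() throws Exception {
        return SecureRandom.getInstance("DRBG");
    }
}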

Talking to Jim Manico about Security

The following is a transcript of the interview between TheServerSide’s Cameron McKenzie and Mr. Manico in which a variety of Java security topics are addressed, including how Java modularity will impact how security bugs are addressed, the shortcomings of DevOps automation tools when Java security bugs arise, and of course, insights on various Java security controls that are new in Java SE 9.



Cameron McKenzie: When it comes to enterprise Java security, what are the things that concern you and what are the things that should concern people who are doing enterprise software development with Java?

Jim Manico: The things that really concern me are the risks against Java and Java is being attacked. To a lot of developers, to be honest with you, this is esoteric stuff. So it’s not necessarily the most exciting or sexy feature of Java, but I daresay it’s the necessary stuff. Like for example, there’s a new JSR that addresses serialization and tries to allow for direct white list filtering of exact classes at multiple tiers within your Java application. This is not exciting stuff, right? But it’s necessary to have a secure Java application or at least it gives you a way to address known Java risks. But, you know, it’s not sexy; But it’s important.

Avoiding Java security bugs

Cameron McKenzie: Quite often, the enterprise Java developer just focuses on fulfilling the business requirements and doesn’t really concern themselves with the security implications of the code that they’re writing. For some of the code the developer would write on an everyday kind of basis, what are some of the security concerns they should be taking into account in order to keep Java security bugs from finding their way into the code?

Jim Manico: Sometimes the use of numerics for financial processing is a lot more tricky than people would expect. Now other problems arise when you’re using older-school Java technologies and you wanna provide certain kinds of controls, like escaping to stop cross-site scripting, that are not part of the core language. There’s some parts of it in J2EE, there’s some frameworks that provide it, but it’s not in the core of the language. And it’s a control that I think should be made more readily available to the developer.
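To make that point concrete (an editorial sketch, not code from the interview), the example below shows the two kinds of controls Manico mentions: exact decimal arithmetic with BigDecimal from the core library, and HTML output escaping using the OWASP Java Encoder as one of the frameworks that provide it. The method names and the userSuppliedComment parameter are illustrative only.

import java.math.BigDecimal;
import java.math.RoundingMode;

import org.owasp.encoder.Encode; // one framework-provided escaping control

public class EverydayControls {

    // Financial math: binary doubles cannot represent most decimal fractions exactly,
    // so monetary values belong in BigDecimal with an explicit scale and rounding mode.
    public static BigDecimal priceWithTax(String price, String taxRate) {
        BigDecimal base = new BigDecimal(price);
        BigDecimal tax = base.multiply(new BigDecimal(taxRate));
        return base.add(tax).setScale(2, RoundingMode.HALF_EVEN);
    }

    // Output escaping: encode untrusted input before writing it into HTML to stop
    // cross-site scripting, since the core language has no built-in control for this.
    public static String renderComment(String userSuppliedComment) {
        return Encode.forHtml(userSuppliedComment);
    }
}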

There’s also the issue of all the cryptographic APIs. I think some of the best cryptographic APIs are not in the core of Java. They’re in different places like Google projects. Google Tink is a new project. It’s a cryptographic API that makes Java’s world, the Java developer’s world of interacting with low-level cryptographic APIs easier. I’d love to see more of those kind of APIs closer to the core. Let me rephrase that. The closer these APIs are to the core language, the more likely we’re gonna get use from the developers, right?
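For readers who have not seen Tink, the sketch below shows the style of high-level API Manico is describing: an AEAD (authenticated encryption with associated data) primitive that hides cipher modes, IVs and padding decisions. The exact class and method names have shifted between Tink releases, so treat this as an approximation of the pattern from the project’s documentation rather than a definitive example.

import com.google.crypto.tink.Aead;
import com.google.crypto.tink.KeysetHandle;
import com.google.crypto.tink.aead.AeadConfig;
import com.google.crypto.tink.aead.AeadKeyTemplates;

public class TinkAeadSketch {

    public static byte[] encrypt(byte[] plaintext, byte[] associatedData) throws Exception {
        AeadConfig.register(); // register the AEAD primitives once at application startup

        // In a real system the keyset would come from a key-management service
        // rather than being generated on the fly.
        KeysetHandle keysetHandle = KeysetHandle.generateNew(AeadKeyTemplates.AES128_GCM);

        // The high-level primitive hides cipher modes, IVs and padding choices.
        Aead aead = keysetHandle.getPrimitive(Aead.class);
        return aead.encrypt(plaintext, associatedData);
    }
}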

We’re all Java security engineers

Cameron McKenzie: Now I’m paraphrasing you a bit here, so correct me if I’m wrong. But in one of the previous sessions of yours that I attended, I remember you talking about how you felt that software developers and DevOps people should actually consider themselves security engineers nowadays. What exactly did you mean by that?

Jim Manico: You know, the world’s changing. So I challenge that most developers, whether they believe it or not, whether they think it or not, or whether they even do it or not, they’re security engineers now. Because the code that they’re writing is on the front-lines of protecting organizations from data loss, financial damage, reputation damage, privacy violations, compliance regulations and fines. So these developers and the code they write are on the front-line of all those issues and more. And so they’re security engineers. So it’s a matter of if they’re gonna do it or not. And if you beat a developer over the head with pen tests and have some security teams run tools, you know, especially early in the maturity, training the developers to attack their code really will help change things.

Later on in the maturity, developers are part of conversations at the early part of design, with existing security libraries and knowledge and controls in place, like rigorous authentication and access control services and advanced cryptographic services available to developers. And these are controls at very early stages of designing software; that’s the ideal, right? But at first, if you’re at the early stage of maturity and maybe you haven’t done application security in your organization ever before, then I think a lot of assessment just to see initially where developers are, show them exploits against their software, that’s usually a good way early on to help shake up a culture. But you know, we wanna do this way more proactively, as we get better at writing secure software.

The limits of DevOps based security

Cameron McKenzie: Does DevOps change the software security game? How do you feel about DevSecOps?

Jim Manico: It’s a piece of the puzzle.

I think it’s a nice word but the concept is that we’re automating all aspects of the software development life cycle. Like we’re automating the build process, we’ve done that a lot of times. We’re integrating the security tests into the build life cycle at different phases of building and deploying software. We’re automating. What else are we doing? We’re automating dynamic tests, we’re automating code scan tests, we’re automating deployment of software. And as we deploy software, we run this huge batch of tests, automated unit tests and dynamic tests and static tests against our code looking for security bugs. Maybe we find security bugs, we’re gonna stop the build and not allow that code to be deployed. Maybe it’s just a warning and we ship it anyways. There are many gradients in how we do that automation.

DevOps also talks about automating, you know, different dashboards and alerts, so people who are monitoring the application in real time get better intel on security when it’s happening. These are things we’ve done for a long time in software. I think DevOps is putting a little more rigor around it, putting a nice name in front of it, and trying to, at least in my world, add as much security automation to it as possible.

Now the other side to that is there are some elements of application security that don’t translate really well to automation. Like, especially if like, if you wanna look at a turnkey tool, it’s not always great at finding access control problems, or business logic problems, or deeper problems that maybe a pen test would find, that a turnkey tool, especially a tool that’s tuned to work fast in a DevOps environment, you know, maybe a pentester will find that manually where automation may miss the problem. The dark side of DevOps is that we still need people. We can’t just automate everything. We still need people involved at some level of a deep review, I think, to really provide deeper security assurance, if that’s what you want.

Cameron McKenzie: Now tell me honestly, is the deprecation of the Java Web Browser Plug-in the greatest thing to happen to security analysts?

Jim Manico: As a programmer, I’m like, “whatever,” but as an infrastructure person who’s trying to manage a fleet of PCs or Macs or whatnot to keep an organization secure, not supporting that is usually a good thing, right? I don’t wanna trash Java but Java in the client traditionally has not been a great thing. A lot of policies are to heavily limit how Java runs on the client. And so this is one kick in that direction, which is nice. I’m not saying Java-client is bad, it’s just…it’s good for administrators that can do as much as possible to control clients’ JVMs as much as they can. And having rogue applets from any website just running the browser and similar technology just is not an ideal situation, right? So we wanna restrict that and manage that as best as we can.

Software security tooling

Cameron McKenzie: Now when you’re working as a security consultant and you go into an organization, what are some of the tools and the governance models and the policies that you like to see already in place as soon as you get in there?

Jim Manico: So it’s like a good authentication service is usually a good idea, right? So for a developer, especially as your organization matures, it’s really…rather than having every developer recode in some way parts of an authentication service, having or using a rigorous one that all developers can use in a standard way makes that front gate of your application easier to lock down. And the second layer I look at is, of course, access control. Having a good access control service and series of methodologies and database driven rules and configuration capability, all that good stuff for really good horizontal, detailed, permission-level access control in a way that all developers of your team can leverage, that’s a good idea.

You know, we also wanna be able to build user interfaces in a secured fashion. Whatever framework that we’re using, we wanna understand how to keep that locked down via scripting. And we also wanna make sure developers understand, you know, what’s proper data flow through a client, what we should and should not be storing on a client long term. What kind of logic should and should not run on a client? What stuff should we push more server side? What shouldn’t we even do on the client? You know, how are we relating data access and user interface functionality access with the same consistent set of access control rules on both the client and the server?

You know, all these things are things that developers do day in and day out that they really need to understand how to use and get right. And if we don’t have these tools available, if we’re like, “Yo, developer, you know, welcome to the team, go figure out access control now,” you know, good luck with that. You know, that’s so critical of an area of your software, it’s not something you can just…it’s not something you…it’s something you can but I daresay it’s something you shouldn’t just casually put into your application. It should be extremely intentional.

Cameron McKenzie: And, you know, there are a whole bunch of hot topics rising in the Java ecosystem. There’s lots of talk about containers and microservices and coding. But security is important. And if you wanna learn about security, there’s no better speaker to learn from than Jim Manico.


You can follow Jim Manico on Twitter: @manicode
You can follow Cameron McKenzie on Twitter: @cameronmcnz

 


November 7, 2017  1:18 AM

From monoliths to cloud native composition with Apprenda’s Sinclair Schuller

cameronmcnz Cameron McKenzie Profile: cameronmcnz

In our series on cloud native computing, TheServerSide spoke with a number of experts in the field, including a number of members of the Cloud Native Computing Foundation. The following is the transcription of the interview between Cameron McKenzie and Apprenda’s Sinclair Schuller.

 


An interview with Apprenda’s Sinclair Schuller


How do you define cloud native computing?

Cameron McKenzie: In TheServerSide’s quest to find out more about how traditional enterprise Java development fits in with this new world of cloud native computing that uses microservices, Docker containers and Kubernetes, we tracked down Sinclair Schuller. Sinclair Schuller’s the CEO of Apprenda. He’s also a Kubernetes advocate and he also sits on the governing board of the Cloud Native Computing Foundation.

So the first thing we wanted to know from Sinclair was, well, how do you define cloud native computing?

Sinclair Schuller: Great question. I guess I’ll give you an architecturally-rooted definition. To me, a cloud native application is an application that has an either implicit or explicit capability to exercise elastic resources that it lives on and that has a level of portability that makes it easy to run in various scenarios, whether it be on cloud A or cloud B or cloud C. But in each of those instances, its ability to understand and/or exercise the underlying resources in an elastic way is probably the most fundamental definition I would use. Which is different than traditional non-cloud native applications that might be deployed on a server. Those have no idea that the resources below them were replaceable or elastic, so they could never take advantage of those sorts of infrastructure properties. And that’s what makes them inherently not scalable, inherently not elastic and so on.

Cameron McKenzie: Now, there’s a lot of talk about how the application server is dead, but whenever I look at these environments that people are creating to manage containers and microservices, they’re bringing in all of these tools together to do things like monitoring and logging and orchestration. And eventually, all this is going to lead to some sort of dashboard that gives people a view into what’s happening in their cloud native architecture. I mean, is the application server really dead or is this simply going to redefine what the application server is?

Sinclair Schuller: I think you’re actually spot-on. I think this whole “application server is dead” thing is, unfortunately, a consequence of cheesy marketing where new vendors try to reposition old vendors. And it’s fair, right? That’s just how these things go, but not even the old app server is dead. In many cases people will still deploy their apps to something like Tomcat sitting in a container. So that happens, so that hasn’t gone away per se, even if you’re building new microservices.

Has the traditional heavyweight app server gone away? Yes. So I think that has fallen out of favor. Take the older products like a WebLogic or something to that effect, you don’t see them used in new cloud native development anymore. But you’re right, what’s going to happen, and we’re seeing this for sure, is there’s a collection of fairly loosely coupled tooling that’s starting to surround the new model and it’s looking a lot like an app server in that regard.

This is a spot where I would disagree with you: I don’t know if there’s going to be consolidation. But certainly, if there is, then what you end up with is a vendor that has all the tools to effectively deliver now a cloud native app server. If there isn’t consolidation and this tooling stays fairly loosely coupled and fragmented, then we have a slightly better outcome than the traditional app server model. We have the ability to best of breed piecemeal tooling the way we need it.

Cloud native computing and UI development

Cameron McKenzie: One of the things that a lot of developers and architects are asking is, “What is the role of the UI in a world of cloud native computing?” Is it simply to be developed by a front-end JavaScript framework like Angular UI? Is it actually embedded inside of the application that gets deployed to the container? What’s actually going on in the UI development space in terms of cloud native development?

Sinclair Schuller: I think to a degree, the UI battle has been won, so to speak, right? It seems like the market’s settled on JavaScript and libraries like Angular to build out effectively frontends for these backend services. I guess there isn’t too much tumultuousness or controversy anymore on that side, which is why you don’t hear about it too much. And the backend side of things lagged a bit and now containers came in and that’s being revolutionized. So I think of what’s happening with backend services and containers to be very similar to what happened with JavaScript, say, 10 years ago when it really started to take hold. So I think the role that JavaScript and HTML5 play is already pretty well defined and we’re not going to see a ton of change there.

Cameron McKenzie: I can develop a “hello world” application. I can write it as a microservice. I can package it in a Docker container and I can deploy it to some sort of hosting environment. What I can’t do is I can’t go into the data center of an airline manufacturer and break down their monolith, turn it into microservices, figure out which ones should be coarse grain microservices and figure out which ones should be fine grain microservices. I’ve no idea how many microservices I should end up with and once I’ve done all that, I wouldn’t know how to orchestrate all of that live in production.

Apprenda’s role in advancing cloud native computing

How do you do that? How do organizations take this cloud native architecture and this cloud native infrastructure and scale?

Sinclair Schuller: Yeah, so you actually just described our business. Effectively, what we notice in the market is that if you take the world’s largest companies that have been built around these monolithic applications, they have the challenge of “how do we decompose them, how do we move our message forward into a modern era? And if we can, of course, some applications don’t need that, how do we at least run them in a better way in a cloud environment so that we can get additional efficiencies, right?”

So what we focused on is providing a platform for cloud native and also ensuring that it provides, for lack of a better term, era-bridging capabilities. Now, in Apprenda, the way we built it, our IP will allow you to run a monolith on the platform and we’ll actually instrument changes into the app and have it behave differently so that it can do well on a cloud-based infrastructure environment, giving that monolith some cloud architecture elements, if you will.

Now, why is that important? If you can do that and you can quickly welcome a bunch of these applications onto a cloud native platform like this and remove that abrupt requirement that you have to change the monolith and decompose it quickly into who knows how many microservices, to your point, it affords a little bit of slack so the development teams in those enterprises can be more thoughtful about that decision. And they can choose one part of the monolith, cleave it off, leverage that as an actual pure microservice, and still have that running on the same platform and working with the now remaining portion of the monolith.

By doing that, we actually encourage enterprises to accelerate the adoption of a cloud native architecture since it’s not such an abrupt required decision and such an abrupt change to the architecture itself. So for us, we’re pretty passionate about that step. And then second, there is the question of how do I manage tons of these all at once? The goal of a good cloud abstraction like Apprenda, in my opinion, a good platform that’s based on Kubernetes, is to make managing 10, 1,000 or N number of microservices feel as easy as running 1 or 2.

And if you can do that, if you can turn running 1 or 2 to kind of the M.O. for running just tons of microservices, you remove that cost from the equation and make it easier for enterprise to digest this whole problem. So we really put a lot of emphasis on those two things. How do we provide that bridging capability so that you don’t have to have such an abrupt transition and you can do so on a timeline that’s more comfortable to you and that fits what you need and also deal with the scale problem?

Ultimately the only way that scale problem does get solved, however, is that a cloud platform has to truly abstract the underlying infrastructure resources and act as a broker between the microservices tier and those infrastructure resources. If it can do that, then scaling actually becomes a bit trivial as a consequence of a properly architected platform.

Cameron McKenzie: Now, you talked about abstractions. Can you speak to the technical aspects of abstracting that layer out?

Sinclair Schuller: Yeah, absolutely. So there are a couple of things. One is if you’re going to abstract resources, you typically need to find some layer that lets you project a new standard or a new type of resource profile off the stack. Now, what do I mean by that? Let’s look at containers as an example.

Typically I would have the rigid structure of my infrastructure like this VM or this machine has this much memory. It has one OS instance. It has this specific networking layout and now if I want to actually abstract it, I need to come up with a model that can sit on top of that and give you that app with some sort of meaningful capacity and divvy it up in a way that is no longer specifically tied to that piece of infrastructure. So containers take care of that and we all understand that now.

But let’s look at subsystems. Let’s say that I’m building an application and we’ll start with actually a really trivial one. I have something like logging, right? My application logs data to disk, usually dumping something into a file someplace. If I have an existing app that’s already doing that, I’m probably writing a bunch of log information to a single file. And if I have 50 copies of my application across 50 infrastructure instances, I now have 50 log files sitting around in the infrastructure who knows where. And as a developer, if I wanted to debug my app, I have to go find all of that.

With Apprenda, what we focus on is doing things like, “Well, how can we abstract that specific subsystem? How can we intervene in the logging process in a way that actually allows us to capture log information and route it to something like a decentralized store that can aggregate logs and let you parse through them later?” So for us, whenever we think about doing this as a practical matter, it’s identifying the subsystems in an app architecture like logging, like compute consumption, which the containers take care of, like identity management, and actually intercepting and enhancing those capabilities so that they can be dealt with in a more granular way and in a more affordable way.

Preparing for cloud computing failures

Cameron McKenzie: Earlier this year we saw the Chernobyl-esque downfall of the Amazon S3 cloud and it had the ability to pretty much take out the internet. What type of advice do you give to clients to ensure that if their cloud provider goes down that their applications don’t go out completely?

Sinclair Schuller: I think the first part of that is culturally understanding what cloud actually is and making sure that all the staff and the architects know what it is. In many cases, we think of cloud as some sort of, like, literally nebulous and decentralized thing. Cloud is actually a very centralized thing, right? We centralize resources among a few big providers like Amazon.

Now, what does that mean? Well, if you’re depending on one provider, like Amazon, for storage through something like S3, you can imagine that something could happen to that company or to that infrastructure that would render that capability unavailable, right? Instead what I think happens is that culturally people have started to believe that, you know, cloud is foolproof, it has 5-9s and 9-9s, pick whatever SLA you want, and they rely on that as their exclusive means for guaranteeing availability.

So I think number one, it’s just encouraging a culture that understands that when you’re building on a cloud, you are building on some sort of centralized capacity, some centralized capability and that it still can fail. As soon as you bring that into the light and people understand that, then the next question is, “How do I get around that?” And we’ve done this in computing many, many times. To get around that, you have to come up with architecture patterns that can properly deal with things like segregation of data and fails, right?

So could I build an application architecture that maybe uses S3 and potentially stripes data across something like Azure or multiple regions in S3? If you have a mentality that something like S3 can fail, you suddenly push that concern into the app architecture itself and it does require that a developer starts to think in a way like that where they say, “Yeah, I’m going to start striping data across multiple providers or multiple regions.” And that gets rid of these sorts of situations.

I think part of the reason that we saw the S3 failure happen and affect so many different properties is that people weren’t thinking that way and they saw the number of 9s in the SLA and said, “Oh, I’ll be fine,” but it’s just not the case. So you have to take that into consideration in the app architecture itself.

Cameron McKenzie: Apprenda is both a leader and an advocate in the world of Kubernetes. What is the role that Kubernetes currently plays in the world of orchestrating Docker containers and cloud native architectures?

Sinclair Schuller: So when we look at kind of the world around containers, a couple of things became very clear. Configuration, scheduling of containers, making sure that we can do container placement, these all became important things that people cared about, and certain projects, like Docker Swarm and Kubernetes, evolved to address them in a common way.

So when we look at something like Kubernetes as part of our architecture, the goal was let’s make sure that we’re picking a project and working against a project that we believe has the best foundational primitives for things like scheduling, for things like orchestration. And if we can do that, then we can look at the set of concerns that surround that and move up the application stack to provide additional value.

Now in our case, by adopting Kubernetes as the core scheduler for all the cloud native workloads, we then looked at that and said, “Well, what’s our mission as a company?” To bring cloud into the enterprise, right? Or to the enterprise. And what’s the gap between Kubernetes and that mission? The gap is dealing with existing applications, dealing with things like Windows because that wasn’t something that was native to Kubernetes. And we said, “Could we build IP or attach IP around Kubernetes that can solve those key concerns that exist in the world’s biggest companies as they move to cloud?”

So for us, we went down a couple very specific paths. One, we took our Windows expertise and built Windows container and Windows node support in Kubernetes and contributed that back to the community, something I would like to see getting into production sometime soon. Number two, we surrounded Kubernetes with a bunch of our IP that focuses on dealing with the monolith problem and decomposing monoliths into microservices and having them run on a cloud native platform. So for us, it was extending the Kubernetes vision beyond container orchestration, container scheduling, and placement and tackling those very specific architectural challenges across platform support and the ability to run and support existing applications side by side with cloud native.

Cameron McKenzie: To hear more about Sinclair’s take on the current state of cloud native computing, you can follow him on Twitter @sschuller. You can also follow Apprenda and if you’re looking to find out more information on cloud native computing, you can always go over to the Cloud Native Computing Foundation’s website, cncf.io.

You can follow Cameron McKenzie on Twitter: @cameronmcnz


November 6, 2017  11:30 PM

Feature highlights from the ZK 8.5 release

jeanyen Profile: jeanyen
JavaScript

The ZK team has just announced the release of ZK 8.5. The new release takes the core ZK 8 philosophy, “Stay true to your Java roots and effortlessly keep up with front-end innovations,” and continues to push the innovation envelope: a major improvement brings MVVM data binding to the client side, enlivening pure HTML content with minimal effort. The Fragment component, in combination with service workers, allows for caching and managing offline user data and makes it easier to build Progressive Web Apps (PWAs). Other exciting features include 24 freshly baked modern themes, built-in WebSocket support, a splitlayout component, a smooth frozen component and more. Let’s take a look at some of the most interesting new features:

1. Built-in WebSocket

WebSocket is a communication protocol standardized by the IETF as RFC 6455. It provides a full-duplex communication channel over a single TCP connection. Once the WebSocket connection has been established, all subsequent messages are transmitted over the socket rather than as new HTTP requests/responses. It therefore lowers handshake overhead and cuts out a lot of HTTP requests when there are many small updates, compared to AJAX server push. A server can also actively send data to a client without the client requesting it, which makes it more straightforward than Comet. ZK now provides not only a WebSocket-based update engine but also WebSocket-based server push.

To enable WebSockets in ZK 8.5, all you need to do is add the following <listener> to your zk.xml:

<listener>
 <listener-class>org.zkoss.zkmax.au.websocket.WebSocketWebAppInit</listener-class>
</listener>

2. 24 Freshly baked themes

ZK 8.5 comes with a new theme called Iceblue as well as another 23 brand new, modern and elegant themes. To apply the desired theme, simply set the preferred theme in zk.xml:

<library-property>
    <name>org.zkoss.theme.preferred</name>
    <value>breeze</value>
</library-property>

You can also include multiple themes and allow each end user to set his or her preferred theme in your application via cookies, like this:

Themes.setTheme(Executions.getCurrent(), "custom");
Executions.sendRedirect("");

3. New Client-side Data Binding Component: Fragment

Fragment is a special component that turns a static HTML page into a dynamic one. It can bind an HTML snippet to data from a ViewModel using ZK data binding syntax. With this new component, you can create a custom HTML widget that’s not part of the standard ZK components, e.g. a custom layout or custom component, and bind it to data from a ViewModel.

Behind the scenes, Fragment is a data container and renderer. It synchronizes data between itself and the server according to the data binding syntax, and it stores the data from the server as JSON objects on the client side. Inside a Fragment, the specified data binding syntax actually binds these JSON objects, and Fragment renders HTML elements based on them. This also reduces the server’s tracking nodes for data binding, since data is tracked on the client side.


4. Source Maps for WPD Files

In previous versions, ZK merged and compressed JavaScript code into WPD files. Now, with this new feature, if you enable source maps in zk.xml with the properties below:

<client-config>
    <debug-js>true</debug-js>
    <enable-source-map>true</enable-source-map>
</client-config>

You can see separate JavaScript files for each widget, complete with comments.

If your browser supports source maps, you can also see separate JavaScript files for every ZK widget, not just the compressed WPD files. This is very useful when you wish to debug JavaScript code or customize widgets. You can still debug with uncompressed WPD files even if a browser doesn’t support source maps.

If you are interested, you can read more about ZK 8.5 New Features here.


November 3, 2017  2:40 PM

The One Thing That Is Repeatedly Breaking Your CI/CD Workflow

OverOps Profile: OverOps
Uncategorized
Companies and teams want to move fast. This includes frequent releases, constantly updating the product and keeping team members on their toes about new and relevant technology. These needs led to the rise of continuous integration and continuous delivery practices.

The current widespread understanding of the CI/CD cycle adds a lot of automation to the test-build-deploy stages, but it misses out on a critical step in a complete release cycle. In the following post we’ll look at why the CI/CD cycle doesn’t end after deployment, and why it’s important to add automation to your monitoring practices. Let’s check it out.


November 3, 2017  2:37 PM

Why effective DevOps needs maneuverability more than speed

George Lawton Profile: George Lawton
DevOps

The hype around effective DevOps can make it sound like the real value provided by the methodology comes from faster time to deployment. But this misses the real benefit around maneuverability, argued Michael Nygard, an enterprise architect with Cognitect. “We talk a lot about velocity, but not so much about acceleration, which is the ability to move faster and slower as required,” he said.

Enterprises that can speed or slow their pace of development in response to changing conditions are more maneuverable than the competition. The cloud makes infrastructure disposable, and code repositories make code disposable. “Maybe even the teams need to be disposable,” quipped Nygard. This is different than making people disposable, which kills morale.

Effective DevOps means being nimble

Real maneuverability comes from making it easy for teams to break down and start up projects quickly. That’s effective DevOps. The value of the individual comes from the team processes involved in completing and starting projects rather than someone’s role in a particular project. Nygard pointed out that some army units are able to break down and set up a new camp in a few hours, while others can take days. This difference comes from the collaborative experience of navigating thousands of tiny decisions, like how to move the trucks in the right order or where to put the latrines. This means developing a shared understanding around things like version control and build pipelines in the enterprise.

Team members also need to become adept at intuiting the kinds of decisions others are likely to make in response to shifting conditions. A small unit commander in the military has a good idea of how other commanders will make a decision. This is something often lacking in DevOps teams dispersed by function and geography. “Tempo is an emergent property that comes from some characteristics of your organization, and has to be built at every level,” said Nygard.


November 1, 2017  1:24 AM

The right five questions to ask before purchasing CRM software

DianeKeller Profile: DianeKeller
CRM

Shopping for a CRM software system can be daunting. Many platforms come with bells, whistles, add-ons and integrations that you never considered — not to mention a high price tag. Adding complicated, expensive software to your business is not a decision to be made lightly.

However, the time that it takes to implement your CRM is worth it, if you can find the right CRM software to fit your needs.

How can you determine which CRM software system is right for you? Ask these questions, and then take your top CRM candidates for a test drive! Many systems offer a free trial period. The best way to see if a system is the right fit is to put it to work for you. Here’s where to start to make your decision simple.

What do you want to accomplish?

Get your CRM strategy in order before shopping around for a system. Take the time to be clear about what your goals are for capturing your customer relationships. Are you going to use this information for sales, marketing, customer service, or all of the above? What details will you need to consistently report to get the big-picture data you need? Understand the variety of reporting options that come standard with each CRM platform: customer data matters, but that data must drive action. How strong are the reporting capabilities of each CRM?

Reporting is just one piece of the puzzle. Think about what other processes your CRM might need to manage. What tasks do you want to automate? Many CRM systems can automate email alerts for important events, escalate uncompleted issues, and streamline workflows by directing traffic among your teams.

Additionally, consider where your business might grow in the future. Many CRM platforms offer add-ons that you may not need today, but are worth considering as you start to see your company take off. You might need a customer service solution right now, but next quarter you might be ready for some online marketing and social media monitoring. Companies like Hubspot and Zoho have marketing and social media capabilities. Others, like Microsoft, offer project management tools and organizational supplements.

Who will use the system?

What teams will need access to your CRM system? How many accounts will you need? Most CRM platforms, like Salesforce, offer pricing based on the number of users. Factor in things like continuity and mobility: do you have a mobile sales force? Do you have some team members who cover multiple roles?

Some platforms will also allow you to set different features and access levels for different teams. For example, you might make certain reports available to your senior management team, or limit who has access to sales leads. Consider the existing workflows within your organization. If you plan to grow your business rapidly within the next year, make sure you get a system that can accommodate many new accounts (and ensure continuity and consistent service among your team members).

Should it be cloud-based or on-premise?

Of course, cost is a big factor in choosing whether or not your CRM is on-site or cloud-based. An on-premises CRM solution is often less expensive, but keep in mind the maintenance costs will add up. Upgrades, IT maintenance, and support costs might end up making a cloud-based system a better investment. You might also need a new server to keep your on-site system up and running.

Likewise, if you choose a cloud-based CRM solution, you’ll need the network resources to support the product. How much bandwidth will it use? Will your internet speeds be fast enough for a cloud-based system? Save yourself hours of frustration and internet down-time by running some speed tests. As you add accounts, make sure your CRM won’t crash your entire network.

Typically, cloud-based systems come with quicker installation and regular, easily accessible updates and improvements. You’ll also need to factor in data security to your decision.

Does it integrate with your existing systems?

Just because you’re getting ready to shell out some cash on a new system doesn’t mean you should have to replace your existing software. CRM software can integrate with lots of other parts of your business, including POS software, accounting tools, marketing platforms, and more. You shouldn’t have to manually export and import data between platforms — as long as your new CRM is compatible with the apps you already use. Make sure all your systems will coordinate by asking customer support and double-checking with the vendor before making a commitment.

What is your budget?

Finally, the biggest question of all: what are you willing to spend on a CRM platform? There is quite a range in what a CRM might cost, from freemium offerings to price tags in the millions for enterprise-sized corporations. Mostly, you can expect to pay on a per-user, per-month basis, though some vendors charge a flat monthly fee for a set number of users.

Factor in how many people are going to use your platform, as well as how much customization is required. More customization and more users usually lead to a higher price point and higher maintenance costs.

Realistically, a CRM system is a great investment. The ability to capture customer interactions and valuable sales leads: priceless.


October 27, 2017  1:17 AM

What’s new in Enterprise Java News? Some JSF. Some Java EE. Some Liferay. Stuff like that.

kito99 Profile: kito99
Uncategorized
In this episode, Kito, Danno, and Ian discuss the Equifax hack (caused by an unpatched version of Struts), news from the Polymer Summit, Oracle’s donation of Java EE to Eclipse, Docker in-depth, and more.
 
Listen to the Podcast here:
 
Kito D. Mann | @kito99 | Author, JSF in Action


October 27, 2017  1:15 AM

Lazy Loading and Caching via Sticky Cactoos Primitives

yegor256 Profile: yegor256
Uncategorized
You obviously know what lazy loading is, right? And you no doubt know about caching. To my knowledge, there is no elegant way in Java to implement either of them. Here is what I found out for myself with the help of Cactoos primitives.
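As a rough sketch of the idea (the full post has the details), a sticky Cactoos scalar combines lazy loading and caching in one decorator: nothing is computed until the first call to value(), and every later call returns the cached result. Class names vary between Cactoos versions (StickyScalar in older releases, later renamed Sticky), and fetchOverHttp below is a hypothetical placeholder for real I/O code.

import org.cactoos.Scalar;
import org.cactoos.scalar.StickyScalar;

public class PageCache {

    // The lambda does not run here; StickyScalar defers the work until the first
    // call to value() (lazy loading) and then remembers the result (caching).
    private final Scalar<String> page = new StickyScalar<>(
        () -> fetchOverHttp("https://www.example.com")
    );

    public String html() throws Exception {
        return this.page.value(); // first call loads, later calls return the cached copy
    }

    private static String fetchOverHttp(String url) {
        return "<html>...</html>"; // stand-in for real HTTP code
    }
}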

Continue…


October 27, 2017  12:13 AM

Freenode ##java’s interesting content podcast

JosephOttinger Profile: JosephOttinger
Uncategorized

The ##java channel on Freenode IRC has been collecting interesting content for years, including content from here on TSS on occasion. Now, it’s being collected into a weekly podcast, at http://javachannel.org/ . The RSS feed for the podcast itself is http://javachannel.org/feed/podcast/ – check it out!


