Coffee Talk: Java, News, Stories and Opinions


April 28, 2018  3:33 AM

Thwart the threat by abiding by network security fundamentals

Daisy.McCarty Profile: Daisy.McCarty

Cloud, mobile, and IoT have changed the face of the modern network, so it’s no surprise that network security fundamentals have become important for businesses of all sizes. It seems even the largest organizations are just one mistake away from a massive data breach or other system failure, much of which could be avoided if more attention were paid to network security fundamentals.

#1 Prioritize full network visibility

According to Ryan Hadley, COO of Signal Forest, today’s IT leaders know they need to be aware of everything in their network. But this isn’t always easy to do. “CIOs who want visibility on all wired clients have to look at all of the switching infrastructure and what switches are capable of in terms of higher end security like 802.1X, etc.,” he said. “They need to be able to tell what types of devices are in use, where they are connecting to, what kind of access has been given to them by default, and if all the network access controls fail, whether the system is failing open or closed.”

One of the network security fundamentals on the wireless side is to maintain full transparency about each connected device and to provide special pages where each user can register their own device for specific network access. These are established best practices. The fact that the “edge” will disappear and the core network will expand outward doesn’t mean location no longer matters. In Hadley’s view, the knowledge of where a device is located doesn’t just determine what type of access is allowed. Location data can also be used to identify potential intrusions. For example, if a worker is logged in from the on-site network and then signs in from a smartphone fifty miles away, that’s a red flag.
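
To make Hadley’s example concrete, here is a minimal Java sketch of an “impossible travel” check of the kind a network access control system might run. Everything in it is hypothetical: the class names, the 120 km/h plausibility threshold and the coordinates are invented for illustration, and a real product would work from its own telemetry and policy engine.

    import java.time.Duration;
    import java.time.Instant;

    // Hypothetical illustration of an "impossible travel" check: flag a user who
    // appears to sign in from two locations farther apart than they could have traveled.
    public class ImpossibleTravelCheck {

        // A single sign-in event: who, when, and where (latitude/longitude in degrees).
        record LoginEvent(String user, Instant time, double lat, double lon) {}

        private static final double MAX_PLAUSIBLE_SPEED_KMH = 120.0; // assumed policy threshold

        // Returns true when two logins imply travel faster than the policy allows.
        static boolean isSuspicious(LoginEvent first, LoginEvent second) {
            double km = haversineKm(first.lat(), first.lon(), second.lat(), second.lon());
            double hours = Duration.between(first.time(), second.time()).toSeconds() / 3600.0;
            if (hours <= 0) {
                return km > 1.0; // simultaneous logins from different places are a red flag
            }
            return (km / hours) > MAX_PLAUSIBLE_SPEED_KMH;
        }

        // Great-circle distance between two points on Earth, in kilometers.
        static double haversineKm(double lat1, double lon1, double lat2, double lon2) {
            double r = 6371.0;
            double dLat = Math.toRadians(lat2 - lat1);
            double dLon = Math.toRadians(lon2 - lon1);
            double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                     + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                     * Math.sin(dLon / 2) * Math.sin(dLon / 2);
            return 2 * r * Math.asin(Math.sqrt(a));
        }

        public static void main(String[] args) {
            // An on-site login, then a smartphone login roughly fifty miles away ten minutes later.
            LoginEvent office = new LoginEvent("jdoe", Instant.parse("2018-04-28T09:00:00Z"), 32.7767, -96.7970);
            LoginEvent phone  = new LoginEvent("jdoe", Instant.parse("2018-04-28T09:10:00Z"), 33.2148, -97.1331);
            System.out.println("Suspicious? " + isSuspicious(office, phone));
        }
    }

The point is simply that once device location is part of the data the network collects, an anomaly like the fifty-mile smartphone sign-in becomes a straightforward rule to evaluate.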

#2 Do away with PSK

In Hadley’s view, “Companies that still have PSK networks are quickly discovering that they are the ones that are most liable for a breach. Everyone is migrating away from blanket level access to a more role-based security access point where you are a user with certain privileges that are representative of a particular role assigned based on Active Directory credentials, your title in the company, the type of device being used, where you are connecting, etc. They also want to move to 802.1X where there’s a cert that’s pushed out to each of the connecting devices and you’ve got a cert that’s on the server that’s facilitating the e-transaction.”

#3 Network security fundamentals are changing

Many of today’s enterprises are complacent and rely on network security techniques that were helpful in the past but are no longer pertinent today, said Brian Engle, cybersecurity consultant and risk advisor. “They aren’t hunting for their own weaknesses,” he explained. “Instead, they are reliant on things that have perhaps worked in the past such as firewall technology or anti-virus measures. They haven’t considered that most of their business processes are extended into software services in the cloud or places that their defensive measures aren’t currently reaching.”

#4 Never leave the door open for physical intrusion

One of the oldest network security fundamentals is to simply limit direct physical access to the network. An open USB port is like an unlocked car door; it can give data thieves deep access into a network.  “Physical security of devices is paramount to correctly configuring your firewalls,” Hadley said. “If you have a public facing location and you have USBs on those computers, you don’t want those to be active. You want them shut down. Or, at the very least you want to have a policy in place. This might be something at the Microsoft level or having a PC management program running that will lock down that USB port—or at least alert someone that a USB has been put into that slot and determine if it is OK to use.”

#5 Check and double check—then hire someone else to check

Both experts agreed a third party should perform penetration testing.  “Having someone else test what you have in place has a lot of value,” Engle said. “When you’ve built it, you see it through rose-tinted glasses. You may think you can see the holes better because you constructed it. But you want to have someone else looking at it from a different vantage point—especially someone whose sole focus is getting good at breaking into networks. They bring a different skillset and mindset than someone who is building things from a defender’s point of view. Having someone else checking and trying to break the things you built will reveal weaknesses you couldn’t see otherwise.”

#6 Don’t ignore the human factor

Attacks can happen when things are sent to individual users too, Engle pointed out. “When users click, it activates ransomware or something else that could lead to infiltration into the environment or exfiltration of data. Each of those things needs to be detected. Most detection is based outward rather than looking inward at what might be leaving. And a lot of enterprise security programs aren’t built to see the things that evaded what the traditional security measures would catch.”

While education and encouragement can help users avoid attacks, the classroom doesn’t always simulate the pressures that occur in real life when a target is being tempted, distracted, or frightened into clicking on a suspicious link or opening a potentially dangerous attachment. That’s why it is critical for organizations to plan in advance to detect these attacks promptly, respond swiftly, and recover fully.

#7 Understand the purpose and importance of the security environment

Engle spoke to the short-sightedness of approaching security as a challenge that is solved solely with technology. “It’s important to start a conversation about strategy and objectives to reduce risk rather than just building a security stack with various technologies.” Much like DevOps, security is not a set of tools to buy, but a cultural change to implement.

Finally, enterprises should take advantage of the real-life examples hitting the media to inform their own security strategies and ensure that those strategies address all network security fundamentals and best practices. Organizations need to start asking, “What if what happened to Equifax or Facebook happened to us? What then?” While it’s not pleasant to contemplate, it’s certainly the best way to highlight why security should be taken seriously, no matter the apparent cost.​

April 11, 2018  9:14 PM

Borderless blockchain collaboration to change how software is developed

DeanAnderson Profile: DeanAnderson
Gaming

The gaming industry has, despite its roots in technology, so far failed to adopt a collaborative model. But that may be about to change.

Gamers are known as among the most passionate of the world’s digital communities and more games are being released than ever before. This wealth of releases brings a major problem: it is extremely difficult for indie studios and developers to rival the influence of multinational companies and bring attention to new games.

Blockchain technology could take advantage of the popularity of borderless online communities and be the catalyst that transforms this situation. New platforms, such as Gamestatix, will soon provide a level playing field for all developers, no matter how small. With more sophisticated technology, we can expect to see borderless collaboration continue, on a far wider and more advanced scale.

Rewards for co-creation?

When blockchain and cryptocurrencies were in their infancy, there was no feasible way to financially reward gamers en masse for co-creating games. Now that these technologies are established, a model where co-creators are guaranteed financial rewards is possible. Community creators can quickly become third-party developers and gain financial rewards for their work.

Gamers who play and review games could receive cryptocurrency. Developers will, in turn, have access to millions of gamers throughout the entire creation process, and will even be able to license user-generated content and sell and market the games through the platform. All of this could happen with blockchain collaboration. And this new model comes ahead of a key transition in the global labor market.

While it is widely known that AI and automation could render many traditional jobs obsolete, it is also true that new careers will arise. For some, this could mean part-time passions become full-time careers. But blockchain collaboration is different from the current sharing economy, as the livelihoods of players and developers will be able to bypass economic turmoil in their countries of residence.

Accessing world-wide talent

With blockchain-facilitated peer-to-peer cryptocurrency payments and transactions, employers are now able to pay anyone anywhere in the world, and ultimately tap into a global talent pool.

In the UK, according to TIGA’s 2018 business survey, more than two-thirds (68 percent) of video games firms plan to increase their workforce. But 29 percent of developers are concerned about Brexit’s impact on their ability to recruit the right talent. “In order to grow and thrive, the UK video games industry will need to continue to recruit talent on a global level,” said Dr Richard Wilson, TIGA CEO, in a statement.

The ability to call on a dedicated community with a wealth of knowledge and expertise means developers will be able to efficiently generate, curate and promote their content.

Meanwhile, gamers make choices based on what they hear from the communities they trust, rather than on traditional brand placements. This allows developers to take advantage of the multiplier effect: the wider a gamer’s reach, the greater the awareness and interest among newer audience groups.

Deeper community involvement will yield game co-creation and will allow developers to work directly with community moderators and creators to formally co-craft, publish and sell add-on content that resonates with their communities.

The games industry could use blockchain collaboration technology and secure cryptocurrencies to lead the way, recognize talents and encourage real innovation and creativity, all while power and wealth are distributed.

A work revolution?

With blockchain involved in work, everything from recruitment to collaborative labor, finances and contracts can be far more productive, efficient, and democratic.

The video game industry should be optimistic. With a blockchain-facilitated, collaborative global workforce of gamers and developers on the horizon the industry is primed to revolutionize the world of work for its own people. It will influence other sectors as well.


Dean Anderson is the Co-Founder of Gamestatix

Gamestatix is an upcoming social platform for the co-creation of PC games that recognises, encourages and rewards user contribution, whilst giving game developers access to a global pool of talent to efficiently generate, curate and promote their content. The company was founded in 2016 by Dean Anderson, Visar Statovci and Valon Statovci.

Gamestatix is currently preparing to launch an ICO (Initial Coin Offering) – a form of crowdfunding for cryptocurrency and blockchain projects.
www.gamestatix.com


April 10, 2018  8:41 PM

Using Agile for hardware development to deliver products faster

Daisy.McCarty Profile: Daisy.McCarty

When metal and plastic are manipulated instead of ones and zeros, is it possible to pursue an Agile development process? Or is the idea of Agile for hardware development a misnomer?

The fact is, more and more organizations have given waterfall the cold shoulder and turned to scrum-, lean- and kanban-based models as they make Agile for hardware development a reality. A combination of rapid prototyping, modular design, and simulation testing makes Agile for hardware development a real possibility for modern manufacturing in technology.

From Agile software to Agile hardware

With Agile software development, creative processes are the cornerstone of the framework and constant change is the order of the day. It’s expected that alterations will happen continuously. Relatively speaking, there’s a low cost associated with constant change in software since it requires only time, knowledge, and the ability to type some code. However, the physical world is not as amenable to alteration as the digital world. Hardware changes are associated with high costs and significant resistance.

Yet according to Curt Raschke, a Product Development Project Manager and guest lecturer at UT Dallas, Agile for hardware development isn’t really a fresh idea. In fact, Agile had its roots in the manufacturing world to begin with. “These ideas came out of lean and incremental product development,” he said. In that sense, it is no surprise that the concept has come full circle and Agile has appeared in hardware development. It simply requires a fresh look at existing best practices.

For Shreyas Bhat, CEO of FusePLM, a full-cloud product lifecycle management system that uses a cards-based approach to the development process, the question isn’t if Agile for hardware development can be done, but instead, how? “Hardware tends to be self-contained and has too many dependencies,” he said. “How do you split it into deliverable sections? You need to be able to partition the project and come up with milestones for the customer.”

Prototyping and Agile hardware development

Rapid prototyping is one part of the answer to the question of how. “These days, you can build a prototype cheaply and early on to give a customer a feel for the eventual functionality,” Bhat said. “With 3D printing, the cost of prototyping has gone down significantly.” Contract manufacturers have turned prototyping into a commodity in an industry where cost is always a driving factor. Being able to turn to a prototype provider for quick, inexpensive modeling saves both time and money for the actual manufacturer later.

As Bhat explained, the requirements for hardware have to be stable since changing tooling increases costs. Prototyping helps pin down the proper tooling early on. “You have a better idea of what is needed for production. That way, you’re getting the right tooling for your manufacturing line up front rather than having to retool down the road.”

But that doesn’t mean prototypes can solve all problems. Raschke explained one of the prime limitations. “It works for product development from a form factor point of view,” he said. “You can use it to get feedback ahead of time on look and feel. But you can’t use 3D printing alone for stuff with moving parts that require assembly.”

Practical thinking in Agile hardware design

One answer to the issue of Agile for hardware development is the nature of the design itself. The more modular an item is, and the fewer dependencies are involved within any given component, the easier it will be to make changes. Also, choosing the fewest material types possible to get the job done is a key aspect of moving hardware in a more flexible direction. Again, these are well-known practices in the manufacturing sector and lend themselves well to implementation within an Agile framework.

When simulation or other types of iterative hardware evaluation are part of the process, it is a smart bet to design with testing in mind. Test driven development (TDD) can readily be incorporated into hardware as long as the limitations and capabilities of the simulating environment are known. Cross-functional teams are the best fit for developing hardware in this way, with an eye toward validating design functionality through iterative testing.
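
As a rough illustration of what test-driven development against a simulator can look like, here is a short JUnit 5 sketch. The temperature sensor interface, the fan controller and the threshold values are all hypothetical; the point is that control logic can be written and validated against a software stand-in long before the physical parts exist.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical example: the control logic is exercised against a simulated
    // sensor, so the tests can run long before the physical hardware is built.
    class FanControllerTest {

        // The hardware abstraction the real firmware would eventually sit behind.
        interface TemperatureSensor {
            double readCelsius();
        }

        // Simple controller under test: turn the fan on above a threshold.
        static class FanController {
            private final TemperatureSensor sensor;
            private final double thresholdCelsius;

            FanController(TemperatureSensor sensor, double thresholdCelsius) {
                this.sensor = sensor;
                this.thresholdCelsius = thresholdCelsius;
            }

            boolean fanShouldRun() {
                return sensor.readCelsius() > thresholdCelsius;
            }
        }

        @Test
        void fanStaysOffBelowThreshold() {
            FanController controller = new FanController(() -> 35.0, 40.0); // simulated reading
            assertFalse(controller.fanShouldRun());
        }

        @Test
        void fanTurnsOnAboveThreshold() {
            FanController controller = new FanController(() -> 72.5, 40.0); // simulated reading
            assertTrue(controller.fanShouldRun());
        }
    }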

In fact, Raschke revealed that this is one of the prime challenges facing hardware where agility is concerned, particularly in terms of pretesting and system integration. “Design, development, and delivery are not always done by the same team,” he said. “Even if pieces can be developed by the same team, they are different subject matter experts. In a hardware product, there are electrical, mechanical, and software engineers along with firmware specialists, and so on.” These diverse experts would all have to be brought together in order to create a faster, more iterative approach to hardware development. That would be quite a feat for any Scrum Master to pull off.

The future of Agile for hardware development

Startups and fast-growing companies with bright ideas are primed to take on the Agile for hardware development challenge. While it may be risky, there is the chance of a high payoff. In Bhat’s view, “Where there is innovation, there is uncertainty. That’s in the sweet spot for Agile. You can do a lot of ‘what if’ scenarios to try out ideas and quickly release them to the customer.”

However, Agile does have one obvious limitation that Raschke addressed. “Hardware products are developed incrementally, but they can’t be released that way,” he explained. That’s certainly true. Given the high percentage of customers who ignore factory recalls and upgrades on everyday consumer goods, it’s hard to imagine them happy about the constant need to update their hardware, particularly if it involves a trip to a local provider or installation of a kit received in the mail. So, while Agile for hardware development and design may play an increasingly important role in the industry, delivery will remain a fixed point with little room for trial and error.


March 9, 2018  1:38 AM

Acts of discrimination let gender inequality in technology go unresolved

Daisy.McCarty Profile: Daisy.McCarty
Uncategorized

Coming up in tech over the past few decades wasn’t easy. A successful entrepreneur told me a story of how she landed her first tech job as a sales rep for a telecom agency. This was in the early days of deregulation, long before gender inequality in technology was an issue organizations were willing to address. She had a background in sales but knew nothing about the industry. She did well in the interview, but the hiring manager contacted her with bad news. The word from the VP of sales was, “We just hired our first woman and we’re not going to hire another until we see if she works out.”

This vivacious woman wasn’t about to take that as a final answer. She told the hiring manager in no uncertain terms: “I want you to get me an interview with the sales VP.” During that meeting, she talked her way into a job. The VP was reluctant, and warned, “I’m going to give you a chance, but I’m going to be watching you.”

“I hope you do!” she answered.

The VP got to watch as she went on to fight gender inequality in technology and become a top performer. She learned as she went and hopefully made it easier for the next few females who tried to join the sales force.

A woman who is now an executive at a software firm revealed how dreadful it was to work at a different company many years ago as the sole female coder in her department.

“I was the only woman there, and I guess this was before people knew how to deal with us,” she said. The harassment was overtly sexual and very distressing. “As just one example, when they would have a company dinner and start telling off color jokes, my name was always the one inserted as the butt of the joke.”

She took these issues to her boss who simply advised, “Don’t tell your fiancé about it.” After all, she wouldn’t want to upset her husband-to-be with such trivialities when it was all just in good fun. When she couldn’t stand the harassment any longer, she left without even having another job lined up. Sadly, the recent Uber scandal reveals sexism is alive and well in some organizations in the tech industry. While it may be in vogue to be politically correct on the surface, a culture of male entitlement only serves to advance gender inequality in technology, while wreaking havoc in the lives and careers of women in tech.

Perhaps the second most annoying expression of sexism is the dismissal of the opinions of women—even those who have credentials and experience that put them in a position to be exceptionally knowledgeable. A highly educated woman in the analytics field discovered that being heard takes real effort.

“I’m the only female in this role within my organization,” she said. “I have to push a lot to be taken seriously. I communicate in writing with clients. Having a PhD can help with respect and it’s definitely easier to navigate the situation because of having that education level. It shouldn’t be that way, but when gender inequality in technology prevails, it is. I’ve still found I have to get pushy verbally and give evidence that I am right. It’s like trying to swim in mud.”

Women in tech are there to win

An entrepreneur who started and still runs a thriving business found that she faced hurdles in getting venture capital.

“In one meeting I remember the men looking at us with veiled amusement,” she said. “They didn’t come out and say it but the sense we got was that they felt, ‘You have a great idea, but you are women! Come back when you have a male CEO.’ Based on the questions they were asking, the VCs couldn’t wrap their minds around the fact that we had long term goals. They obviously thought we wouldn’t be around long.”

It’s not just men who can make life hard for women in tech. Female leaders can also contribute to gender inequality in technology related fields without even realizing it. One woman told me about getting passed over for one promotion after another because her female bosses simply assumed she wanted to stay home with her children. A failure to take female ambition seriously means that many organizations are missing out on the opportunity to invest in developing and mentoring high potential women.

The unpaid second shift

For some women, the environment doesn’t have to be sexist in any overt way to create an obstacle to advancement. It just has to ignore the responsibilities that fall on women’s shoulders on the home front.

According to an expert speaker on female empowerment in the workplace, “Women in technology are often faced with having to work in an environment where they don’t feel comfortable. Silicon Valley in particular has a startup culture. Working in a startup environment or a new department within a company requires intense, long work hours. A lot of younger women leave. They can’t adequately manage being a wife, mother, and so on.”

Women are still being forced to choose career versus taking care of children and aging parents.

“Even with all the focus on placing women in senior positions, when you see women who are highly successful they are often unmarried or have a stay-at-home husband who relocated to support them in their career,” this expert said.

She pointed out that men are facing hard choices as well in their careers. But women are usually the ones who end up stepping in to take on the unpaid work while their plans for promotion are sacrificed.

Individual encounters can rankle

According to a mid-level manager who manages a sizeable team, it pays to be choosy about where to work. Women really do research a company’s culture before accepting a position. They need to know if they are getting into a bad situation.

“I’ve worked in the military, automotive, and construction arenas,” the manager said. “The tech company where I am now does not tolerate harassment. It has a good reputation, and I made sure of that before taking the job.”

Yet she certainly knows what it is like to run into men who are jarred by the presence of a woman in their midst. They aren’t going to welcome an outsider with open arms.

“Some people don’t think women should be in that role. You can work to make that relationship professional, but it is not going to be cordial.”

There are also women who say they’ve never noticed any harassment or gender-based discrimination in a conventional corporate tech environment. According to one woman who worked in a larger organization for many years, “Either I’m extraordinarily dense, or I wasn’t being discriminated against.” Yet as a consultant who now deals directly with clients, she can easily think of that one person who routinely makes her feel uncomfortable. Not surprisingly, he’s an older gentleman who holds views that can most kindly be called “highly traditional.”

“He has told me to my face that ‘Women shouldn’t be in the workforce’ and ‘The reason women get married is so men can protect them,'” she said.

Even the client’s condescending greeting of “It’s so good to see your pretty face!” rightfully irritates this successful and accomplished business owner. She certainly can’t imagine a male counterpart being treated in a similar way.

Coping with a male-dominated workplace

As I’ve talked with women over the past year about what it takes to make things work as part of a team or as a leader, I have noticed an interesting trend. Women in the Baby Boomer generation are much more likely to say they had to “act like one of the guys” or simply treat gender as irrelevant in order to survive in their careers. They have learned to lead and collaborate in a masculine way to fit in and be seen as an equal.

Younger women in the 30 to 45 age-range are more likely to say they consciously bring female aspects and qualities to the table to help them succeed. Things like emotional intelligence, good communication, multi-tasking, and the ability to collaborate are traits they use to their advantage.

Both strategies certainly have merit and can be used depending on the workplace culture, a woman’s career goals, and her natural aptitudes. It will take persistence, experimentation, and courage to find the right mix for each individual. Here is some additional advice from the women in my tech network for how to make it work at work.

  • On fitting in: “If you are a minority in a group it’s partly your responsibility to fit in. As an example, I’m not a big follower of sports, but I will skim the headlines before going into the CEO staff meeting. We have to see what we can contribute to the conversation.”
  • On work/life balance: “We find that work-life balance is a day-by-day thing. Some days, work gets most of you. Other days, you have to make the decision to put family first.”
  • On earning respect: “You can’t be a shy or retiring person and get your voice heard. You have to speak up and step up. You can’t be uncertain, you need to have knowledge and come across as knowledgeable.”
  • On getting promoted: “Be bold. Let people know what’s important to you and go after it.”

While the fight against gender inequality in technology remains ongoing, most women see a positive outcome as inevitable. It’s just a matter of persistence, patience, and boldness.


February 27, 2018  11:27 PM

Re-introducing Jakarta EE: Eclipse takes Java EE back to its roots

DarrylTaft Profile: DarrylTaft
Uncategorized

The Eclipse Foundation has chosen Jakarta EE as the new name for the technology formerly known as Java EE.

Last September, the Eclipse Foundation announced that Oracle would hand over the Java Platform, Enterprise Edition (Java EE), to Eclipse to foster industrywide collaboration and advance the technology. Since then, Eclipse has established the top-level Eclipse Enterprise for Java (EE4J) project to oversee the future of enterprise Java at the Eclipse Foundation. EE4J is a solid and even semi-cool name for that project, but it was never meant to replace the Java EE brand.

Oracle’s Java trademark

However, the new name for the brand, or for the top-level project for that matter, could not be Java EE, or even start with Java. Oracle’s trademark usage guidelines specify that Oracle is the only entity that can use Java at the beginning of a name, so the phrase “for Java” was a requirement from the start, wrote Mike Milinkovich, executive director of the Eclipse Foundation, in a September 2017 blog post.

After a series of trademark reviews to secure a proper name that passed legal and trademark searches,   Eclipse came up with a few possible names. Jakarta EE won 64.4 percent of the Java community’s vote, while Enterprise Profile captured 35.4 percent.

“I am personally very happy that the community selected Jakarta EE,” Milinkovich said, happy because Jakarta EE is a simple name with potential for future technology, and a nod to the past. “This is just one small step in the overall migration, but the community reaction has been universally positive.”

Alternatives, including Enterprise Profile, were deemed too boring, and the word “enterprise” is viewed by many to mean stodgy and unapproachable to younger developers.

“The Jakarta EE name is a reasonable compromise, considering the objectives of the community and the objections that Oracle has raised,” said Cameron Purdy, CEO of xqiz.it, a stealth mode startup, and former vice president of development at Oracle.

Apache Jakarta project

Other Java thought leaders said the new name, which comes from an old Apache Software Foundation project, might actually be better for enterprise Java.

Jakarta EE is named after Apache Jakarta, which was founded in 1999 and retired in 2011. “Those were roughly the dates when Java EE mattered,” said Rod Johnson, CEO of Atomist and creator of the Spring Framework. “The Java EE brand is tired now, so while a new name will have less recognition, it might also escape some negative associations.”

“It’s also a nudge to the ‘JEE’ shortened form, and taps into Jakarta’s reputation in the Java ecosystem,” said Martijn Verburg, co-founder of the London Java Community and CEO of jClarity, a London-based provider of software that helps developers solve Java performance problems with machine learning.

The renaming of Java EE seems to fit the ongoing saga of Java, said Sacha Labourey, CEO of CloudBees and former CTO of JBoss. That product’s original name was “EJBoss: Enterprise Java Beans Open Source Server,” but after a cease-and-desist letter from Sun Microsystems’ lawyers over the use of ‘EJB,’ JBoss founder and CEO Marc Fleury pragmatically dropped the E.

Reminiscing, Labourey said the Java community has had to “learn how to dance with its trademark and community process [Java Community Process (JCP)], so what else could we expect?” He noted that the JavaPolis conference had to be renamed to Devoxx because of Java trademark issues. “We would have been so disappointed not to have one more chapter to add to the Java trademark book,” he said.


February 21, 2018  6:36 PM

Clear software development governance needed in this polyglot world

cameronmcnz Cameron McKenzie Profile: cameronmcnz

New architectures composed of language-agnostic software containers have made polyglot programming a new reality. But out of this newfound freedom, chaos can ensue if clear software development governance policies do not exist that describe when, where and why certain programming languages can be used, and which ones cannot.

When advocates talk about the various things organizations should do to prepare for the move toward modern middleware servers like Amazon and OpenShift, the increased use of serverless systems and all of the headaches that go along with DevOps adoption, a topic that far too often gets overlooked is software development governance. Before debating technology adoption or how to continuously deploy, organizations should first address how they are going to manage these new software development stacks and deployment targets.

“If you look at all of the Cloud platforms, you know, open shared Cloud Foundry, Amazon, Google, Azure, OpenShift, they’re polyglot,” said Red Hat Product Manager Rich Sharples. “They support multiple languages.” And that’s great for organizations that want to put together a tapestry of best-of-breed applications where each one is built with the language that best suits the application’s purpose. But best-of-breed environments can be incredibly difficult to manage over the long term, especially when an application written in a language that has fallen out of favor needs to be fixed, enhanced or just generally maintained.

Flexible software development governance

How can software development governance address the problems a polyglot programming shop might encounter? One way is to simply make the organization monoglot and settle on a highly proven and capable programming language like Java. “Java is clearly the best language for running large, complicated, transactional applications in the traditional, long-lived server model,” said Sharples. “It is the active standard for enterprises building those kind of applications.”

But an inflexible and rigid governance model is exactly what caused the colonies to revolt and create a union of states. While settling on Java as a fundamental standard is a good start, a software development governance model should also allow for alternate languages to be used when certain well-defined conditions are met. For example, an organization might specify Scala as the language of choice when an application has an exceptional requirement for parallelism or list processing speed. Similarly, Kotlin might be agreed upon for the development of mobile applications that require support for the latest Java version.
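
What might such a policy look like if it were captured in code rather than buried in a wiki? The Java sketch below is purely illustrative; the conditions and language choices are assumptions used to show the shape of a default-plus-exceptions model, not a recommendation.

    import java.util.Map;
    import java.util.Optional;

    // Illustrative only: a governance policy expressed as data, with Java as the
    // default and documented exceptions for specific, agreed-upon conditions.
    public class LanguagePolicy {

        private static final String DEFAULT_LANGUAGE = "Java";

        // Condition -> approved alternate language (the conditions here are examples).
        private static final Map<String, String> APPROVED_EXCEPTIONS = Map.of(
                "heavy-parallelism", "Scala",
                "mobile-client",     "Kotlin",
                "tiny-serverless",   "Go"
        );

        // Returns the language a team should use for a given project characteristic.
        static String languageFor(String condition) {
            return Optional.ofNullable(APPROVED_EXCEPTIONS.get(condition))
                           .orElse(DEFAULT_LANGUAGE);
        }

        public static void main(String[] args) {
            System.out.println(languageFor("mobile-client"));    // Kotlin
            System.out.println(languageFor("crud-web-service")); // Java, the default
        }
    }

A build or CI step could compare a repository’s declared language against languageFor(...) and flag projects that stray from the policy without an approved exception.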

It’s understood that Java might not always be the right language for every situation, especially in new arenas where containers and microservices dominate. “Java must prove itself when it comes to building smaller microservices on a small stack. It’s got to prove itself in serverless where the environment is even more dense and constrained. And it’s got to compare with languages like Go, which is super lightweight.”

Containing the polyglot chaos

Ask a group of programmers to choose the best coding language and you’ll find it’s a great way to start a fight. To ensure peace and harmony on the development team, a software development governance policy needs to be well defined so programmers  know which languages are permissible, and under which conditions alternates can be used. Setting clear guidelines avoids the chaos that can ensue when every program is written in a different language. Furthermore, clear expectations in a software development governance model ensure egos don’t get bruised when a developer’s choice of programming options is limited to those the organization has standardized upon.

In this new age of polyglot programming, DevOps adoption and serverless systems, the potential for chaos is a real threat to the wellness of the IT department. Software developers with a host of different languages from which to choose are understandably excited about developing microservices and deploying them into language-agnostic, container-based architectures. But for the long-term health of the IT department, software development governance models should never be agnostic about the languages they permit.


January 30, 2018  11:21 PM

Developers, learn from the iPhone battery glitch

MarkSpritzler Profile: MarkSpritzler
Uncategorized

Apple has been accused of forcing all of its older iPhone and iPad users to buy new devices by throttling their batteries as they get older. Countries that perceive this as unsavory forced obsolescence have demanded Apple executives explain themselves. Class action lawsuits were launched as Apple found its business practices coming under an unprecedented amount of scrutiny. Apple has always been a lightning rod for both passion and angst, and there have no doubt been times in the past where Apple’s business practices were worthy of the public lashings it may have taken. But when it comes to the controversy over the legacy iPhone battery glitch, the backlash and public derision the company has suffered is not warranted. It is, however, an important situation for developers to keep in mind.

The new iPhone battery update for legacy phones is actually a great feature. It’s a feature that we have not only in phones, but also on our desktop and laptop computers. On desktops it’s simply called Power Save. It’s that feature that enables your computer to reduce the power consumption to increase the amount of time your battery lasts on a single charge. Windows has it, Linux has it and Mac OS has it. Even Android phones have it too.

Will it slow down your computer? Yes it will. Does it give you less functionality than when you are not in power mode? Yes it does. But it is a powerful feature to help in situations where the battery is about to die.

So how is this new iPhone battery “glitch” any different from the corresponding one that Linux and Windows desktop users already enjoy?

I know what you are thinking: Power Save mode on the desktop is optional, it can be turned on and off, and it is meant to make a single charge last longer rather than to extend the overall life of the battery. And it is only on rare occasions that you would ever use it. However, what if you wanted to make your battery last longer over many charges? Wouldn’t that be a great feature to have too? Apple’s approach is just an extension of this design model.

Let us say you have a battery that lasts two years and allows 500 charges. Batteries have different ranges, but let’s assume they are all the same for this discussion.

If you have an Android phone and never use Power Save mode, at the end of two years, your phone battery dies. You now have to go get a new phone.

If you have an iPhone and never use Power Save mode and Apple did not code special protection for the battery, at the end of two years, you too will have a dead phone and battery which you now have to replace.

Now, let’s take the special Power Save mode Apple has developed. As the battery gets closer to its two-year end of life, Apple puts it into this special Power Save mode so that the battery, and the phone, now last longer than two years. You are getting a longer-lasting phone and reducing any planned obsolescence. How is that a bad thing? It’s really not an iPhone battery glitch.
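
Apple has not published its power-management algorithm, so the snippet below is nothing more than a guess at the general shape of the idea: a hypothetical Java sketch in which peak performance is tapered as charge cycles approach the battery’s rated life.

    // Purely illustrative: Apple's real power-management logic is not public.
    // The idea is simply that peak performance is capped as the battery ages.
    public class BatteryThrottleSketch {

        private static final int RATED_CHARGE_CYCLES = 500; // assumed two-year rating

        // Returns a performance cap between 0.6 and 1.0 based on battery wear.
        static double performanceCap(int chargeCycles) {
            double wear = Math.min(1.0, (double) chargeCycles / RATED_CHARGE_CYCLES);
            if (wear < 0.8) {
                return 1.0;                // young battery: full speed
            }
            // Past 80% of rated cycles, taper peak performance down toward 60%.
            return 1.0 - 0.4 * ((wear - 0.8) / 0.2);
        }

        public static void main(String[] args) {
            System.out.println(performanceCap(100)); // 1.0
            System.out.println(performanceCap(450)); // about 0.8
            System.out.println(performanceCap(500)); // about 0.6
        }
    }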

It is a bad thing in one way and only one way: performance. That is the trade-off Apple chose to make here. We give up some speed nearer to the two-year mark and the phone lasts longer. Is it a bad thing that the speed is slower? Is it a great thing that the phone actually lasts longer?

It all depends on your perspective. Extend the battery life at the expense of performance near the end of life of the battery, or keep the battery life as is and cause earlier upgrades — which would you choose? I think that determines what your perspective will be towards Apple.

As software developers, we make these trade-off choices every day. For instance, I am always asking questions like: do I write one very complex SQL string with many joins that grabs the data extremely fast, or do I use a couple of simple queries to gather some Java entity objects that are slower to fetch but much easier to loop through, understand, maintain and write? This happens a lot when writing reports in Jasper. I could write a 300-500 line SQL query inside the Jasper report file, or write 20 lines of code to gather a list of entities I can pass to the report.
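
To make that trade-off concrete, here is a hedged sketch of the two approaches against the JasperReports API. The report name, the Order entity and the JDBC URL are stand-ins for illustration; the point is only that the report can be filled either from a big SQL query embedded in the report or from a list of entities gathered in Java.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    import net.sf.jasperreports.engine.JasperFillManager;
    import net.sf.jasperreports.engine.JasperPrint;
    import net.sf.jasperreports.engine.data.JRBeanCollectionDataSource;

    // Illustrative only: report name, query and entity are invented to show the two styles.
    public class ReportFillExample {

        // Option 1: the big SQL query lives inside the compiled report (.jasper),
        // and Jasper runs it directly against the JDBC connection.
        static JasperPrint fillFromSql(String jdbcUrl) throws Exception {
            Map<String, Object> params = new HashMap<>();
            try (Connection conn = DriverManager.getConnection(jdbcUrl)) {
                return JasperFillManager.fillReport("sales-report.jasper", params, conn);
            }
        }

        // A simple bean the report's fields can map onto.
        public static class Order {
            private final String customer;
            private final double total;
            public Order(String customer, double total) { this.customer = customer; this.total = total; }
            public String getCustomer() { return customer; }
            public double getTotal() { return total; }
        }

        // Option 2: gather easy-to-read entities in Java first, then hand them to
        // the report as a bean collection data source.
        static JasperPrint fillFromEntities(List<Order> orders) throws Exception {
            Map<String, Object> params = new HashMap<>();
            JRBeanCollectionDataSource data = new JRBeanCollectionDataSource(orders);
            return JasperFillManager.fillReport("sales-report.jasper", params, data);
        }
    }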

From my perspective, I would have chosen to make the batteries last longer and therefore allow users to use the devices for longer. Performance is also very subjective; sometimes what appears to be slow might appear very fast to someone else.

But you can’t say that extending the battery life is a form of planned obsolescence or an iPhone battery glitch. And in recent news, Apple says that in an upcoming release of iOS, the throttling will become an option users can turn on and off, making it much like Power Save mode on computers.


January 23, 2018  9:00 PM

Choosing the right container orchestration tool for your project

Daisy.McCarty Profile: Daisy.McCarty

The use of software containers like Docker has been one of the biggest industry trends of the past decade. Hitherto, the focus has been on educating the development community about the capabilities of container technology and how enterprises might use container orchestration tools effectively. But it would appear that a tipping point in container awareness has been reached: familiarity has increased to the point where organizations have moved beyond simply entertaining the thought of bringing containers into the data center and on to actual adoption. Before companies actually deploy software into containers, though, the question remains which software container, and which complementary set of container orchestration tools, an organization should adopt. Options such as Swarm, Cloud Foundry and Mesos make strong arguments for themselves, so the container technology landscape is a difficult one to navigate.

Container orchestration with Kubernetes

Of course, Craig McLuckie, CEO of Heptio, is confident in the horse his organization is betting on to create cloud-native applications. “Kubernetes over the last three and a half years has emerged as the standard.” Mark Cavage, VP of Product Development at Oracle, also gives Kubernetes the thumbs up. “It’s the right open source option that abstracts away clouds and infrastructure, giving you a distributed substrate to help you build resilient microservices.”

Kubernetes has been compared favorably with other container orchestration solutions before, with one of its big selling points being the vendor-agnostic nature of its open source availability. McLuckie points to the lack of vendor lock-in as an appealing benefit. Other experts have praised Kubernetes for working well with Docker while requiring only a one-tier architecture. Because of its flexibility, developers who use Kubernetes may be less likely to find themselves writing applications to match the expectations of a vendor-driven container orchestration platform rather than in the way that makes the most sense for what their organization hopes to accomplish.

Simplify operations

McLuckie speaks highly of Kubernetes’ ability to simplify ops with containers that are hermetically sealed, highly predictable and portable units of deployment. But deployment is just where the real work begins. Keeping the containers running and automating a container-based architecture without a hiccup is where Kubernetes shines. “By using a dynamic orchestrator, you can rely on an intelligent system to deal with all of those operational mechanics like scaling, monitoring, and reasoning about your application.”
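
At the heart of that “intelligent system” is a reconciliation loop: observe the current state, compare it to the desired state, and act to close the gap. The sketch below is not Kubernetes code; the Cluster interface is hypothetical, but it illustrates the pattern an orchestrator applies when it scales a service.

    // Hypothetical sketch of the reconciliation pattern at the core of container
    // orchestrators: continually drive observed state toward desired state.
    public class ReconcileLoopSketch {

        interface Cluster {
            int runningReplicas(String service);   // observe
            void startReplica(String service);     // act
            void stopReplica(String service);      // act
        }

        // One pass of the loop for a single service.
        static void reconcile(Cluster cluster, String service, int desiredReplicas) {
            int running = cluster.runningReplicas(service);
            if (running < desiredReplicas) {
                for (int i = running; i < desiredReplicas; i++) {
                    cluster.startReplica(service); // scale up to match the declared state
                }
            } else if (running > desiredReplicas) {
                for (int i = desiredReplicas; i < running; i++) {
                    cluster.stopReplica(service);  // scale down after a config change or failure recovery
                }
            }
            // If running == desiredReplicas there is nothing to do; the loop simply runs again shortly.
        }
    }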

Increased uptime and resiliency are obvious benefits of having a well-orchestrated container system. And since the self-contained units run the same way on a laptop as they do in the cloud, they can make development and testing easier for transitioning DevOps teams. End users also enjoy the perks of Kubernetes without realizing it, since the orchestration tooling can perform automatic rollout and rollback of services live without impacting traffic, even in the event of a failure of the new version.

Container orchestration and middleware

One interesting point McLuckie makes pertains to how middleware is being “teased apart” as a result of containerization. “Systems like Kubernetes are emerging as a standard way to run a lot of the underlying distributed system components, effectively stitching together a large number of machines that can operate as a single, logical, ideal computing fabric.” As a result, “A lot of the other functionality is getting pushed up into application level libraries that provide much of the immediate integration you need to run those applications efficiently.”

What might the outcome of this fragmentation of middleware be? According to McLuckie, it could open Java apps to a polyglot world by making it simpler to peacefully coexist with other languages. In addition, it would make dependencies easier to manage, even supporting the ability to run something like Cassandra DB on the same core infrastructure and underlying orchestrator as the Java app that relies on that database.

Cutting down on system complexity could be what makes Kubernetes and containerization itself attractive to larger organizations in the long run. “This could address enterprise concerns for governance, risk management, and compliance with a single, consistent, underlying framework.” And enterprises that prefer on-premise or hybrid clouds could use this approach just as readily as those that are fully cloud-reliant.

With container orchestration tools becoming more sophisticated with time, don’t expect the hype around container technology to die down in 2018. It will only intensify, with the focus moving away from education and awareness and instead towards adoption and implementation.


January 12, 2018  5:36 PM

Four wise pieces of advice for women in technology

Daisy.McCarty Profile: Daisy.McCarty

One of my favorite things about interviewing women in technology has been hearing all their helpful tips and insights. Many of these women spent decades in the tech world, moved up the career ladder, innovated in their areas of expertise, started new businesses, and created more opportunities for the next generation of women. Here are some of the takeaways that resonated.

Tip #1: You can end up in tech from just about anywhere

Tanis Cornell, principal of TJC Consulting, offered this insight to young women and teens, “You don’t have to be an engineer to have a job in technology. What I discovered in my own career was an affinity for absorbing a technical concept, grasping the advantages and disadvantages, and becoming fairly technically adept quickly.” Her educational background in communications ended up translating well into the tech sector, where the ability to speak about complex ideas in terms business decision-makers can understand is a highly valued skill.

Jen Voecks, a former event planner and current CEO of the bride-to-vendor matchmaking service Praulia, found that jumping into technology gave her an advantage as a brand new startup owner. “I learned the ropes one step at a time, front end and back end,” she said. “For a while I was stuck in a learning curve, wearing all the hats and trying to build a product while running a business. Once I got into the ‘developer brain’ mode, I realized that it’s like a puzzle with a lot of pieces and things got a lot easier. But I also learned to appreciate what developers do and how to communicate with devs and engineers.” Now she has enough experience to make smart decisions about hiring specialists to handle the various aspects of her technology needs and can focus her own efforts on strategic growth. Her advice: “Just stay at it. Most women I’ve talked with are experiencing hurdles. We should stick together and push through. We are powerful.”

Tip #2: Communicate calmly and clearly

CeCe Morken, EVP and general manager of the ProConnect Group at Intuit, spoke highly of Raji Arasu, who is the senior vice president, platform and core services CTO. “She is an excellent leader, and she leads a lot of men,” she said. “She’s always very calm and never changes her demeanor. Raji takes in information and handles it elegantly even in high stakes situations.” Morken also pointed out that Arasu speaks in a language everyone can understand, translating concepts from software architects to business leaders in a way that makes sense.

Charlene Schwindt, a software business unit manager at Hilti, agreed that being able to customize communication for a given audience is critical for success. “You really need to be able to transition your communication style and tailor it to the level of understanding of your audience,” she explained. “When I’m talking with developers, I can be highly technical. With customer support, I talk about things at a higher, summary level. Business leaders want a conversation that’s results oriented, but I might drill down to more detail if questions are asked.”

Of course, being confident as a communicator doesn’t always mean you have to be right about everything. In fact, it’s completely OK to pivot as you grow. Mary McNeely, Oracle database expert and owner of McNeely Technology Solutions, shared this sage advice. “Don’t be afraid to change your mind. You have to decide something in order to move forward. But once you have more information and time to think, anything could change. You might reach a different conclusion. It’s OK to reconsider your decisions, perspectives, and opinions.” And when you do change your mind, remember to let people know!

Tip #3: Believe in yourself. Seriously.

What would it be like to grow up in a culture that simply accepts that women are awesome? Candy Bernhardt, head of design and UX for Capital One’s auto lending division, recounted her experience of being raised by her grandmother who is from the Philippines. “It’s a very matriarchal society, so I didn’t know any better. I just thought women ruled the world and our opinions mattered. I challenged authority because I thought it was my right.” That pluck and boldness served her well when she landed in a career she thoroughly enjoys.

For those who grew up a little less sure of themselves, it’s not too late to gain the confidence to grasp the brass ring. Meltem Ballan, a data scientist with a PhD in complex systems and brain sciences, encouraged this can-do attitude as well. “Get out in front and show that a female can do it,” she said. “Ask for mentorship and stand for your own rights. Women must support women.” For her part, she found that giving a presentation on short notice, even though she didn’t feel completely ready, was a turning point in her speaking skills. “It’s important to go out and have that moment where we leave our comfort zone. Then our comfort zone gets larger.”

Tip #4: Never stop improving yourself and helping others grow

Julie Hamrick, COO and founder of Ignite Sales, spoke about the value of continuous improvement for success as an entrepreneur. “The harder I work, the luckier I get,” she offered. “At first I had to work hard to make my product good. Now, I am still thinking every day about how to deliver even better results for my customers.”

What about lending a helping hand to other women? How can managers and leaders do better in this area? Morken offered this advice for those who want to be effective mentors: “Focus on developing the person first and then the goals.” It’s important to understand what energizes someone because that is what fuels growth.

My own final tip is this: Grow your network starting today. There’s a wealth of information and insight available from the women in tech all around you, and they are happy to share. Start leveraging these resources to take you farther than you ever thought possible!


January 6, 2018  12:29 AM

Requirements management and software integration: Why one can’t live without the other

RebeccaDobbin Profile: RebeccaDobbin
Uncategorized

With the complexity of products being built today, no one person can understand all the pieces. Yet everyone involved is still responsible for ensuring their work is compatible with their co-workers’ work and aligns with the company’s strategy. The only way to keep everyone moving toward the same goal is to have a shared understanding of what’s being built and why. This is where requirements management and software integration come in.

Requirements define what a product should be. Equally important, though, they’re the single shared communication bridge between all disciplines. Everything involved in the pursuit of product development can be tied back to the definition of what you’re building — otherwise there’s misalignment somewhere. The goals of the market-facing teams translate into product requirements, making abstract desires achievable. The effort of design, development, and testing teams are driven by requirements. Product documentation, software integration, marketing, and spec all tie back to requirements at the end of a project, all derived from the same set of goals.

However, the relationship between requirements and everyone else’s work is not always easily visible. There might be degrees of separation, such as many levels of decomposition from market goals down to tasks. Information may be kept in multiple task-specific tools. While a shared understanding — in the form of requirements — is essential for alignment, it’s not sufficient in many cases. Companies with complex, multi-level, cross-tool product data run the risk of creating silos and disconnects from the original product goal.

This is why software integration and requirements management work hand in hand to realize the full potential of collaboration. When managed effectively, requirements are dynamic. Thoughtful integrations facilitate timely communication between those defining the product goals and those building the product, so that changes on either side are visible and part of continuous decision making.

Aligning teams with value stream networks

While it’s tempting to think of product delivery as a linear process of sequential steps, in reality it’s much more like a chaotic, organic network. Activities occur in tandem, spanning many parts of the organization. Requirements can change after implementation has begun, and modifications made to one part of the process can have far-reaching impacts to several different teams. It should be no surprise, then, that the root cause of failure in product development can be traced to an under-connected value stream network.

A value stream is a notion borrowed from lean manufacturing; it’s the sequence of activities an organization undertakes to deliver a customer request, focusing on both the production flow from raw material to end-product, and on the design flow from concept to realization. Counter to our common understanding, product management should be viewed as a value stream network, rather than simply as a value stream, as the process truly consists of many activities that occur in conjunction, overlap, and influence one another. In order to be truly successful, your value stream network must have a high level of connectivity between each node (or team) within your organization.

Given that requirements are central to driving actions and decisions throughout the product lifecycle, key challenges arise when they’re not integrated and easily accessible to all teams throughout the organization. Despite relying on separate, purpose-driven tools, each team must be able to access the same information in order to work cohesively. And that information isn’t static. Because of the dynamic nature of requirements, it’s essential that each team be able to review and absorb any modifications made to the requirement in real time. That’s where integration comes in.

While each organization is unique, our experience has allowed us to uncover several common software integration patterns. In many cases, these patterns can build on one another to create a cohesive network of connection throughout an organization.

When getting started, we recommend that organizations first identify the scenarios that are causing the most immediate pain, and enable the associated integration patterns to ameliorate that pain. Over time, organizations can adopt and deploy more and more of these integration patterns, to work toward a wholly unified and cohesive value stream network.

Requirements management and Agile planning alignment

Often, business analysts and product managers create and manage requirements in a requirements management tool, such as Jama Software or IBM DOORS. If their organization uses agile methods, the development team may expect to see those requirements within an agile planning tool, such as JIRA or CA Agile Planning, where they can manage them and further break them down into user stories for execution.

In this pattern, requirements flow from the requirements tool to the agile planning tool as epics or features. They can then be broken down into user stories or sub-tasks within the agile planning tool. If desired, teams can maintain the relationship between the epic and its subtasks when flowing that data back to the requirements tool.
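
Here is a minimal sketch of what that flow can look like, assuming a Jira-style REST endpoint and using only the JDK’s built-in HTTP client. The host, project key, credentials and payload fields are placeholders; commercial integration tools handle authentication, custom fields and two-way synchronization far more robustly than this one-shot example.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    // Illustrative sketch: push a requirement from a requirements tool into an
    // agile planning tool as an epic via its REST API. Names and fields are placeholders.
    public class RequirementToEpicSync {

        public static void main(String[] args) throws Exception {
            String requirementTitle = "Customer can export account history";

            String auth = Base64.getEncoder()
                    .encodeToString("integration-user:api-token".getBytes()); // placeholder credentials

            // Minimal Jira-style issue payload; Epic-specific custom fields vary by configuration.
            String payload = """
                    {"fields": {
                        "project":   {"key": "PROD"},
                        "summary":   "%s",
                        "issuetype": {"name": "Epic"}
                    }}""".formatted(requirementTitle);

            HttpRequest request = HttpRequest.newBuilder(
                            URI.create("https://jira.example.com/rest/api/2/issue"))
                    .header("Authorization", "Basic " + auth)
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(payload))
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());

            // A successful response carries the new epic's key, which the requirements
            // tool can store to keep the two artifacts linked for traceability.
            System.out.println(response.statusCode() + " " + response.body());
        }
    }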


Figure 1: Aligning requirements with agile planning.

Software integration and test planning alignment

In order to create their test plans, QA teams must have access to the requirements for each feature. The problem is that requirements and user stories are usually created by business analysts and product managers in a tool such as Jama or IBM DOORS, while QA teams create their test plans, test cases, and test scripts in another set of tools, such as HPE QC or Tricentis Tosca. By flowing requirements from the product team’s requirements tool to the QA team’s testing tool as test cases, organizations are able to keep these two teams aligned, while allowing each to use their own tool of choice. This eliminates the risk of miscommunication or bottlenecks as teams are able to seamlessly flow information on acceptance criteria, test status, and more between each tool.


Figure 2: Test planning alignment.

Tracing requirements

Requirements, epics, and user stories span multiple disciplines and tools in the product development and delivery process. No matter where they’re created and managed, there are many distinct stakeholders who may need access to them. For this reason, “requirements traceability” can be viewed as a composite pattern, composed of several smaller integration patterns.

For example, a project manager may manage project deliverables within a PPM tool, which then flow to a business analyst’s requirements management tool. Once the business analyst has determined the scope and details of the deliverable, that information flows to the developer’s tool as an epic, which is further broken down into user stories and tasks. That epic then flows to the QA team’s testing tool to ensure that the testers understand the requirements that they’re testing. In this way, four separate teams are able to share information and collaborate, all within their own purpose-built tools and practices. By making use of each team’s existing processes and practices, they’re able to eliminate wasted ramp-up time needed to gain access and proficiency in each tool, while allowing each team to use the systems that best facilitate their unique goals.


Figure 3: The value stream network in action.

While there’s always been a need for integration, managing complex, dynamic requirements makes it all the more necessary. In order to demonstrate product compliance and auditability, practitioners must relate their features, test cases, and other work items back to the original compliance requirements.

Without integration, practitioners must spend valuable time on status calls, communication via e-mail, and on double-entry between systems, leaving them no time to do the work they were actually hired to do: the design, development, and quality assurance tasks needed to create and ship their product.

Harnessing requirements management and software integration

Integration serves as the glue between these separate disciplines, people, and tools. And when the integrations are domain-aware, they can do seemingly magical things like translate between tools that have conceptual and technical mismatches. Smart integrations are able to translate different work item types and field values between tools in order to facilitate communication between teams, regardless of differences in tool architecture.
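
As a simple illustration, a domain-aware translation can start as little more than a mapping table between each tool’s vocabulary. The values below are invented; real integration middleware manages these mappings declaratively and in both directions.

    import java.util.Map;

    // Hypothetical illustration of domain-aware field translation between tools:
    // each side keeps its own vocabulary and the integration maps between them.
    public class FieldValueTranslator {

        // Requirements-tool status -> agile-planning-tool status (example values only).
        private static final Map<String, String> STATUS_MAP = Map.of(
                "Draft",     "To Do",
                "Approved",  "Ready for Development",
                "Completed", "Done"
        );

        // Requirements-tool item type -> agile-planning-tool work item type.
        private static final Map<String, String> TYPE_MAP = Map.of(
                "Requirement",     "Epic",
                "Sub-requirement", "Story"
        );

        static String translateStatus(String sourceStatus) {
            return STATUS_MAP.getOrDefault(sourceStatus, "To Do"); // safe default for unknown values
        }

        static String translateType(String sourceType) {
            return TYPE_MAP.getOrDefault(sourceType, "Task");
        }

        public static void main(String[] args) {
            System.out.println(translateType("Requirement") + " / " + translateStatus("Approved"));
        }
    }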

Recognizing the inherent network that connects teams in an organization is the first step to enhancing product development. Once you’re able to identify the key areas where teams must share information, you’ll be able to navigate areas in which communication breakdowns are occurring. By implementing key software integration patterns, you’ll increase connectivity between each node of your organizational network. And as connectivity increases and, with it, each team’s ability to access key requirements, you’ll see enhanced success within your organization.



Co-author Rebecca Dobbin is a Product Analyst with Tasktop Technologies (@rebecca__dobbin)
Co-author Robin Calhoun is a Product Manager for Jama Software


