Coffee Talk: Java, News, Stories and Opinions

April 10, 2018  8:41 PM

Using Agile for hardware development to deliver products faster

Daisy.McCarty Profile: Daisy.McCarty

When metal and plastic are manipulated instead of ones and zeros, is it possible to pursue an Agile development process? Or is the idea of Agile for hardware development a misnomer?

The fact is, more and more organizations have given waterfall the cold shoulder and turned to Scrum, lean and kanban-based models as they make Agile for hardware development a reality. A combination of rapid prototyping, modular design and simulation testing makes Agile for hardware development a real possibility for modern technology manufacturing.

From Agile software to Agile hardware

With Agile software development, creative processes are the cornerstone of the framework and constant change is the order of the day. It’s expected that alterations will happen continuously. Relatively speaking, there’s a low cost associated with constant change in software since it requires only time, knowledge, and the ability to type some code. However, the physical world is not as amenable to alteration as the digital world. Hardware changes are associated with high costs and significant resistance.

Yet according to Curt Raschke, a Product Development Project Manager and guest lecturer at UT Dallas, Agile for hardware development isn’t really a fresh idea. In fact, Agile had its roots in the manufacturing world to begin with. “These ideas came out of lean and incremental product development,” he said. In that sense, it is no surprise the concept has come full circle and Agile has appeared in hardware development. It simply requires a fresh look at existing best practices.

For Shreyas Bhat, CEO of FusePLM, a full-cloud product lifecycle management system that uses a cards-based approach to the development process, the question isn’t if Agile for hardware development can be done, but instead, how? “Hardware tends to be self-contained and has too many dependencies,” he said. “How do you split it into deliverable sections? You need to be able to partition the project and come up with milestones for the customer.”

Prototyping and Agile hardware development

Rapid prototyping is one part of the answer to the question of how. “These days, you can build a prototype cheaply and early on to give a customer a feel for the eventual functionality,” Bhat said. “With 3D printing, the cost of prototyping has gone down significantly.” Contract manufacturers have turned prototyping into a commodity in an industry where cost is always a driving factor. Being able to turn to a prototype provider for quick, inexpensive modeling saves both time and money for the actual manufacturer later.

As Bhat explained, the requirements for hardware have to be stable since changing tooling increases costs. Prototyping helps pin down the proper tooling early on. “You have a better idea of what is needed for production. That way, you’re getting the right tooling for your manufacturing line up front rather than having to retool down the road.”

But that doesn’t mean prototypes can solve all problems. Raschke explained one of the prime limitations. “It works for product development from a form factor point of view,” he said. “You can use it to get feedback ahead of time on look and feel. But you can’t use 3D printing alone for stuff with moving parts that require assembly.”

Practical thinking in Agile hardware design

One answer to the issue of Agile for hardware development lies in the nature of the design itself. The more modular an item is, and the fewer dependencies are involved within any given component, the easier it will be to make changes. Also, choosing the fewest material types possible to get the job done is a key aspect of moving hardware in a more flexible direction. Again, these are well-known practices in the manufacturing sector and lend themselves well to implementation within an Agile framework.

When simulation or other types of iterative hardware evaluation are part of the process, it is a smart bet to design with testing in mind. Test driven development (TDD) can readily be incorporated into hardware as long as the limitations and capabilities of the simulating environment are known. Cross-functional teams are the best fit for developing hardware in this way, with an eye toward validating design functionality through iterative testing.
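Test-driven hardware development of this kind can be sketched in a few lines of code. The following is a minimal, hypothetical illustration (the component, its behavior and its rated limits are all invented): the tests are written first, then run against a software simulation that stands in for the eventual hardware.

```java
// Hypothetical example: tests written before the hardware exists,
// run against a software simulation of a motor controller.
public class MotorControllerSimTest {

    // Minimal stand-in for a simulated hardware component.
    static class SimulatedMotorController {
        private int rpm = 0;

        void setTargetRpm(int target) {
            // The simulator clamps to an invented rated maximum of 5000 RPM.
            rpm = Math.min(target, 5000);
        }

        int currentRpm() {
            return rpm;
        }
    }

    public static void main(String[] args) {
        SimulatedMotorController controller = new SimulatedMotorController();

        // Test 1: the controller reaches a requested speed within range.
        controller.setTargetRpm(3000);
        assert controller.currentRpm() == 3000 : "should reach requested RPM";

        // Test 2: the controller never exceeds the rated maximum.
        controller.setTargetRpm(9000);
        assert controller.currentRpm() <= 5000 : "must clamp to rated max";

        System.out.println("All simulation tests passed");
    }
}
```

When the real electrical and mechanical components arrive, the same tests can be pointed at the physical device, provided the simulator's limitations were understood up front.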

In fact, Raschke revealed this is one of the prime challenges facing hardware where agility is concerned, particularly in terms of pretesting and system integration. “Design, development, and delivery are not always done by the same team,” he said. “Even if pieces can be developed by the same team, they are different subject matter experts. In a hardware product, there are electrical, mechanical, and software engineers along with firmware specialists, and so on.” These diverse experts would all have to be brought together in order to create a faster, more iterative approach to hardware development. That would be quite a feat for any Scrum Master to pull off.

The future of Agile for hardware development

Startups and fast-growing companies with bright ideas are primed to take on the Agile for hardware development challenge. While it may be risky, there is the chance of a high payoff. In Bhat’s view, “Where there is innovation, there is uncertainty. That’s in the sweet spot for Agile. You can do a lot of ‘what if’ scenarios to try out ideas and quickly release them to the customer.”

However, Agile does have one obvious limitation that Raschke addressed. “Hardware products are developed incrementally, but they can’t be released that way,” he explained. That’s certainly true. Given the high percentage of customers who ignore factory recalls and upgrades on everyday consumer goods, it’s hard to imagine them happy about the constant need to update their hardware, particularly if it involves a trip to a local provider or installation of a kit received in the mail. So, while Agile for hardware development and design may play an increasingly important role in the industry, delivery will remain a fixed point with little room for trial and error.

March 9, 2018  1:38 AM

Acts of discrimination let gender inequality in technology go unresolved

Daisy.McCarty Profile: Daisy.McCarty

Coming up in tech over the past few decades wasn’t easy. A successful entrepreneur told me a story of how she landed her first tech job as a sales rep for a telecom agency. This was in the early days of deregulation, long before gender inequality in technology was an issue organizations were willing to address. She had a background in sales but knew nothing about the industry. She did well in the interview, but the hiring manager contacted her with bad news. The word from the VP of sales was, “We just hired our first woman and we’re not going to hire another until we see if she works out.”

This vivacious woman wasn’t about to take that as a final answer. She told the hiring manager in no uncertain terms: “I want you to get me an interview with the sales VP.” During that meeting, she talked her way into a job. The VP was reluctant, and warned, “I’m going to give you a chance, but I’m going to be watching you.”

“I hope you do!” she answered.

The VP got to watch as she went on to fight gender inequality in technology and become a top performer. She learned as she went and hopefully made it easier for the next few women who tried to join the sales force.

A woman who is now an executive at a software firm revealed how dreadful it was to work at a different company many years ago as the sole female coder in her department.

“I was the only woman there, and I guess this was before people knew how to deal with us,” she said. The harassment was overtly sexual and very distressing. “As just one example, when they would have a company dinner and start telling off color jokes, my name was always the one inserted as the butt of the joke.”

She took these issues to her boss who simply advised, “Don’t tell your fiancé about it.” After all, she wouldn’t want to upset her husband-to-be with such trivialities when it was all just in good fun. When she couldn’t stand the harassment any longer, she left without even having another job lined up. Sadly, the recent Uber scandal reveals sexism is alive and well in some organizations in the tech industry. While it may be in vogue to be politically correct on the surface, a culture of male entitlement only serves to advance gender inequality in technology, while wreaking havoc in the lives and careers of women in tech.

Perhaps the second most annoying expression of sexism is the dismissal of the opinions of women—even those who have credentials and experience that put them in a position to be exceptionally knowledgeable. A highly educated woman in the analytics field discovered that being heard takes real effort.

“I’m the only female in this role within my organization,” she said. “I have to push a lot to be taken seriously. I communicate in writing with clients. Having a PhD can help with respect and it’s definitely easier to navigate the situation because of having that education level. It shouldn’t be that way, but when gender inequality in technology prevails, it is. I’ve still found I have to get pushy verbally and give evidence that I am right. It’s like trying to swim in mud.”

Women in tech are there to win

An entrepreneur who started and still runs a thriving business found that she faced hurdles in getting venture capital.

“In one meeting I remember the men looking at us with veiled amusement,” she said. “They didn’t come out and say it but the sense we got was that they felt, ‘You have a great idea, but you are women! Come back when you have a male CEO.’ Based on the questions they were asking, the VCs couldn’t wrap their minds around the fact that we had long term goals. They obviously thought we wouldn’t be around long.”

It’s not just men who can make life hard for women in tech. Female leaders can also contribute to gender inequality in technology related fields without even realizing it. One woman told me about getting passed over for one promotion after another because her female bosses simply assumed she wanted to stay home with her children. A failure to take female ambition seriously means that many organizations are missing out on the opportunity to invest in developing and mentoring high potential women.

The unpaid second shift

For some women, the environment doesn’t have to be sexist in any overt way to create an obstacle to advancement. It just has to ignore the responsibilities that fall on women’s shoulders on the home front.

According to an expert speaker on female empowerment in the workplace, “Women in technology are often faced with having to work in an environment where they don’t feel comfortable. Silicon Valley in particular has a startup culture. Working in a startup environment or a new department within a company requires intense, long work hours. A lot of younger women leave. They can’t adequately manage being a wife, mother, and so on.”

Women are still being forced to choose between a career and taking care of children and aging parents.

“Even with all the focus on placing women in senior positions, when you see women who are highly successful they are often unmarried or have a stay-at-home husband who relocated to support them in their career,” this expert said.

She pointed out that men are facing hard choices as well in their careers. But women are usually the ones who end up stepping in to take on the unpaid work while their plans for promotion are sacrificed.

Individual encounters can rankle

According to a mid-level manager who manages a sizeable team, it pays to be choosy about where to work. Women really do research a company’s culture before accepting a position. They need to know if they are getting into a bad situation.

“I’ve worked in the military, automotive, and construction arenas,” the manager said. “The tech company where I am now does not tolerate harassment. It has a good reputation, and I made sure of that before taking the job.”

Yet she certainly knows what it is like to run into men who are jarred by the presence of a woman in their midst. They aren’t going to welcome an outsider with open arms.

“Some people don’t think women should be in that role. You can work to make that relationship professional, but it is not going to be cordial.”

There are also women who say they’ve never noticed any harassment or gender-based discrimination in a conventional corporate tech environment. According to one woman who worked in a larger organization for many years, “Either I’m extraordinarily dense, or I wasn’t being discriminated against.” Yet as a consultant who now deals directly with clients, she can easily think of that one person who routinely makes her feel uncomfortable. Not surprisingly, he’s an older gentleman who holds views that can most kindly be called “highly traditional.”

“He has told me to my face that ‘Women shouldn’t be in the workforce’ and ‘The reason women get married is so men can protect them,'” she said.

Even the client’s condescending greeting of “It’s so good to see your pretty face!” rightfully irritates this successful and accomplished business owner. She certainly can’t imagine a male counterpart being treated in a similar way.

Coping with a male-dominated workplace

As I’ve talked with women over the past year about what it takes to make things work as part of a team or as a leader, I have noticed an interesting trend. Women in the Baby Boomer generation are much more likely to say they had to “act like one of the guys” or simply treat gender as irrelevant in order to survive in their careers. They have learned to lead and collaborate in a masculine way to fit in and be seen as an equal.

Younger women in the 30 to 45 age-range are more likely to say they consciously bring female aspects and qualities to the table to help them succeed. Things like emotional intelligence, good communication, multi-tasking, and the ability to collaborate are traits they use to their advantage.

Both strategies certainly have merit and can be used depending on the workplace culture, a woman’s career goals, and her natural aptitudes. It will take persistence, experimentation, and courage to find the right mix for each individual. Here is some additional advice from the women in my tech network for how to make it work at work.

  • On fitting in: “If you are a minority in a group it’s partly your responsibility to fit in. As an example, I’m not a big follower of sports, but I will skim the headlines before going into the CEO staff meeting. We have to see what we can contribute to the conversation.”
  • On work/life balance: “We find that work-life balance is a day-by-day thing. Some days, work gets most of you. Other days, you have to make the decision to put family first.”
  • On earning respect: “You can’t be a shy or retiring person and get your voice heard. You have to speak up and step up. You can’t be uncertain, you need to have knowledge and come across as knowledgeable.”
  • On getting promoted: “Be bold. Let people know what’s important to you and go after it.”

While the fight against gender inequality in technology remains ongoing, most women see a positive outcome as inevitable. It’s just a matter of persistence, patience, and boldness.

February 27, 2018  11:27 PM

Re-introducing Jakarta EE: Eclipse takes Java EE back to its roots

DarrylTaft Profile: DarrylTaft

The Eclipse Foundation has chosen Jakarta EE as the new name for the technology formerly known as Java EE.

Last September, the Eclipse Foundation announced that Oracle would hand over the Java Platform, Enterprise Edition (Java EE), to Eclipse to foster industrywide collaboration and advance the technology. Since then, Eclipse has established the top-level Eclipse Enterprise for Java (EE4J) project to oversee the future of enterprise Java at the Eclipse Foundation. EE4J is a solid and even semi-cool name for that project, but it was never meant to replace the Java EE brand.

Oracle’s Java trademark

However, the new name for the brand, or for the top-level project, for that matter, could not be Java EE, or even start with Java. Oracle’s trademark usage guidelines stipulate that only Oracle can use Java at the beginning of a name, so the phrase “for Java” was a requirement from the start, wrote Mike Milinkovich, executive director of the Eclipse Foundation, in a September 2017 blog post.

After a series of trademark reviews to secure a proper name that passed legal and trademark searches, Eclipse came up with a few possible names. Jakarta EE won 64.4 percent of the Java community’s vote, while Enterprise Profile captured 35.6 percent.

“I am personally very happy that the community selected Jakarta EE,” Milinkovich said, happy because Jakarta EE is a simple name with potential for future technology, and a nod to the past. “This is just one small step in the overall migration, but the community reaction has been universally positive.”

Alternatives, including Enterprise Profile, were deemed too boring, and the word “enterprise” is viewed by many to mean stodgy and unapproachable to younger developers.

“The Jakarta EE name is a reasonable compromise, considering the objectives of the community and the objections that Oracle has raised,” said Cameron Purdy, CEO of a stealth-mode startup and former vice president of development at Oracle.

Apache Jakarta project

Other Java thought leaders said the new name, which comes from an old Apache Software Foundation project, might actually be better for enterprise Java.

Jakarta EE is named after Apache Jakarta, which was founded in 1999 and retired in 2011. “Those were roughly the dates when Java EE mattered,” said Rod Johnson, CEO of Atomist and creator of the Spring Framework. “The Java EE brand is tired now, so while a new name will have less recognition, it might also escape some negative associations.”

“It’s also a nod to the ‘JEE’ shortened form, and it taps into Jakarta’s reputation in the Java ecosystem,” said Martijn Verburg, co-founder of the London Java Community and CEO of jClarity, a London-based provider of software that helps developers solve Java performance problems with machine learning.

The renaming of Java EE seems to fit the ongoing saga of Java, said Sacha Labourey, CEO of CloudBees and former CTO of JBoss. That product’s original name was “EJBoss: Enterprise Java Beans Open Source Server,” but after a cease-and-desist letter from Sun Microsystems’ lawyers over the use of ‘EJB,’ JBoss founder and CEO Marc Fleury pragmatically dropped the E.

Reminiscing, Labourey said the Java community has had to “learn how to dance with its trademark and community process [Java Community Process (JCP)], so what else could we expect?” He noted that the JavaPolis conference had to be renamed to Devoxx because of Java trademark issues. “We would have been so disappointed not to have one more chapter to add to the Java trademark book,” he said.

February 21, 2018  6:36 PM

Clear software development governance needed in this polyglot world

cameronmcnz Cameron McKenzie Profile: cameronmcnz

New architectures composed of language-agnostic software containers have made polyglot programming a reality. But out of this newfound freedom, chaos can ensue if clear software development governance policies do not exist that describe when, where and why certain programming languages can be used, and when others cannot.

When advocates talk about the things organizations should do to prepare for the move toward modern platforms like Amazon and OpenShift, the increased use of serverless systems and all of the headaches that go along with DevOps adoption, one topic that far too often gets overlooked is software development governance. Before discussing technology adoption or how to continuously deploy, organizations should address how they are going to manage these new software development stacks and deployment targets.

“If you look at all of the Cloud platforms, you know, open shared Cloud Foundry, Amazon, Google, Azure, OpenShift, they’re polyglot,” said Red Hat Product Manager Rich Sharples. “They support multiple languages.” And that’s great for organizations that want to put together a tapestry of best-of-breed applications where each one is built with the language that best suits the application’s purpose. But best-of-breed environments can be incredibly difficult to manage over the long term, especially when an application written in a language that has fallen out of favor needs to be fixed, enhanced or just generally maintained.

Flexible software development governance

How can software development governance address the problems a polyglot programming shop might encounter? One way is to simply make the organization monoglot and settle on a highly proven and capable programming language like Java. “Java is clearly the best language for running large, complicated, transactional applications in the traditional, long-lived server model,” said Sharples. “It is the active standard for enterprises building those kind of applications.”

But an inflexible and rigid governance model is exactly what caused the colonies to revolt and create a union of states. While settling on Java as a fundamental standard is a good start, a software development governance model should also allow alternate languages to be used when certain exceptional conditions are met. For example, an organization might specify Scala as the language of choice when it is understood that an application has an exceptional requirement for parallelism or list-processing speed. Similarly, Kotlin might be agreed upon for the development of mobile applications that require support for the latest Java version.

It’s understood that Java might not always be the right language for every situation, especially in new arenas where containers and microservices dominate. “Java must prove itself when it comes to building smaller microservices on a small stack. It’s got to prove itself in serverless, where the environment is even more dense and constrained. And it’s got to compare with languages like Go, which is super lightweight,” Sharples said.

Containing the polyglot chaos

Ask a group of programmers to choose the best coding language and you’ll find it’s a great way to start a fight. To ensure peace and harmony on the development team, a software development governance policy needs to be well defined so programmers know which languages are permissible, and under which conditions alternates can be used. Setting clear guidelines avoids the chaos that can ensue when every program is written in a different language. Furthermore, clear expectations in a software development governance model ensure egos don’t get bruised when a developer’s choice of programming options is limited to those the organization has standardized upon.
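One way to make such a policy concrete is to record it as data rather than folklore. The sketch below is a toy illustration, with all profile names and language pairings invented, of a governance table that defaults to Java and permits approved alternates only under named conditions.

```java
import java.util.Map;

// Toy illustration (all names hypothetical): a governance policy that
// records which languages are permitted, and under which conditions
// an alternate to the Java default may be used.
public class LanguagePolicy {

    // The organization-wide standard.
    static final String DEFAULT_LANGUAGE = "Java";

    // Conditionally approved alternates, keyed by project profile.
    static final Map<String, String> APPROVED_EXCEPTIONS = Map.of(
        "high-parallelism", "Scala",   // exceptional parallelism or list-processing needs
        "mobile",           "Kotlin",  // mobile apps needing recent language features
        "lightweight-cli",  "Go"       // small, dense, constrained deployables
    );

    // Returns the language a team may use for a given project profile;
    // anything not explicitly excepted falls back to the default.
    static String languageFor(String projectProfile) {
        return APPROVED_EXCEPTIONS.getOrDefault(projectProfile, DEFAULT_LANGUAGE);
    }

    public static void main(String[] args) {
        System.out.println(languageFor("mobile"));        // Kotlin
        System.out.println(languageFor("batch-billing")); // Java (the default)
    }
}
```

Whether the policy lives in a wiki page, a linter configuration or a lookup table like this one matters less than the fact that it is written down and has a clear default.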

In this new age of polyglot programming, DevOps adoption and serverless systems, the potential for chaos is a real threat to the wellness of the IT department. Software developers with a host of different languages from which to choose are understandably excited about developing microservices and deploying them into language-agnostic, container-based architectures. But for the long-term health of the IT department, software development governance models should never be agnostic about the languages they permit.



January 30, 2018  11:21 PM

Developers, learn from the iPhone battery glitch

MarkSpritzler Profile: MarkSpritzler

Apple throttles performance on its older iPhones and iPads as their batteries age, and critics charge that the practice forces users to buy new devices. Countries that perceive this as unsavory forced obsolescence have demanded Apple executives explain themselves. Class action lawsuits have been launched, and Apple finds its business practices coming under an unprecedented amount of scrutiny. Apple has always been a lightning rod for both passion and angst, and there have no doubt been times in the past when Apple’s business practices were worthy of the public lashings it took. But when it comes to the controversy over the legacy iPhone battery glitch, the backlash and public derision the company has suffered is not warranted. Still, it is an important situation for developers to keep in mind.

The new iPhone battery update for legacy phones is actually a great feature. It’s a feature we have not only in phones, but also on our desktop and laptop computers. On desktops it’s simply called Power Save. It’s the feature that enables your computer to reduce power consumption to increase the amount of time your battery lasts on a single charge. Windows has it, Linux has it and macOS has it. Even Android phones have it.

Will it slow down your computer? Yes it will. Does it give you less functionality than when you are not in power mode? Yes it does. But it is a powerful feature to help in situations where the battery is about to die.

So how is this new iPhone battery “glitch” any different from the corresponding one that Linux and Windows desktop users already enjoy?

I know what you are thinking: Power Save mode is optional on the desktop, it can be turned on and off, and it is meant to make a single charge last longer rather than to extend the overall life of the battery. And it is only on rare occasions that you would ever use it. That is true. But what if you wanted to make your battery last longer over many charges? Wouldn’t that be a great feature to have too? It is just an extension of the same design model.

Let us say you have a battery that lasts two years and allows 500 charges. Batteries have different ranges, but let’s assume they are all the same for this discussion.

If you have an Android phone and never use Power Save mode, at the end of two years, your phone battery dies. You now have to go get a new phone.

If you have an iPhone and never use Power Save mode and Apple did not code special protection for the battery, at the end of two years, you too will have a dead phone and battery which you now have to replace.

Now, let’s take the special Power Save mode Apple has developed. As the battery gets closer to its two-year end of life, Apple puts it into this special Power Save mode so that the battery and phone now last longer than two years. You are getting a better, longer-lasting phone and reducing any planned obsolescence. How is that a bad thing? It’s really not an iPhone battery glitch.
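Apple has not published its algorithm, but the mechanism described above can be sketched with invented numbers: once measured battery health drops past a threshold, cap peak performance rather than let the device die early or shut down unexpectedly.

```java
// Illustrative sketch only -- Apple's real algorithm is not public,
// and the 80% threshold and half-speed floor here are invented.
public class BatteryThrottle {

    // Given battery health (1.0 = new, 0.0 = dead), return the fraction
    // of peak performance to allow.
    static double performanceCap(double batteryHealth) {
        if (batteryHealth >= 0.8) {
            return 1.0;                      // healthy battery: full speed
        }
        // Below 80% health, scale performance down proportionally,
        // but never drop below half speed.
        return Math.max(0.5, batteryHealth / 0.8);
    }

    public static void main(String[] args) {
        System.out.println(performanceCap(1.0)); // 1.0 -- new battery, no throttling
        System.out.println(performanceCap(0.6)); // partial throttling
        System.out.println(performanceCap(0.2)); // 0.5 -- floor reached
    }
}
```

The point of the sketch is the shape of the curve, not the numbers: performance degrades gracefully as the battery does, instead of the phone failing outright at the two-year mark.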

It is a bad thing in one way and only one way: performance. That is the trade-off Apple chose to make here. We give up some speed near the two-year mark and make the phone last longer. Is it a bad thing that the speed is slower? Is it a great thing that the phone actually lasts longer?

It all depends on your perspective. Extend the battery life at the expense of performance near the end of life of the battery, or keep the battery life as is and cause earlier upgrades — which would you choose? I think that determines what your perspective will be towards Apple.

As software developers, we make these trade-off choices every day. For instance, I am always asking questions like: do I write a single SQL query with many joins that is extremely fast but produces a very complex SQL string, or do I use a couple of simple queries to gather some Java entity objects that are slower than the one query, but much easier to loop through, understand, maintain and write? This happens a lot when writing reports in Jasper. I could write a 300-500 line SQL query inside the Jasper report file, or write 20 lines of code to gather a list of entities I can pass to the report.
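As a hypothetical sketch of that trade-off (the table names, fields and figures are invented), compare a complex SQL fragment with the simpler entity-based approach, shown here against in-memory data for brevity:

```java
import java.util.List;

// Hypothetical sketch of the trade-off described above.
public class ReportTradeoff {

    // Approach 1: push all the work into one complex SQL string.
    // Fast to execute, but hard to read and maintain; in a real
    // Jasper report this could run to hundreds of lines.
    static final String COMPLEX_SQL =
        "SELECT c.name, SUM(o.total) " +
        "FROM customers c JOIN orders o ON o.customer_id = c.id " +
        "GROUP BY c.name HAVING SUM(o.total) > ?";

    // Approach 2: fetch simple entities and do the work in Java.
    // Slower than the database, but easy to write, read and test.
    record Order(String customerName, double total) {}

    static double revenueFor(String customer, List<Order> orders) {
        return orders.stream()
                     .filter(o -> o.customerName().equals(customer))
                     .mapToDouble(Order::total)
                     .sum();
    }

    public static void main(String[] args) {
        List<Order> orders = List.of(
            new Order("Acme", 120.0),
            new Order("Acme", 80.0),
            new Order("Globex", 50.0));
        System.out.println(revenueFor("Acme", orders)); // 200.0
    }
}
```

Neither version is wrong; which one you choose depends on whether raw speed or maintainability matters more for that report, which is exactly the kind of judgment Apple exercised with the battery.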

From my perspective, I would have chosen to make the batteries last longer and therefore allow users to use the devices for longer. Performance is also very subjective; sometimes what appears to be slow might appear very fast to someone else.

But you can’t say that extending the battery life is a form of planned obsolescence or an iPhone battery glitch. And Apple has since said that in an upcoming release of iOS, users will be able to turn the feature on and off, making it exactly like Power Save mode on computers.

January 23, 2018  9:00 PM

Choosing the right container orchestration tool for your project

Daisy.McCarty Profile: Daisy.McCarty

The use of software containers like Docker has been one of the biggest industry trends of the past decade. Hitherto, the focus has been on educating the development community about the capabilities of container technology and how enterprises might use container orchestration tools effectively. But it would appear that an event horizon in container awareness has been crossed: familiarity has increased to the point where organizations have moved beyond simply entertaining the thought of bringing containers into the data center and on to actual adoption. Before companies deploy software into containers, though, the question remains which software container, and which complementary set of container orchestration tools, an organization should adopt. Options such as Swarm, Cloud Foundry and Mesos make strong arguments for themselves, so the container technology landscape is a difficult one to navigate.

Container orchestration with Kubernetes

Of course, Craig McLuckie, CEO of Heptio, is confident in the horse his organization is betting on to create cloud-native applications. “Kubernetes over the last three and a half years has emerged as the standard.” Mark Cavage, VP of Product Development at Oracle, also gives Kubernetes the thumbs up. “It’s the right open source option that abstracts away clouds and infrastructure, giving you a distributed substrate to help you build resilient microservices.”

Kubernetes has been compared favorably with other container orchestration solutions before, with one of its big selling points being the vendor-agnostic nature of its open source availability. McLuckie points out the lack of vendor lock-in as an appealing benefit. Other experts have praised Kubernetes for working well with Docker while requiring only a one-tier architecture. Because of this flexibility, developers who use Kubernetes are less likely to find themselves writing applications to match the expectations of a vendor-driven container orchestration platform, rather than in the way that makes the most sense for what their organization hopes to accomplish.

Simplify operations

McLuckie speaks highly of Kubernetes’ ability to simplify operations with containers that are hermetically sealed, highly predictable and portable units of deployment. But deployment is just where the real work begins. Keeping the containers running and automating a container-based architecture without a hiccup is where Kubernetes shines. “By using a dynamic orchestrator, you can rely on an intelligent system to deal with all of those operational mechanics like scaling, monitoring, and reasoning about your application.”

Increased uptime and resiliency are obvious benefits of having a well-orchestrated container system. And since the self-contained units run the same way on a laptop as they do in the cloud, they can make development and testing easier for transitioning DevOps teams. End users also enjoy the perks of Kubernetes without realizing it, since the orchestration tooling can perform automatic rollout and rollback of services live without impacting traffic, even in the event of a failure of the new version.

Container orchestration and middleware

One interesting point McLuckie makes pertains to how middleware is being “teased apart” as a result of containerization. “Systems like Kubernetes are emerging as a standard way to run a lot of the underlying distributed system components, effectively stitching together a large number of machines that can operate as a single, logical, ideal computing fabric.” As a result, “A lot of the other functionality is getting pushed up into application level libraries that provide much of the immediate integration you need to run those applications efficiently.”

What might the outcome of this fragmentation of middleware be? According to McLuckie, it could open Java apps to a polyglot world by making it simpler to peacefully coexist with other languages. In addition, it would make dependencies easier to manage, even supporting the ability to run something like Cassandra DB on the same core infrastructure and underlying orchestrator as the Java app that relies on that database.

Cutting down on system complexity could be what makes Kubernetes and containerization itself attractive to larger organizations in the long run. “This could address enterprise concerns for governance, risk management, and compliance with a single, consistent, underlying framework.” And enterprises that prefer on-premise or hybrid clouds could use this approach just as readily as those that are fully cloud-reliant.

With container orchestration tools becoming more sophisticated with time, don’t expect the hype around container technology to die down in 2018. It will only intensify, with the focus moving away from education and awareness and instead towards adoption and implementation.

January 12, 2018  5:36 PM

Four wise pieces of advice for women in technology

Daisy.McCarty Profile: Daisy.McCarty

One of my favorite things about interviewing women in technology has been hearing all their helpful tips and insights. Many of these women spent decades in the tech world, moved up the career ladder, innovated in their areas of expertise, started new businesses, and created more opportunities for the next generation of women. Here are some of the takeaways that resonated most.

Tip #1: You can end up in tech from just about anywhere

Tanis Cornell, principal of TJC Consulting, offered this insight to young women and teens, “You don’t have to be an engineer to have a job in technology. What I discovered in my own career was an affinity for absorbing a technical concept, grasping the advantages and disadvantages, and becoming fairly technically adept quickly.” Her educational background in communications ended up translating well into the tech sector, where the ability to speak about complex ideas in terms business decision-makers can understand is a highly valued skill.

Jen Voecks, a former event planner and current CEO of the bride-to-vendor matchmaking service Praulia, found that jumping into technology gave her an advantage as a brand new startup owner. “I learned the ropes one step at a time, front end and back end,” she said. “For a while I was stuck in a learning curve, wearing all the hats and trying to build a product while running a business. Once I got into the ‘developer brain’ mode, I realized that it’s like a puzzle with a lot of pieces and things got a lot easier. But I also learned to appreciate what developers do and how to communicate with devs and engineers.” Now she has enough experience to make smart decisions about hiring specialists to handle the various aspects of her technology needs and can focus her own efforts on strategic growth. Her advice: “Just stay at it. Most women I’ve talked with are experiencing hurdles. We should stick together and push through. We are powerful.”

Tip #2: Communicate calmly and clearly

CeCe Morken, EVP and general manager of the ProConnect Group at Intuit, spoke highly of Raji Arasu, the company’s senior vice president and CTO of platform and core services. “She is an excellent leader, and she leads a lot of men,” Morken said. “She’s always very calm and never changes her demeanor. Raji takes in information and handles it elegantly even in high-stakes situations.” Morken also pointed out that Arasu speaks in a language everyone can understand, translating concepts from software architects to business leaders in a way that makes sense.

Charlene Schwindt, a software business unit manager at Hilti, agreed that being able to customize communication for a given audience is critical for success. “You really need to be able to transition your communication style and tailor it to the level of understanding of your audience,” she explained. “When I’m talking with developers, I can be highly technical. With customer support, I talk about things at a higher, summary level. Business leaders want a conversation that’s results oriented, but I might drill down to more detail if questions are asked.”

Of course, being confident as a communicator doesn’t always mean you have to be right about everything. In fact, it’s completely OK to pivot as you grow. Mary McNeely, Oracle database expert and owner of McNeely Technology Solutions, shared this sage advice. “Don’t be afraid to change your mind. You have to decide something in order to move forward. But once you have more information and time to think, anything could change. You might reach a different conclusion. It’s OK to reconsider your decisions, perspectives, and opinions.” And when you do change your mind, remember to let people know!

Tip #3: Believe in yourself. Seriously.

What would it be like to grow up in a culture that simply accepts that women are awesome? Candy Bernhardt, head of design and UX for Capital One’s auto lending division, recounted her experience of being raised by her grandmother who is from the Philippines. “It’s a very matriarchal society, so I didn’t know any better. I just thought women ruled the world and our opinions mattered. I challenged authority because I thought it was my right.” That pluck and boldness served her well when she landed in a career she thoroughly enjoys.

For those who grew up a little less sure of themselves, it’s not too late to gain the confidence to grasp the brass ring. Meltem Ballan, a data scientist with a PhD in complex systems and brain sciences, encouraged this can-do attitude as well. “Get out in front and show that a female can do it,” she said. “Ask for mentorship and stand for your own rights. Women must support women.” For her part, she found that giving a presentation on short notice, even though she didn’t feel completely ready, was a turning point in her speaking skills. “It’s important to go out and have that moment where we leave our comfort zone. Then our comfort zone gets larger.”

Tip #4: Never stop improving yourself and helping others grow

Julie Hamrick, COO and founder of Ignite Sales, spoke about the value of continuous improvement for success as an entrepreneur. “The harder I work, the luckier I get,” she offered. “At first I had to work hard to make my product good. Now, I am still thinking every day about how to deliver even better results for my customers.”

What about lending a helping hand to other women? How can managers and leaders do better in this area? Morken offered this advice for those who want to be effective mentors: “Focus on developing the person first and then the goals.” It’s important to understand what energizes someone, because that is what fuels growth.

My own final tip is this: Grow your network starting today. There’s a wealth of information and insight available from the women in tech all around you, and they are happy to share. Start leveraging these resources to take you farther than you ever thought possible!

January 6, 2018  12:29 AM

Requirements management and software integration: Why one can’t live without the other

RebeccaDobbin Profile: RebeccaDobbin

With the complexity of products being built today, no one person can understand all the pieces. Yet everyone involved is still responsible for ensuring their work is compatible with their co-workers’ and aligned with the company’s strategy. The only way to keep everyone moving toward the same goal is to have a shared understanding of what’s being built and why. This is where requirements management and software integration come in.

Requirements define what a product should be. Equally important, though, they’re the single shared communication bridge between all disciplines. Everything involved in the pursuit of product development can be tied back to the definition of what you’re building; otherwise there’s misalignment somewhere. The goals of the market-facing teams translate into product requirements, making abstract desires achievable. The efforts of design, development, and testing teams are driven by requirements. And at the end of a project, product documentation, software integration, marketing materials, and the final spec all tie back to requirements, all derived from the same set of goals.

However, the relationship between requirements and everyone else’s work is not always easily visible. There might be degrees of separation, such as many levels of decomposition from market goals down to tasks. Information may be kept in multiple task-specific tools. While a shared understanding — in the form of requirements — is essential for alignment, it’s not sufficient in many cases. Companies with complex, multi-level, cross-tool product data run the risk of creating silos and disconnects from the original product goal.

This is why software integration and requirements management work hand in hand to realize the full potential of collaboration. When managed effectively, requirements are dynamic. Thoughtful integrations facilitate timely communication between those defining the product goals and those building the product, so that changes on either side are visible and become part of continuous decision-making.

Aligning teams with value stream networks

While it’s tempting to think of product delivery as a linear process of sequential steps, in reality it’s much more like a chaotic, organic network. Activities occur in tandem, spanning many parts of the organization. Requirements can change after implementation has begun, and modifications made to one part of the process can have far-reaching impacts on several different teams. It should be no surprise, then, that the root cause of failure in product development can often be traced to an under-connected value stream network.

A value stream is a notion borrowed from lean manufacturing; it’s the sequence of activities an organization undertakes to deliver a customer request, focusing on both the production flow from raw material to end-product, and on the design flow from concept to realization. Counter to our common understanding, product management should be viewed as a value stream network, rather than simply as a value stream, as the process truly consists of many activities that occur in conjunction, overlap, and influence one another. In order to be truly successful, your value stream network must have a high level of connectivity between each node (or team) within your organization.

Given that requirements are central to driving actions and decisions throughout the product lifecycle, key challenges arise when they’re not integrated and easily accessible to all teams throughout the organization. Despite relying on separate, purpose-driven tools, each team must be able to access the same information in order to work cohesively. And that information isn’t static. Because of the dynamic nature of requirements, it’s essential that each team be able to review and absorb any modifications made to the requirement in real time. That’s where integration comes in.

While each organization is unique, our experience has allowed us to uncover several common software integration patterns. In many cases, these patterns can build on one another to create a cohesive network of connection throughout an organization.

When getting started, we recommend that organizations first identify the scenarios that are causing the most immediate pain, and enable the associated integration patterns to ameliorate that pain. Over time, organizations can adopt and deploy more and more of these integration patterns, to work toward a wholly unified and cohesive value stream network.

Requirements management and Agile planning alignment

Often, business analysts and product managers create and manage requirements in a requirements management tool, such as Jama Software or IBM DOORS. If their organization uses agile methods, the development team may expect to see those requirements within an agile planning tool, such as JIRA or CA Agile Planning, where they can manage them and further break them down into user stories for execution.

In this pattern, requirements flow from the requirements tool to the agile planning tool as epics or features. They can then be broken down into user stories or sub-tasks within the agile planning tool. If desired, teams can maintain the relationship between the epic and its subtasks when flowing that data back to the requirements tool.
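
As a sketch, this pattern reduces to mapping one tool's artifact type onto another's while preserving the parent link. The Java below is hypothetical; the class names and identifier formats are invented for illustration and are not any vendor's API.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical model of the flow pattern: a requirement from the requirements
// tool arrives in the Agile planning tool as an epic, which the team then
// decomposes into user stories that retain a link back to the source requirement.
public class RequirementFlow {

    static class Epic {
        final String id;
        final String sourceRequirementId;
        Epic(String id, String sourceRequirementId) {
            this.id = id;
            this.sourceRequirementId = sourceRequirementId;
        }
    }

    static class Story {
        final String id;
        final String epicId;
        Story(String id, String epicId) {
            this.id = id;
            this.epicId = epicId;
        }
    }

    // Requirements tool -> Agile planning tool: the epic records which
    // requirement it came from, so changes can flow back upstream.
    static Epic flowToPlanningTool(String requirementId) {
        return new Epic("EPIC-" + requirementId, requirementId);
    }

    // Inside the planning tool, the epic is broken down into user stories
    // that each point back to their parent epic.
    static List<Story> decompose(Epic epic, int storyCount) {
        List<Story> stories = new ArrayList<>();
        for (int i = 1; i <= storyCount; i++) {
            stories.add(new Story(epic.id + "-S" + i, epic.id));
        }
        return stories;
    }

    public static void main(String[] args) {
        Epic epic = flowToPlanningTool("REQ-42");
        List<Story> stories = decompose(epic, 3);
        System.out.println(epic.sourceRequirementId + " -> " + stories.size() + " stories");
    }
}
```

Because each story carries its epic's id, and each epic carries its source requirement's id, status can flow back to the requirements tool at whatever level of granularity the teams choose.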

Figure 1: Aligning requirements with agile planning.

Software integration and test planning alignment

In order to create their test plans, QA teams must have access to the requirements for each feature. The problem is that requirements and user stories are usually created by business analysts and product managers in a tool such as Jama or IBM DOORS, while QA teams create their test plans, test cases, and test scripts in another set of tools, such as HPE QC or Tricentis Tosca. By flowing requirements from the product team’s requirements tool to the QA team’s testing tool as test cases, organizations are able to keep these two teams aligned, while allowing each to use their own tool of choice. This eliminates the risk of miscommunication or bottlenecks as teams are able to seamlessly flow information on acceptance criteria, test status, and more between each tool.

Figure 2: Test planning alignment.

Tracing requirements

Requirements, epics, and user stories span multiple disciplines and tools in the product development and delivery process. No matter where they’re created and managed, there are many distinct stakeholders who may need access to them. For this reason, “requirements traceability” can be viewed as a composite pattern, composed of several smaller integration patterns.

For example, a project manager may manage project deliverables within a PPM tool, which then flow to a business analyst’s requirements management tool. Once the business analyst has determined the scope and details of the deliverable, that information flows to the developer’s tool as an epic, which is further broken down into user stories and tasks. That epic then flows to the QA team’s testing tool to ensure that the testers understand the requirements that they’re testing. In this way, four separate teams are able to share information and collaborate, all within their own purpose-built tools and practices. By making use of each team’s existing processes and practices, they’re able to eliminate wasted ramp-up time needed to gain access and proficiency in each tool, while allowing each team to use the systems that best facilitate their unique goals.
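
The composite pattern described above boils down to a chain of upstream links. The following Java sketch is hypothetical; the artifact ids and tool roles are invented for illustration.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical traceability sketch: each artifact records the artifact it was
// derived from, so any work item can be traced back to the original deliverable.
public class TraceabilitySketch {

    // artifact id -> id of the artifact it was derived from (null at the root)
    static final Map<String, String> parent = new HashMap<>();

    static String traceToRoot(String id) {
        String current = id;
        while (parent.get(current) != null) {
            current = parent.get(current);
        }
        return current;
    }

    public static void main(String[] args) {
        parent.put("DELIV-7", null);          // PPM tool: project deliverable
        parent.put("REQ-42", "DELIV-7");      // requirements management tool
        parent.put("EPIC-42", "REQ-42");      // Agile planning tool
        parent.put("TC-9", "EPIC-42");        // QA testing tool
        System.out.println(traceToRoot("TC-9")); // prints DELIV-7
    }
}
```

In practice each link lives in a different tool and the integration layer maintains the map; the walk itself is what makes an audit trail from a test case back to the original deliverable possible.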

Figure 3: The value stream network in action.

While there’s always been a need for integration, managing complex, dynamic requirements makes it all the more necessary. In order to demonstrate product compliance and auditability, practitioners must relate their features, test cases, and other work items back to the original compliance requirements.

Without integration, practitioners must spend valuable time on status calls, communication via e-mail, and on double-entry between systems, leaving them no time to do the work they were actually hired to do: the design, development, and quality assurance tasks needed to create and ship their product.

Harnessing requirements management and software integration

Integration serves as the glue between these separate disciplines, people, and tools. And when the integrations are domain-aware, they can do seemingly magical things like translate between tools that have conceptual and technical mismatches. Smart integrations are able to translate different work item types and field values between tools in order to facilitate communication between teams, regardless of differences in tool architecture.

Recognizing the inherent network that connects teams in an organization is the first step to enhancing product development. Once you’re able to identify the key areas where teams must share information, you’ll be able to navigate areas in which communication breakdowns are occurring. By implementing key software integration patterns, you’ll increase connectivity between each node of your organizational network. And as connectivity increases and, with it, each team’s ability to access key requirements, you’ll see enhanced success within your organization.

Co-author Rebecca Dobbin is a Product Analyst with Tasktop Technologies (@rebecca__dobbin)
Co-author Robin Calhoun is a Product Manager for Jama Software

December 13, 2017  12:04 AM

Is there a hidden threat embedded in the Management Engine of your Intel chip?

George Lawton Profile: George Lawton

A couple of years ago, Intel invited me to a press luncheon to talk about how great their new chips were. They had new chips that were faster and used less power, and they were selling like hot cakes. The food was good and the new machines were smaller and ran a few minutes longer on batteries than last year’s models. Almost in passing I heard one of their product managers describing a secret operating system buried on enterprise computers, called the Management Engine (ME). They called it a feature, and all I could see was a hidden threat.

They said it only ran on “enterprise computers,” and I remember sleeping a little better at night imagining that this little gremlin did not run inside my consumer laptop at the time. I just found out they have a new test for this hidden threat that can determine if your computer is infested with this incurable disease. Yep, I have it. You probably have it too, along with most of the cloud servers keeping trillions of dollars of enterprise apps secure.

They have also released a so-called cure for the symptoms, which is thus far only available from Lenovo. But it is not really a cure in the way an antibiotic eradicates an infection. It’s more like one of those $50,000-a-year drug cocktails that manage AIDS but leave the host at risk of passing it on to others. The fundamental problem is that Intel has thus far not shared much about how this hidden threat works, or whether it can in fact be eradicated. They have just patched some of the vulnerabilities, which for now are probably not a great danger to cloud apps, since someone must physically insert a USB drive to compromise them.

All systems are vulnerable

The fundamental problem in other words is not the news that someone found a vulnerability and patched it. The problem is that Intel has relied on a very flawed theory that something running on virtually every enterprise and cloud server out there is protected because no one outside of Intel knows how it works. This was the same theory that the utility industry relied upon until the US and Israel figured out how Stuxnet could be used to take out the Iranian nuclear program and perhaps an Iranian power plant. But once this attack was shared, all the power infrastructure in the world became vulnerable to Stuxnet’s progeny.

I am sure Intel’s greatest minds did a great job of identifying and mitigating every vulnerability they could dream up at the time. So did the folks who developed SSL, and yet none of the craftiest minds in the security industry recognized that hidden threat until after the code had been in the public domain for two years.

One of the key developments over the last couple of years has been a move towards DevSecOps, which assumes that all code has vulnerabilities; it’s just that no one has figured out how to exploit them at the time of deployment. Therefore, a mechanism must be in place to find and update these systems quickly, automatically, and smoothly when a new patch is required. DevSecOps breaks down when it relies on third parties like Lenovo, Dell, and HP to tune the update to their particular configurations.

It’s not clear how bad this whole episode will end up being for Intel. Thus far, they have done a pretty good PR job of suggesting that these attacks requiring physical access are not a big deal. This whole thing might blow over by the time they release a new series of chips that leave the little demon out.

The keys to the hidden threat

But then again, the final impact of Intel’s foray into security by obscurity will have to get past the test of the NSA and Joe. The NSA, because it seems credible that Intel decided it was important to share such sensitive details with the agency to protect American cybersecurity. We all know that the NSA has the best resources and commitment to protecting these secrets from foreign states, angry contractors, and Wikileaks, so they obviously will never let the secret get out.

No, the real threat is probably someone like Joe. ME runs in a kind of always-on mode that allows it to communicate on a network even when the power is off, as long as the computer is plugged in. It is protected by an encryption key. I would like to imagine that the only key to all the Intel computers in the world is locked inside a secret vault with laser beams protecting it from Mission Impossible-style attacks.

It would not be surprising if the reality were much more mundane. It’s probably on a little security token that Joe took home one day to debug a few components of the ME server. Joe is probably well meaning, but made a copy of this key one day when management was pushing him to meet an unrealistic software delivery target. Joe’s a good guy and would never do anything deliberately to hurt the company, much less all Intel users around the world.

Unfortunately for the rest of us, Joe has been trading Bitcoins lately. No one will come looking for the key to all the Intel computers when they penetrate his workstation trying to steal his Bitcoin wallet. But some nefarious hacker may see this discovery as a divine omen of his destiny to create a business around penetrating the most sensitive cloud servers in the world by exploiting this hidden threat. And maybe, just maybe, if Joe happens to be reading this, he’ll have the foresight to delete the keys before it’s too late.

November 28, 2017  2:20 AM

MVC 1.0: The perfect fit for microservice admin tools

cameronmcnz Cameron McKenzie Profile: cameronmcnz

The following is a transcript of the conversation TheServerSide’s Cameron McKenzie had with Ivar Grimstad about hot topics in the Java ecosystem, with an emphasis on MVC 1.0 and the new security specification, JSR-375.

Getting people talking about MVC 1.0 and JSR-375

Cameron McKenzie: TheServerSide was really lucky to catch up with Ivar Grimstad earlier this year. These days he’s evangelizing a couple of what I think are pretty important topics. One is the new MVC framework, and the other one is Java security.

The interesting thing, though, is that despite how important these specifications are, MVC and JSR-375 just don’t quite get the headlines like, say, microservices and containers do. So I wanted to know from Ivar, what are the big things that people need to know about the new MVC specification and JSR-375.

Ivar Grimstad: If I take MVC first, we had a lot of attention around that a couple of years ago when the spec was a part of the EE platform. And there was some noise about it when Oracle took it out. And then, happily, I was fortunate to be in the position that I could be the lead of that specification, so I took it over from Oracle and kept working on it. I also brought on Christian Kaltepoth. Since we were the two most active members of that spec, we were the best guys to take it further.

And there has been a little bit of silence around MVC, and we don’t get much attention anymore. The community really wanted MVC when it started and then they kind of moved away towards microservices and containers.

So while we are kind of in the backwater of the cool technology, MVC is still something I think will be used. We get a lot of community responses when I tweet or blog or say anything about it. We have a lot of contributors on the mailing list, and it’s doing fine.

Cameron McKenzie: Now, one of the things about MVC 1.0 is the fact that it seems to work really well with microservices. And I can see it being used heavily to create UIs for container-based applications. Is that where you see the focus being?

Ivar Grimstad: I also think it’s going to be used a lot in more enterprise, in-house applications, but that’s not the sexy topic that attracts the audience at conferences.

MVC 1.0 and JAX-RS

Cameron McKenzie: So in your eyes, what is it that makes MVC 1.0 so special?

Ivar Grimstad: Well, the most important thing, the way I see it, is that it’s built on top of JAX-RS, so if you’re using JAX-RS to create your REST endpoints, the transition to also add some web interfaces to your applications becomes easy. Most REST applications also have some kind of admin tool going on along with it. With MVC 1.0 we can actually build on the exact same technology used by the REST application, because with MVC we just add some flavors to JAX-RS and then we’re good to go.
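
Grimstad's point is easy to see in code. In MVC 1.0, a controller is an ordinary JAX-RS resource with the @Controller annotation added; instead of returning an entity, the method returns the name of a view. The following is a minimal sketch, assuming a Java EE 8 container with an MVC 1.0 implementation (such as the Ozark reference implementation) on the classpath; the path and view name are illustrative.

```java
import javax.inject.Inject;
import javax.mvc.Models;
import javax.mvc.annotation.Controller;
import javax.ws.rs.GET;
import javax.ws.rs.Path;

// A JAX-RS resource becomes an MVC controller just by adding @Controller.
// Instead of returning an entity, the method returns the name of a view.
@Controller
@Path("admin")
public class AdminController {

    @Inject
    private Models models; // MVC's map of values exposed to the view

    @GET
    public String dashboard() {
        models.put("status", "all services healthy"); // illustrative value
        return "dashboard.jsp"; // resolved under WEB-INF/views by default
    }
}
```

The REST endpoints and this admin UI can live in the same deployment, sharing the same JAX-RS application configuration, which is exactly the admin-tool scenario Grimstad describes.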

Cameron McKenzie: So is MVC the new UI framework for container-based applications?

Ivar Grimstad: Definitely. I mean, if you’re creating a containerized service that also has some kind of UI to it, it makes sense to use MVC. If you have developers that are on JAX-RS platform and know Java EE and you’re building on that infrastructure, I see MVC as a very good fit there.

Cameron McKenzie: Now, you are also an expert on JSR-375, the new security API that’s going into Java EE. What can you tell us about that?

Ivar Grimstad: This is a brand new security API for Java EE 8.

I think it’s an important specification because it fills some of the gaps left by previous versions. We introduce a common terminology, so we are all talking about the same thing when we talk about security, for example the authentication mechanism. We also have more application developer-managed support. So you can easily add security with annotations, and you don’t need to do any container- or vendor-specific configuration to get it up and running.
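
For example, with JSR-375 an HTTP Basic authentication mechanism backed by a database identity store can be declared entirely in application code. The following is a minimal sketch, assuming a Java EE 8 server; the realm name, data source lookup, and queries are illustrative values.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.security.enterprise.authentication.mechanism.http.BasicAuthenticationMechanismDefinition;
import javax.security.enterprise.identitystore.DatabaseIdentityStoreDefinition;

// JSR-375: security declared in application code via annotations,
// with no container- or vendor-specific configuration required.
@BasicAuthenticationMechanismDefinition(realmName = "adminRealm")
@DatabaseIdentityStoreDefinition(
    dataSourceLookup = "java:comp/DefaultDataSource",
    callerQuery = "select password from users where username = ?",
    groupsQuery = "select groups from user_groups where username = ?"
)
@ApplicationScoped
public class SecurityConfig {
    // No code needed: the annotations alone activate HTTP Basic authentication
    // backed by a database identity store.
}
```

Because the configuration travels with the application rather than the container, the same archive behaves identically on any compliant server, which is the portability point Grimstad makes below about microservices.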

Standardized security with JSR-375

Cameron McKenzie: Now, when I read the JSR-375 spec, I kinda say to myself, you know, “Really? Have we not standardized a lot of this stuff already?” I guess a lot of the stuff like custom user registry APIs and how we connect to user repositories, stuff that’s been managed by the vendor in the past. So the developer really hasn’t had to think about it. But, yeah, I mean, do you not get that impression, “Jeez, how did we get to 2017 and not have this stuff standardized already?”

Ivar Grimstad: Yeah, that’s true. And we have the same feeling. But now it’s there, and that’s a good thing. And it’s definitely a good foundation to build upon.

Cameron McKenzie: So what is it about JSR-375, the Java security spec 1.0, that makes it so conducive to working with microservices?

Ivar Grimstad: You do the security in the application, so you don’t need to configure it from the outside. The security configuration is contained in your application itself.

Cameron McKenzie: So what are the big topics you see going forward into 2018?

Ivar Grimstad: Since I’m moving around in the Java EE world, I think that one of the main topics we are gonna discuss is the Java EE move to the Eclipse Foundation. And there’s also a lot of discussion already on Twitter about the naming, because they released the name for it, Eclipse Enterprise for Java, and people of course have opinions about that. So I think that’s gonna be discussed a lot.

Java: A curse or a blessing

Cameron McKenzie: Now, here is a question I have been asking a number of people lately. It’s this: looking back over the past six or seven years, do you think being the steward of the Java platform has been a blessing or a curse for Oracle?

Ivar Grimstad: I think they are making big money on Java, so I think it’s been pretty good for them. I don’t think it’s been a curse. I think the handling of EE 8 in 2016 was not good, and we saw the community react to that with the Java EE Guardians and the MicroProfile initiative, which grew out of that. But the turn they have now taken to open-source things, like moving NetBeans to Apache and EE to the Eclipse Foundation, and also open-sourcing more of the JDK tooling, is a step in the right direction. I think it’s gonna get a positive reception.
