If you’ve been in software more than a few years, you have probably heard the announcement “from now on, all programming will be in (new_language)”, yet never heard any plans for the conversion. The idea is naive (do we rewrite all our existing applications?) yet all too common.
Process changes are worse. I once had a manager declare that we would start doing ambiguity reviews for documents. From what I can tell, no one actually did the review. Ever. Not even the manager championing the change as an example.
When I heard that GeePaw Hill was going to do a keynote on change for Agile&Beyond this week, I was encouraged.
GeePaw writes code. As a programmer’s programmer, he has managed to stay relevant in a 40+ year career. He blew me away.
Here are a few of his ideas. Continued »
Twenty years ago, when a change program came in, it came in a three-ring binder. Today it is more likely to be a Confluence page or a division-wide email. No matter the delivery format, they all seem to have about the same actual chance of making a lasting impact, somewhere between slim and none. I’m not trying to be a downer here. The effort to create the new program is usually noble. Most of these programs address real problems and organizational pain. We just can’t figure out how to make them stick.
Today I’ll talk about how to get there. Continued »
Today I’m going to tell you a secret about testing tooling, a sort of truth. It’s a kind of truth like the knowledge a partner is cheating. You need to know it, but you’d rather not, and sometimes you’ll wish you could go back to ignorance.
It’s about those automated tests that one tester is writing, over in the corner. You know, the ones they make for every story, the ones that may or may not run under CI.
Your testers are skipping a ton of tests.
That is, of the list of flows to automate, there are a bunch they are … Just. Not. Doing.
And they are getting away with it. Continued »
Yesterday, on the twitters, I had this conversation about communication with Michael Bolton:
It’s a simple answer. It is an easy answer.
And yet, when the tester asks “What exactly do you mean by soak testing?” they are likely to get non-answers. The manager may at first act like the answer is obvious, that everyone knows what it is. Another common response is for them to say “You know, soak testing”, with a slight laugh, making it socially awkward for you to say “Of course I don’t know, that is why I asked.” If you persist, the manager may shift to saying that you, the tester, are the expert. If they are willing to admit they don’t know either, you get to go to the next level. At some point, someone becomes afraid of you talking to “their” customer, and the communication stops.
After twenty years of doing this and helping others do it, I have developed a handful of tricks.
Read on, dear reader. Read on.
If you want to get better at software testing, you might start out looking for a test maturity model, maybe the one from the Illinois Institute of Technology that became the TMMi.
Wait! Before you click that link, stick around. I’ve got something better today.
Today I’m going to propose a way to think about tester maturity. One you can use to evaluate candidates, yourself, and yes, the overall culture of the organization. It is a model, which means it is a generalization. All generalizations have exceptions. All generalizations are wrong. Still, if it can help you test better, interview better, hire better, then I would suggest that less wrong is better.
Here’s my proposal: A test maturity model that is less wrong. Continued »
This week I was at SauceCon, the annual conference for Sauce Labs. Sauce provides the grid (and cloud of mobile devices if you want them) to run Selenium scripts on. Listening to a speaker talk about “testing” without drawing a distinction between regression testing and story-testing, I put out this tweet:
I know, I know, there are all kinds of testing, not just those two, how dare I forget about performance and security and so on. Here I am, being terrible, “just” talking about “functional” testing. Okay, fine, but not my point.
Today I just want to make two tiny distinctions, and explain why they are important.
It seems like every time we do large-scale test tooling, we also end up writing a data comparator program. That is, a program to compare two different things to determine if they are the same in the ways that matter. The middle step, where we “zero out” the differences that do not matter, is something I call data swizzling. No, that’s not a typo. Sadly, when I talk about data swizzling, most people think I am talking about data on how to cook a steak.
This problem of how to compare the “different” to determine if the differences “matter” turns out to be an incredibly common problem in software testing. Without the terms and concepts, teams generally end up inventing their own way of doing it, writing code that eventually becomes a gnarly mess. They reinvent the wheel.
Here’s the basic use case for a comparator, and how to do it.
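To make the idea concrete, here is a minimal sketch of a comparator with a swizzling step. The record layout and field names (`created_at`, `request_id`) are hypothetical examples, not from any particular project: the point is that ignorable fields get zeroed out before the comparison, so only the differences that matter are reported.

```python
# Fields whose differences never matter for this comparison (hypothetical names).
IGNORED_FIELDS = {"created_at", "request_id"}

def swizzle(record, ignored=IGNORED_FIELDS):
    """Return a copy of the record with the ignorable fields zeroed out."""
    return {key: (None if key in ignored else value)
            for key, value in record.items()}

def compare(expected, actual, ignored=IGNORED_FIELDS):
    """Report only the fields that differ in ways that matter."""
    left, right = swizzle(expected, ignored), swizzle(actual, ignored)
    keys = left.keys() | right.keys()
    return {key: (left.get(key), right.get(key))
            for key in keys if left.get(key) != right.get(key)}

old = {"name": "Matt", "total": 10, "created_at": "2019-01-01T00:00:00"}
new = {"name": "Matt", "total": 12, "created_at": "2019-05-04T09:30:00"}
print(compare(old, new))  # {'total': (10, 12)} -- the timestamp noise is swizzled away
```

In real tooling the swizzle step tends to grow rules (regexes for generated ids, tolerances for floats), which is exactly where the gnarly mess starts if the comparator and the swizzler are not kept as separate, named steps.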
To a programmer, DevOps typically means automating operations with build/deploy pipelines. I find that definition unsatisfying. The “right” answer, that DevOps is a culture change toward collaboration, isn’t much better. That second definition is vague enough that nearly anything can be called DevOps. It is also vague enough that people can fundamentally disagree on premises and conclusions, talk “past” each other, and think they agree. One term for this is shallow agreement, and it’s terrible.
Eleven years since DevOps became big enough to have its own conference … we still don’t know what it means.
Today I’d like to give up on defining DevOps. Instead, I’ll skip past it and introduce something even better: BizDevTestSecHRPRMarketingOpsOps.
That’s not a joke. Or, at least, it is not entirely a joke.
Let me tell you about it, starting with DevOps, but going very quickly.
When we started to talk about the cloud, twenty years ago, it was a picture on a napkin. We would draw one physical, concrete system, then an arrow, then a cloud. The cloud represented “the internet.” Its great power was that it was not concrete.
We did not know where the internet was. We did not know how it worked — and that was a good thing.
For the most part, though, we rented servers in colocation facilities. These servers had IP addresses which would be listed in a lookup table called the Domain Name System, or DNS. When you went to my website, www.xndev.com, your browser would do the IP lookup in DNS, translate www.xndev.com into 22.214.171.124, then go to that web page. To the casual Yahoo! searcher, the computers were in the cloud. The rest of us knew all about the Linux server in the data center. We had to choose between spending too much money on computer power, or occasionally getting overwhelmed.
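That lookup step is easy to see for yourself. Here is a tiny sketch using Python’s standard library; it resolves `localhost` so it works offline, but you could swap in a real name like www.xndev.com to query actual DNS.

```python
import socket

# DNS lookup: translate a hostname into an IP address, the same step a
# browser performs before connecting. "localhost" keeps the sketch offline;
# a public hostname would go out to real DNS servers.
ip_address = socket.gethostbyname("localhost")
print(ip_address)  # 127.0.0.1
```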
That changed ten years ago with EC2. Suddenly the vendor claimed we could spin up as many servers as we needed. Run one server most of the time, then auto-scale when your company hit the front page of The Wall Street Journal. That was the promise, at least.
Today we have Kubernetes, an open source cluster manager. Kubernetes takes the mystery out of auto-scaling, showing that it is more art than science.
Eleven years ago I co-organized the first Workshop On Technical Debt with Steve Poling; Michael Kelly was our facilitator. Since then I have published a cover story for Better Software Magazine on the topic and continued to work in the field. In my consulting and contract assignments, I hear the term constantly. Sometimes management listens; sometimes they don’t. Programmers might slag the work done by the previous programmers. Sometimes the programmers are talking about their own work.
In all that time, I have noticed something startling.
Whenever people are talking about “technical debt”, there is invariably a programming skill issue involved. Continued »