When I look through local software company lists and job advertisements, it’s easy to make a few generalizations. There are big companies that have been around a long time; they are stable, and the people working there don’t have much to worry about. On the other side are very small, very new companies: start-ups. Working at a start-up is a bit of a roller coaster; projects change faster than we can keep up with, and the jobs are unstable.
The reality is that there is a spectrum. Where you land on the spectrum will affect your pay, the technology you’re working with, the culture, and what you get out of the deal.
Let’s take a closer look.
More than a decade ago, at the cubicle to my right, I heard an argument between a project manager and a programmer. The project manager was saying that “the steering committee had decided” that new functionality “needed to be in this release” and the programmer had to figure out how to do it without moving the date. I believe the conversation ended with the PM saying “deal with it” and walking away.
Fast-forward two or three years, and I am a newly minted project manager. I’m the powerful one now, right? At least I had a different attitude than the PM from that earlier story: I wanted to fix things, to do right by the technical staff. Having been a programmer myself until something like fifteen minutes ago, and having had a personal hand in the process documents for the IT department, I could do a lot more than push people around. Pushing people around wasn’t my way anyway; I had credibility, and I knew whether a request would take five minutes, five weeks, or wasn’t defined well enough to estimate.
On one of my earliest projects, I am paired with that same programmer. As the project starts to wind down, the programmer comes to my desk and explains that his team is late. “We’re not going to hit the deadline,” he says. “We’ll likely miss it by two weeks, maybe four. You’ll have to explain this to the steering committee. Good luck.”
What. Just. Happened. Here?
On February 20th of 2014, HP published a patent (originally filed in 2012) for implementing performance tests in an environment using continuous delivery. The content of the patent is pretty general: there are definitions of benchmarking, explanations of tests run in parallel against multiple systems, and mentions of test repositories and an engine that drives everything.
Software patents are controversial, especially the patents that don’t specifically describe implementation and integration.
Let’s see how far-reaching, and maybe overbearing, this one is.
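To make the patent’s general idea concrete, here is a minimal sketch, assuming the core claim is “run the same benchmark against multiple systems in parallel, with a driving engine collecting results.” The host names, the results file, and the timed no-op are all invented placeholders, not anything from the patent text.

```shell
#!/bin/sh
# Hypothetical sketch of the patent's core idea: a driver ("engine") runs
# the same benchmark against several systems in parallel and gathers results.
# Host names and the "work" being timed are made-up placeholders.

: > results.txt                  # start with an empty results file

run_benchmark() {
    host=$1
    # Stand-in for a real load test against $host; here we just time a no-op.
    start=$(date +%s)
    true                         # pretend work
    end=$(date +%s)
    echo "$host elapsed=$((end - start))s" >> results.txt
}

for host in sys-a sys-b sys-c; do
    run_benchmark "$host" &      # one parallel run per system under test
done
wait                             # the engine waits for every run to finish

sort results.txt                 # report, ordered by system name
```

The interesting part is how little is here: a loop, background jobs, and `wait`. That is roughly why patents this general draw criticism.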
One of my daily frustrations is reading through technical documentation. Sometimes it is for my own product (and thus internal). Much of the time it involves materials on the web, such as when researching an open source tool. There is a brisk trade in the book market today for technology titles, especially those related to tools, to help us make sense of what we use each day. A lot of the technical documentation available is written by programmers or engineers. I don’t mean to criticize, but the fact is, much of what is written is, at best, challenging to understand.
I am often called upon to write technical documentation, so I find that I become the person I complain about. It’s easy to put commands or instructions together as though they were a script, and leave it at that. After all, the people who will be reading my docs are smart people. They’ll figure it out, right?
ITWorld recently published an article about a study claiming that refactoring usually doesn’t result in improved code quality. The authors of the study say, “Even though some of those studies claim that refactoring improves the quality of software, most of them did not provide quantitative evidence.”
There are other claims in the study that are a little concerning: refactoring doesn’t make code easier to change, doesn’t make it run faster, and refactored code isn’t more resource-efficient.
My first instinct is to question the premise and techniques of the study.
But, does the study even matter?
I’ve been working with technology professionally now for twenty-four years, and in all that time, there is one “killer app” that just will not go away. This app may, frankly, be causing stress that is bad for our overall health. I am, of course, talking about email.
I have email for business, for personal endeavors, and for community activities. My apps send me email to tell me I’m being a slacker about something I should be doing. I have aspired to live the “Inbox Zero” life, and have made several attempts, yet somehow it always comes back, layer upon layer of sediment, to the point where, some days, opening my inbox fills me with dread.
I’m writing this post from a fresh installation of Windows 8.1 on a MacBook Pro. I spent about half of Saturday researching, downloading files, installing things, and waiting. There was a lot of waiting.
Systems administration isn’t my idea of a good time, so this was a little tedious for me. There were some lessons learned, and if gathering all of this in one place helps one person, then that’s great.
Here we go.
Well, sort of. To the extent that fail fast enables people to experiment, to try new things, to learn from them instead of continuing to bang their heads against the wall in the hope of getting through — sure. Fail fast is good.
And we can do better.
In the ITKE Discussion forums, a question was posed that I felt compelled to answer.
“Can someone explain the benefits of my life after completing my Java courses?”
I am no stranger to coursework related to programming. In the mid-1990s, I earned a Certificate for UNIX Programming through the University of California at Santa Cruz. Though I completed the Certificate program, much of the advanced C language work I focused on was never used again. The shell scripting I learned, however, I still use today. Why? The shell scripting skills had an immediate and tangible effect on my work.
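As an illustration of the kind of small, everyday shell scripting that sticks, here is a hypothetical example; the build log and its contents are fabricated for the sketch.

```shell
#!/bin/sh
# A made-up example of everyday shell scripting: summarize a build log.
# The log file and its contents are invented for illustration.
printf 'build ok\nbuild FAILED\nbuild ok\nbuild ok\n' > build.log

# How many builds succeeded, and how many failed?
grep -c 'ok' build.log       # prints 3
grep -c 'FAILED' build.log   # prints 1
```

Two lines of `grep` answer a question that would otherwise mean scrolling through a log by hand; that immediate payoff is what keeps a skill in use.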
It’s great to learn a new language, or get some time experimenting with a new framework, but experimenting alone isn’t going to help us keep those skills. In fact, the speed with which we lose skills we develop but don’t use can be frighteningly fast. What can we do to make sure that we actually use what we are learning?
Last week I went to snowy Columbus, OH for a regional software / quality assurance conference called QA Or The Highway. QA Or The Highway is a one-day conference; this year it was preceded by a one-day workshop on teaching software test design.
The workshop and conference were fantastic, but the closing keynote was particularly interesting to me. Gareth Bowles spoke about his experience working at Netflix and why they do a lot of testing in production rather than the pre-release testing we normally see.
Let’s take a closer look at his keynote.