There was the source. The master copy of the master program, the omniscient stack of ones and zeroes that flitted across arrays of wires and gates. It effortlessly found space in the heap, and provisioned stacks if necessary. What language was its original dialect? It didn't matter. It just functioned.
And just as there was Adam, there was the Systems Engineer. The Holy Programmer Analyst III who held impressive certifications and had trekked to the high summit of Big Iron to bend the mainframe to his or her will.
Well, not really. Software has always been built by, with, around, and under teams of people. Whether that means two people or four, most serious efforts that go somewhere are like the Beatles: if it were just John, it wouldn't be the Beatles.
To facilitate the orderly delivery of work product, different systems and schemes have made their mark on the industry: Waterfall, Six Sigma, ISO-WHOCARES, CRC cards, Microsoft Project, and so forth. A veritable rogue's gallery of tools and methods that have left some imprint on the way we plan and execute things. But one type of tool has endured above all the rest, by many miles: source control and versioning. For those of you who don't know what I'm talking about, this brief paragraph does a pretty decent job of explaining.
I'm not going to go into the history of the subject, but I've been around long enough to have endured Microsoft Visual SourceSafe, which was only terrible in hindsight, after I'd used a better product. Over the years, I've used Bitbucket with Git, Git locally, Subversion, and (mostly) Git with GitHub. I have my preferences, but I try to be less dogmatic in my old age.
I was recently part of a discussion at work that made me think about the importance of using Git properly. Why do we do it? Why is it that I just KNOW, by instinct, that badly implemented source control habits are worse than none at all? Nobody told me that; I just know it like I know my eye color.
It also made me think of the many vainglorious efforts I've been part of (or spearheaded) to herd engineers into a proper set of practices when collaborating, specifically in the way source control is implemented. Why it sometimes succeeded, why it sometimes failed, and where the happy middle lies. A few things came to mind:
The canonical version of your product should always be retrievable by cloning the master/main branch from the remote. It's that simple. It doesn't matter what cloud platform you use, what fat client, what React lib-of-the-day you fancy.
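To make that concrete: on a typical Node project, proving the claim should take three commands and nothing else. (The repo URL and build script name here are hypothetical.)

```sh
# Clone, install locked dependencies, build. If this doesn't produce
# the canonical product, something is wrong with the repo, not the reader.
git clone git@github.com:acme/widget.git
cd widget
npm ci
npm run build:prod
```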
Consistency is important for many reasons. It makes things repeatable. It makes things learnable for new people and future visitors. It makes issues easier to find because there are no magic boxes waiting to explode. It dispels the haunted forest.
This is one area where it pays to have a little bit more structure and authority. You have to mandate the practice, and hopefully you are a) in a position to do so and b) have smart management that will back you up when the howling starts.
The worst thing a process can do is get in the way of practical software development. If a tool or process is cumbersome, needs context switching, has arcane access control barriers, or basically becomes a nuisance without apparent value, then it's back to the drawing board. The versioning tool should be almost invisible, as transparent as air and just as mission critical to the survival of the project.
The best way to do this, in my experience, has been to provide a raft of scripts that perform complex tasks without sending anyone to endless man pages. Node is great at this, and you can use clever mnemonics to categorize your scripts: build:dev, build:prod, build:qa, and so on. They all build, but for different contexts. You have just saved every member of the team a few minutes each day. Minutes they would have spent running the subcommands in isolation, or making their own custom scripts that will be obsolete when the company changes its server topology.
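As a sketch, the scripts block of a package.json might look like this (the webpack configs are placeholders for whatever your build actually runs):

```json
{
  "scripts": {
    "build:dev": "webpack --config webpack.dev.js",
    "build:qa": "webpack --config webpack.qa.js",
    "build:prod": "webpack --config webpack.prod.js"
  }
}
```

Now npm run build:qa means the same thing on every machine, and the knowledge lives in the repo instead of in somebody's head.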
Git is wonderful. It works. It's been honed in the trenches, and knows its way around a blade. It can cut through difficult merges like a dancer, weaving diffs and moving methods and declarations around almost magically. Even when it requires manual intervention it isn't a total chore. It's all text, it's all local, and 99.9999% of the time nothing is lost forever.
Unless it isn't used properly, but we'll get to that.
When I say it's seen things you people wouldn't believe, I mean it. It has been used to manage the Linux kernel, which is a bit like saying "it can fly a paper airplane in a tornado."
I'll admit it, I've been guilty of commit-spamming. It's paranoia mostly, a relic of the Cold War and a Gen X neurosis. But if you're committing to the feature branch you're working on, and you haven't opened a pull request, committing is fine. However, broken code should never get merged... but that is the responsibility of your review/static analysis/unit testing practice.
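And if the spam bothers anyone, Git will happily let you tidy a branch before the pull request goes up. A minimal sketch (the branch name and commit messages are hypothetical):

```sh
# Commit early and often on your own feature branch...
git checkout -b feature/report-export
git commit -am "WIP: rough in the CSV writer"
git commit -am "WIP: paranoia checkpoint"

# ...then squash the noise into one coherent commit before opening the PR.
git fetch origin
git rebase -i origin/main   # mark the WIP commits as "squash" in the editor
```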
It's easy to say, meh, I don't REALLY need to do thorough testing of my feature. It'll get caught in QA.
All this does is slow everything down by adding more round trips. It also taints the main/master branch if QA finds problems with your implementation. You've essentially broken the build and made more work for other people. And people aren't stupid; they know what you just did.
Give them quality code, and in the words of a former colleague: "Dare them to break it."
Whether it's automated via PMD or an AI, or done by a human, code reviews are fundamental to the learning process. Even if you think you know everything (you don't) and your code is as pure as the driven snow (it isn't), new projects and new teams bring a new set of contexts, each one an opportunity to learn something new.
This brings me to another point regarding reviews and static analysis: I've encountered engineers who see no value in these efforts. They are viewed (by a minority, I might add) as an afterthought, a bureaucratic checkbox that can be bypassed if the schedule is tight.
I completely disagree.
It's when the schedule IS tight that reviews are at their most effective. In those instances, code can be rushed. Solutions can be shorn of functionality, or worse, new functionality added that wasn't initially in scope. It's in these desperate moments that a second set of eyes can prevent disaster.
Keep things ridiculously straightforward and don't saddle yourself with fancy tools. My biggest recommendation is to automate as much of this as possible. If developers use disparate tools, make sure you know what plugins or extensions they need to grab. Create scripts, leverage the IDE's ability to extend itself, and just make it easy on everyone. PR templates are nice, but don't overburden people with paperwork.
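One cheap piece of automation in that spirit is a Git hook that runs the linter and the tests before anything leaves the machine. A minimal sketch, assuming your package.json defines lint and test scripts:

```sh
#!/bin/sh
# Save as .git/hooks/pre-push and make it executable (chmod +x).
# Blocks the push if lint or tests fail; nobody has to remember to run them.
npm run lint && npm test || {
  echo "Lint or tests failed; push aborted." >&2
  exit 1
}
```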
Let's try a hypothetical. We have Team One, which consists of three developers: Anne, Brian and Charlie. This team is managed by Donna. They are in the middle of a big feature push, and a sprint has just kicked off. Each person has been assigned stories that have an equitable total point value.
Near the end of the sprint, there has been some pushback on Feature 1.1. This is currently assigned to Anne, but she has her hands full and has delivered the original story per the requirements. Brian's work is winding down, and although it looks like this feature will extend into the next sprint, he's willing to take on the new functionality. Teamwork makes the dream work, right? Yeah, you can strangle me later.
While Brian is working on this new piece for Feature 1.1, it comes to light that there was a missing edge case that is more of a "non-zero chance of catastrophic failure" case. Anne quickly gets to work to cover it. Now, since they are both technically working on the same feature with the same codebase, how would this work? Branches. Anne could opt to spawn off of the feature branch as a hotfix and merge when she's done. Git should handle the merge, and whether or not Brian is done doesn't matter. Even if the merge is manual, it shouldn't be too tough.
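In Git terms, the dance might look something like this (branch names hypothetical):

```sh
# Anne: branch off the shared feature branch to cover the edge case
git checkout feature/1.1
git pull
git checkout -b hotfix/1.1-edge-case

# ...she edits, tests, and commits the fix...
git commit -am "Cover the catastrophic-failure edge case"

# Anne: fold the fix back into the feature branch and publish it
git checkout feature/1.1
git merge hotfix/1.1-edge-case
git push origin feature/1.1

# Brian: pick up the fix whenever it suits him
git pull origin feature/1.1
```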
"So what?", you might ask. "I see plenty of chore work in there."
Let's remove Git from the equation.
Now everything is done via email? Or is it FTP? Or do we copy from production? Now Anne is unsure which local copy of the source she should give Brian. Not that one; she was experimenting with a new approach, and he might think she's nuts. Is it this one? Maybe. Then, when Brian is done and she's not, which one should she merge his work into? This is all done by hand, of course, and nobody EVER fat-fingers a copy/paste operation.
I know all of this is horrifyingly obvious stuff, but it bears repetition because I have seen a trend that indicates that developers are sliding back into bad habits. Much of that comes down to a simple skill gap that can be fixed with training, or better yet, mentorship from awesome people like YOU.
Love and Happiness,
Justin
Ovalhead : Justin Stroud : The views expressed here are my own
This site is hosted by RackNerd on Alma Linux
Pages served by a dirt simple Express/Handlebars app