Glossary
These are the terms we use all the time and would like to have a common understanding of.
BRANCH-BY-FEATURE SOURCE CONTROL
Currently, we use and recommend Git as the preferred source control management (SCM) tool. The branch-by-feature model is best documented in GitFlow (https://datasift.github.io/gitflow/IntroducingGitFlow.html, https://www.atlassian.com/git/tutorials/comparing-workflows/gitflow-workflow). In essence, developers create a separate branch for each individual feature they work on. They merge the code related to the feature into a common develop branch only when the feature is done and tested in isolation. The develop branch, where all completed features are integrated with each other, is then built and tested again.
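A minimal sketch of that flow, with a hypothetical feature branch named feature/invoice-export and the Git commands wrapped in Python purely for illustration:

    import subprocess

    def git(*args):
        """Run a git command and stop if it fails."""
        subprocess.run(["git", *args], check=True)

    # Start the feature from the shared develop branch.
    git("checkout", "develop")
    git("pull", "origin", "develop")
    git("checkout", "-b", "feature/invoice-export")

    # ... implement the feature, commit, and test it in isolation ...

    # Publish the finished feature branch; merging it back into develop is then
    # done through a pull request (see PULL-REQUEST DRIVEN PEER CODE REVIEW below).
    git("push", "-u", "origin", "feature/invoice-export")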
BUSINESS CONTINUATION
These are the processes and procedures that ensure the business can continue to operate during a disaster. For example, shutting down the office and asking everyone to work from home for a period of time, or having identified available warehousing space in a neighboring town or state that can be rented temporarily. While communications and IT operations are always part of a business continuation plan, they are a supporting subset of the overall process.
CONTINUOUS INTEGRATION
Continuous integration is the (usually automated) process of building and deploying a fully integrated software system. This includes not only compiling code and/or copying scripts but also building and deploying servers and databases, loading bootstrap data, and ensuring correct software component versions. Typically, a continuous integration system monitors the develop branch in the SCM system and kicks off a build and deploy every time a new commit is made. The integrated system is then deployed to a dedicated set of resources (server, database, and other infrastructure instances) and is made ready for testing. Continuous integration is one of the cornerstones needed to support an acceptable level of quality in modern software development.
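A dedicated CI tool does this far more robustly, but the core loop can be sketched in a few lines of Python. The repository URL and the build, deploy, and test commands below are placeholders:

    import subprocess
    import time

    def head_of(branch):
        """Return the commit hash at the tip of a remote branch."""
        out = subprocess.run(
            ["git", "ls-remote", "https://example.com/product.git", branch],
            check=True, capture_output=True, text=True)
        return out.stdout.split()[0] if out.stdout else None

    last_built = None
    while True:
        head = head_of("develop")
        if head and head != last_built:
            # Each step stands in for a real build, deploy, or test command.
            for step in (["./build.sh"], ["./deploy_to_staging.sh"], ["./run_tests.sh"]):
                subprocess.run(step, check=True)
            last_built = head
        time.sleep(60)  # check for new commits once a minute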
CYBER SECURITY
One can write volumes on this topic, and most practitioners of the art would agree that much more is being said than done in this area. When we talk about cyber security as part of the development process, we refer to those development practices that make a software system pentest (penetration test) ready. This includes, at a minimum, training the software engineers in the top-10 security best practices in the context of the toolchain and technology stack being used, and making a static source code security scan part of the build and test cycle. This is another quality cornerstone.
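A minimal sketch of wiring such a scan into the build, assuming a Python codebase and the open-source bandit scanner; the src/ path and the fail-the-build policy are illustrative choices:

    import subprocess
    import sys

    # -ll reports findings of medium severity and above.
    result = subprocess.run(["bandit", "-r", "src/", "-ll"])
    if result.returncode != 0:
        # Fail the build when the scanner reports issues.
        sys.exit("Static security scan found issues; the build stops here.")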
DEVOPS + SEC
We define DevOps as the combination of processes and tools that make it possible to deploy new versions of the system at any time, with no outage or with the shortest possible outage. This must be done while maintaining the operational systems at the appropriate performance and patch levels that meet or exceed the operational needs of the business and all normative compliance requirements. A good DevOps setup would allow an IT or product manager to decide to ship or deploy an incremental version of the system as soon as a new feature is completed, because they would have confidence that the system will not break after an incremental release. It is a critical time-to-market and organizational efficiency enabler.
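One common way to get a no-outage deployment is a blue-green switch: the new version is started next to the old one and traffic is moved only after it passes a health check. A minimal sketch of that gate, with a hypothetical address and a stand-in for the real load-balancer change:

    import urllib.request

    def healthy(base_url):
        """Return True if the candidate deployment answers its health endpoint."""
        try:
            with urllib.request.urlopen(base_url + "/health", timeout=5) as resp:
                return resp.status == 200
        except OSError:
            return False

    def switch_traffic_to(base_url):
        """Placeholder for the real load-balancer or router reconfiguration."""
        print("Routing production traffic to", base_url)

    NEW_VERSION = "http://green.internal:8080"  # hypothetical address of the new version
    if healthy(NEW_VERSION):
        switch_traffic_to(NEW_VERSION)
    else:
        print("New version failed its health check; the old version keeps serving.")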
DISASTER RECOVERY
The planned actions and processes to bring back online and make operational the communications, networking, and IT components of a system after an outage. In the worst case, this involves standing up new hardware across the board, building the software from source code, and deploying and configuring everything from scratch. Particular attention is paid to configuring the networking, routing, DNS, and firewall pieces to ensure system components can talk to each other. It is necessary to treat config files as source code and to plan on dealing with unavoidable delays in redeploying SSL certificates, loss of access because of 2FA account recovery, etc. In our experience, the best disaster recovery plans include a safe box at a location 50+ miles away from the HQ and data center locations.
OPERATIONS
Also known as “keeping the beast fed and purring”, for us “operations” means the people and processes that ensure that when a client needs to use a system, it is available, and when a user needs help with a system, there is someone available to help. Actually help, not just take a message or log a ticket.
PERFORMANCE AND SCALABILITY CONSIDERATIONS
No system can be designed to perform and scale to an unlimited upper bound. As a rule of thumb, we design for “explosive growth on a 3-year horizon”. This means that we ask, “How many users/accounts/transactions/devices, etc. do you think you will have in 3 years if your business grows beyond your wildest expectations?” We then double these numbers and design for them with one critical optimization target: we want the cost of scaling to be as linear as possible over the target period and proportional to either the market share or the revenue drivers of the business. Overdesigning (and the related overspending) under the guise of unquantified scalability targets or because of “best practices” considerations is an indication of inexperience and naivete.
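A worked example of that rule of thumb, where every number is a hypothetical answer to the “3-year explosive growth” question:

    users_in_3_years = 200_000            # client's wildest-expectations estimate
    transactions_per_user_per_day = 25

    design_users = users_in_3_years * 2   # double the estimate
    design_tx_per_day = design_users * transactions_per_user_per_day

    print(f"Design target: {design_users:,} users and {design_tx_per_day:,} transactions/day")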
PLANNED TECHNOLOGY OBSOLESCENCE
Depending on whether you are building software for internal use, a traditional desktop application, a mobile app, or a SaaS offering, its components will be technologically obsolete in 2 to 5 years. The only good news here is that if your system is reasonably well architected, with a decoupled front end, application layer, and back end, you will not have to rewrite all components at the same time. Right now, mobile and front-end technologies (and fashion) are the least stable, with the middle (application/microservice) tier second and the back end (data store) the longest-lived. Just as with performance and scalability considerations, however, the technology stack cannot be driven by purely "best-of-class-right-now" considerations. For one thing, you are guaranteed to get 4 different answers from 3 different CTOs if you ask them what the best tech today is. You should try to understand (and put down in writing) the marketing and competitive posture, investor perception, and cost of skilled labor considerations and trade-offs when you choose your tech with an eye on planning for technology obsolescence.
PULL-REQUEST DRIVEN PEER CODE REVIEW
The pull request process is part of the overall branch-by-feature and GitFlow source control management practices we recommend. When a developer is ready with a feature and would like to merge their code into the project development branch, they are not allowed to do that themselves. Instead, they create a pull request, which is essentially a message to a colleague who is tasked with merging the code. The process of merging a feature branch into the main product branch can be challenging and is typically delegated to the more senior and experienced engineers. What it does, however, is force a de facto code review. The person merging is the second pair of eyes on the code written, and they can decline the pull request if the code does not meet some criteria, even if the functionality is implemented and “works”.
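Pull requests are usually opened through the hosting service's web interface, but the same step can be scripted. A minimal sketch, assuming a GitHub-hosted repository and the third-party requests library, with the owner, repository, branch names, and token as placeholders:

    import requests

    response = requests.post(
        "https://api.github.com/repos/acme/product/pulls",
        headers={
            "Authorization": "Bearer <personal-access-token>",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": "Invoice export feature",
            "head": "feature/invoice-export",  # the finished feature branch
            "base": "develop",                 # the shared integration branch
        },
    )
    response.raise_for_status()
    print("Pull request opened:", response.json()["html_url"])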
REAL-TIME MILESTONE TRACKING
There is one question that everyone on a project wants an answer to: “Are we there yet?” We have found a way to address this omnipresent inquiry in a definitive fashion. We prefer to use an issue tracking system (JIRA) that allows us to know exactly who is doing what and when. We log work to issues daily, and most importantly we re-estimate the time remaining for each issue every time we log work to it. Every issue is assigned to a release, and with the most current estimates for the time remaining on every issue, we guarantee that we know exactly when we expect to “get there”.
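The underlying arithmetic is simple: with a current remaining-time estimate on every open issue in a release, the answer is a sum. A minimal sketch with hypothetical issue data (not pulled live from JIRA):

    open_issues = [
        {"key": "APP-101", "remaining_hours": 6},
        {"key": "APP-107", "remaining_hours": 14},
        {"key": "APP-112", "remaining_hours": 3},
    ]
    team_hours_per_day = 3 * 6  # three engineers, six productive hours each

    remaining = sum(issue["remaining_hours"] for issue in open_issues)
    print(f"{remaining} estimated hours remain, about "
          f"{remaining / team_hours_per_day:.1f} working days at current staffing.")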
REQUIREMENTS MANAGEMENT
Requirements management is the process of collecting, elaborating, costing, and prioritizing product or system features that are yet to be implemented. Good requirements management ensures that all stakeholders have the same understanding of what a named feature does and enables a consensus on priorities to be reached. The output of a requirements management process is the minimum list of new features, bug fixes, and improvements that are needed to release the next version of the product. The very first time this is done is when we define the MVP (minimum viable product).
The art of developing software projects lies in finding the balance between specification, invention, and iteration. It used to be that we wrote painfully long, super-detailed specifications that took months to prepare, review, rewrite, discuss, edit, and rewrite again, ad nauseam. Often this resulted in what is fondly known as “analysis paralysis”. It always resulted in a specification that was obsolete on the day it was completed. Unfortunately, this is still the case with most government contracts, inherently a function of bureaucratic momentum. Eventually, people got frustrated, the pendulum swung to the other extreme, and the Agile Manifesto was born. Like all things that seem too good to be true, the Manifesto utopia has its limits: it works great when you have a small team of A-players with tons of experience and motivation, who speak the same language, get along, and have great communication skills, plus a powerful executive sponsor-enabler who shields them from distractions. Unfortunately, those teams are as mythical as the man-month.
SUPPORT
Writing some code and “throwing it over the fence” with no long-term responsibility for maintenance and support is, unfortunately, a rather typical outsourcing company pattern. Our approach is different. From day one we make sure that our clients own the key knowledge and IP and are fully capable of eliminating dependencies on us at any time – if, and when, the business calls for it. At the same time, we plan for and offer ongoing support – for as long as the client needs it after a project is complete, regardless of whether there are additional engagements. In other words, our clients have the confidence that their systems will continue to operate regardless of the usual needs to upgrade, bugfix, scale, and resolve user issues that will occur for years after a system goes live.
We offer a range of support options – from small, dedicated blocks of support time (20 to 40 hours per month) to 24 x 7 x 365, always-available tier-1 and tier-2 call-center-managed user and operational support.
TEST AUTOMATION
Instead of spending time philosophizing on the semantics of automated testing vs. test automation, we look at test automation simply as the method of efficiently and cost-effectively ensuring the quality of the integrated and deployed software systems we deliver. There will always be a place for human, hands-on testing: user experience feedback cannot be gathered in any other way. We treat the rest of the repeatable testing, which must assure correctness, stability, performance, and scalability with every system build, as a normal and standard software engineering task.
Our software testing staff are software engineers, just like all other developers, and we aim to minimize the percentage of manual “click and pray” types of tests.
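A minimal sketch of what such a repeatable check looks like, assuming pytest and a hypothetical invoicing module; tests like these run on every system build instead of being repeated by hand:

    import pytest
    from invoicing import calculate_total  # hypothetical module under test

    def test_total_includes_tax():
        assert calculate_total(net=100.00, tax_rate=0.20) == pytest.approx(120.00)

    def test_negative_amounts_are_rejected():
        with pytest.raises(ValueError):
            calculate_total(net=-1.00, tax_rate=0.20)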