Posted on 2021-01-30 by Vlad Călin
Reading time: 1 minute
Over the course of my software development career, I developed some personal rules of thumb that help me structure my code better and write more readable and maintainable code.
But first, what is a rule of thumb? For me, it's a principle adopted through experience: one that, applied again and again, actively helps me achieve my goals. These goals vary, from code quality to software architecture, but they always relate to the maintainability of the code.
Let's jump right into it!
Use the language of the business domain. This is a core rule of thumb that I follow, and it consistently results in better code. In almost all cases, the code we write deals with business logic. This business logic addresses the core problem we are trying to solve, so the code should always reflect that.
When we name a variable, it should reflect what it represents, and not that it is a list or the type of the items in it.
When writing a business-logic block, you want the next person who ends up working on it to be able to figure out the core of the problem. When a requirement comes in, it will surely request some business-logic implementation or change, using terms specific to its data domain.
When we write the logic for it, it's much easier to transcribe and adapt using the same language.
For a team working on the same project (or part of the project), the domain language is shared between all members, while implementation-specific terms might differ and cause confusion. Some people will prefer `array`, some will prefer `list`, others will prefer `vector` and others will prefer `iterable`. But when you talk about the business logic, a group of collections means the same thing to everybody.
So, to recap, writing

```python
collection_ids = [1, 2, 3, 4]
```

will always be better than

```python
list_of_ints = [1, 2, 3, 4]
```
The same principle applies to naming classes, methods and functions. The purpose of a class should be as obvious as possible, and names that include `data` or `processor` should be avoided at all costs.
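As a hypothetical illustration (both class names are invented for this example), compare a generically named class with one named after its business purpose:

```python
# Generic name: says nothing about the business problem being solved.
class DataProcessor:
    def process(self, data):
        return [d for d in data if d["paid"]]


# Domain name: the intent is clear before you even read the body.
class PaidInvoiceFilter:
    def only_paid(self, invoices):
        return [invoice for invoice in invoices if invoice["paid"]]
```

Both bodies are identical, but only the second one tells the next developer what the surrounding requirement was about.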
Although the Single Responsibility Principle is a popular Object-Oriented Programming concept taught from the first year of college, I have seen far too many classes out there that do everything. That's still better than a huge file containing only functions that call each other and pass the same 3-4 parameters around every time, but it's not a good enough approach.
When a class or function has a single responsibility, it's more testable, easier to refactor and, in some cases, reusable. All of these increase the overall quality of the code-base and, in turn, the development speed.
As a quick example, a class like the following tries to do too much:

```python
class FileExporter:
    def to_xml(self, ...): ...
    def to_json(self, ...): ...
    def to_xlsx(self, ...): ...
```
It's better to have separate classes for each operation we want to perform:

```python
class XmlExporter:
    def export(self): pass

class JsonExporter:
    def export(self): pass

class XlsxExporter:
    def export(self): pass
```
This way, we can test each functionality independently and differentiate between them more easily. Some exporters will require or support extra configuration that the others will not: the XlsxExporter would allow formatting, exporting to multiple sheets, data validation, etc., while the XmlExporter would allow specifying xmlns, a thing that is very specific to the implemented file format.
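A sketch of how the two exporters might diverge once they are separate classes (the constructor parameters and defaults here are invented for illustration):

```python
class XlsxExporter:
    # Spreadsheet-specific options: sheet name, cell formats, etc.
    def __init__(self, sheet_name="Sheet1", date_format="YYYY-MM-DD"):
        self.sheet_name = sheet_name
        self.date_format = date_format

    def export(self, rows):
        # Placeholder for real spreadsheet writing.
        return f"xlsx[{self.sheet_name}]: {len(rows)} rows"


class XmlExporter:
    # XML-specific option: the namespace (xmlns) of the root element.
    def __init__(self, xmlns="http://example.com/schema"):
        self.xmlns = xmlns

    def export(self, rows):
        items = "".join("<row/>" for _ in rows)
        return f'<rows xmlns="{self.xmlns}">{items}</rows>'
```

Neither class needs to know about the other's options, which is exactly what a single `FileExporter` with a shared constructor could not offer.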
Two other pretty popular design principles that I like to merge into one rule for simplicity are KISS (Keep It Simple, Stupid) and YAGNI (You Aren't Gonna Need It).
They boil down to avoiding premature optimization, avoiding refactoring again and again in search of the perfect abstraction (hint: the perfect abstraction doesn't exist) and avoiding features implemented just in case you might need them in the future.
Every time you write code, your priority is to implement the feature with good enough code. Perfect code doesn't exist, so you have to settle for the next best thing: good enough. You still have to produce good quality code, but obsessing about naming variables and classes shouldn't be your main priority.
Code can always be refactored later, but functionality is what matters, because code is meant to be used to resolve problems.
Having things as decoupled as possible is one of the best implementation decisions you can make. It allows you to refactor more easily, to swap out functionality when needed and to test everything without excessive mocking and headaches.
I have found that a good design pattern for decoupling components is the adapter pattern. Although it shouldn't be abused, it is useful for developing two separate components in parallel and then integrating them through an extra lightweight layer, the adapter, that makes their inputs and outputs compatible.
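A minimal sketch of the adapter idea, with invented component names: one component produces dicts, another expects tuples, and a thin adapter bridges the two without either side knowing about the other.

```python
# Component A: produces records as dicts.
def fetch_users():
    return [{"name": "Ada", "age": 36}, {"name": "Linus", "age": 51}]


# Component B: expects (name, age) tuples.
def render_rows(rows):
    return "\n".join(f"{name}: {age}" for name, age in rows)


# Lightweight adapter: makes A's output compatible with B's input.
def dicts_to_tuples(users):
    return [(user["name"], user["age"]) for user in users]


report = render_rows(dicts_to_tuples(fetch_users()))
```

Each side can be developed and tested in isolation; if either representation changes, only the adapter needs to be touched.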
Although it's common sense nowadays, some old architectures and deployment strategies still make the distinction between environments at the code-base level.
The code base should always be the same for every environment, because that makes it easier to reproduce production errors locally in the development environment. The environment itself is responsible for providing the configuration and making the distinction between production, staging and development.
I personally prefer to configure my projects through environment variables, but there are other ways to achieve the same result: configuration files, centralized parameter store, etc.
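A minimal sketch of environment-variable configuration in Python (the variable names and defaults are invented for this example; `os.environ` is the standard-library interface):

```python
import os


def load_config(env=None):
    """Build the app configuration from environment variables,
    falling back to development-friendly defaults."""
    env = os.environ if env is None else env
    return {
        # Hypothetical variables: each deployment sets its own values.
        "database_url": env.get("DATABASE_URL", "sqlite:///dev.db"),
        "debug": env.get("APP_ENV", "development") != "production",
    }
```

The same code ships to every environment; only the values injected by the environment differ.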
I don't think there is any conclusion to be drawn, as this post is merely a list of personal principles. I hope you enjoyed it!