9 best practices to make your software security future proof
By Bárbara Vieira.
How maintainable is the security of your (web) application?
Maintainable security is a topic that has been floating around in my mind for a long time.
Information security is the hot topic of the moment, either because the GDPR demands it as a prerequisite for privacy enforcement or because people are becoming more and more aware of its importance. The trust people put in online transactions makes it one of the most important and challenging aspects of the modern internet. Without information security, online services would not be possible.
Maintainability, on the other hand, is widely underestimated: most people are not even aware of what it means, let alone consider it a blocking factor for software security. One of my favorite definitions of maintainability is:
Maintainability is defined as the probability of performing a successful repair action within a given time.
Thus maintainable security can be defined as the probability of fixing (and hopefully preventing) a certain vulnerability within a given time.
What does this entail? In a world where zero-day vulnerabilities are right around the corner and the time available to fix a vulnerability shrinks every day, increasing the probability of fixing a vulnerability within a very short period of time is of utmost importance.
The time required to implement an additional security feature in an application is not always easy to estimate, and estimating whether such a feature requires changes to the architecture of the system is even harder. For instance, suppose you want to refine the access control policies of your application by moving from function-level to data-level access control (where access is checked on the data field instead of on the system functionality executed by the user). Can you precisely estimate the amount of time required to make that change, or even whether it has an impact on the architecture of the system?
Having a resilient and scalable security architecture and design is crucial to prevent vulnerabilities.
What does that mean in practice?
This post addresses some general practices that enable more maintainable application security. The list is not exhaustive and does not aim to be complete: the 9 best practices described here reflect past experiences assessing the security and maintainability of software systems.
#1 Security frameworks are your new best friend
The most popular software stacks used in the development of enterprise applications (e.g.: Java and C#) provide good, stable frameworks to implement security requirements and easily prevent some of the OWASP TOP 10 vulnerabilities. For instance, through a simple annotation such as @EnableWebSecurity, Spring Security implements mitigations against some of the most common web application vulnerabilities (e.g.: CSRF, XSS, etc.) automatically and out of the box. The essential HTTP headers to prevent certain types of vulnerabilities are added to requests and validated in responses (e.g.: the synchronizer token pattern to prevent CSRF).
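As a rough sketch of how little code this takes, a minimal Spring Security configuration could look like the following (this assumes a Spring Boot application with spring-boot-starter-security on the classpath; the class and bean names are illustrative):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.web.SecurityFilterChain;

// Minimal configuration sketch: CSRF protection and the default security
// response headers are already enabled out of the box; this only declares
// that every request requires an authenticated user.
@Configuration
@EnableWebSecurity
public class SecurityConfig {

    @Bean
    SecurityFilterChain filterChain(HttpSecurity http) throws Exception {
        http
            .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
            .formLogin(Customizer.withDefaults());
        return http.build();
    }
}
```

Everything not explicitly overridden here keeps the framework's hardened defaults, which is exactly the point.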
Although these frameworks may require some time to understand and use (the learning curve can sometimes be quite steep), they have improved considerably in the past few years, facilitating the implementation and maintenance of security requirements.
Nevertheless, do not blindly trust all the defaults of the frameworks and libraries your application depends on. More often than not, frameworks rely on weak default encryption algorithms such as DES (broken in practice since 1998 and long since withdrawn as a standard). Thus, always check all framework and library defaults before releasing the application.
Besides, some of the least maintained frameworks contain vulnerabilities that never get reported. This is because some bugs (this is an interesting discussion about a systemd null pointer dereferencing bug) are not even considered security vulnerabilities in the first place. Notice that the categorisation of a bug as a security vulnerability depends on the ability of the development team to evaluate the impact of that bug on the security of the application. That’s why updating the libraries and frameworks of your application to their latest versions helps improve its overall security. So, stay up-to-date.
#2 Deny first, authorize after
Broken access control is one of the OWASP TOP 10 vulnerabilities, because most applications fail to implement proper authorization mechanisms. Authorization is probably one of the most complex security features to design and implement: getting it right and scalable is challenging and requires the effort of all team members. Nevertheless, adopting certain mechanisms and design strategies can facilitate authorization maintenance in the long run:
1. Deny first, authorize afterwards: access must be denied by default, i.e., it must only be granted when the user is actually authorized to perform a certain action. Access control policies must therefore be checked before processing the request, not when producing the response.
2. Use and abuse filters: HTTP filters can be used to screen all requests before processing them (e.g.: only if the user making the request is authorized to perform the designated action is the request processed). Certain frameworks (e.g.: Spring Security) even provide a very elegant way of binding access control policies to system functions by means of code annotations on the methods that expose the desired functionality. In combination with the HTTP filters, these annotations provide a clean and easy-to-change implementation of the access control mechanisms.
3. Always document what you’re doing: whether you use role-based access control or a more fine-grained access control model, the functionality enabled for each role must be clearly defined and documented. Although the code should speak for itself, a lack of documentation puts code portability and transferability at risk, especially when it comes to access control management.
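The deny-first idea from point 1 can be sketched as a whitelist lookup in which anything not explicitly granted is refused. The roles and actions below are hypothetical:

```java
import java.util.Map;
import java.util.Set;

// Deny-by-default authorization check: access is granted only when the role
// is explicitly mapped to the requested action. Unknown roles and unknown
// actions both fall through to "denied".
public class AccessPolicy {

    private static final Map<String, Set<String>> ALLOWED = Map.of(
        "admin",  Set.of("read", "write", "delete"),
        "viewer", Set.of("read")
    );

    public static boolean isAllowed(String role, String action) {
        // getOrDefault(..., Set.of()) means a missing role denies everything.
        return ALLOWED.getOrDefault(role, Set.of()).contains(action);
    }
}
```

The important property is structural: there is no code path that grants access by accident, because granting requires an explicit entry in the policy map.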
#3 Make the IT infrastructure your number one software project
A securely developed application is defenseless against an attacker with administrative privileges in the production environment. For instance, the WannaCry ransomware attack was based on an OS-level exploit that was fairly easy to protect against, simply by upgrading the OS. It was nonetheless very damaging, because many IT departments tend to disregard the importance and impact of OS security updates.
In general, the longer you wait to upgrade, the harder it gets; eventually a system can become ‘un-upgradable’ and, therefore, unmaintainable.
When deploying the application, make sure that all 3rd-party components of the production environment can be upgraded or recreated from backup (in case of ransomware) at any point in time. This includes databases, web servers, deployment frameworks, the identity manager, the log management and alerting systems, the operating system, etc.
The ‘upgrading is hard’ problem is similar to the ‘releasing is hard’ problem and should be solved the same way: by doing it often.
The way to do this is to treat the environment in which your application runs as part of the application and not something separate. As such it should be treated as a software project in itself.
#4 Validations should be done in the backend
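Client-side checks improve usability, but any HTTP request can be crafted by hand, so every input must be validated again in the backend, regardless of what the frontend already enforces. A minimal, illustrative sketch of whitelist validation (the SKU format and quantity limits are hypothetical):

```java
import java.util.regex.Pattern;

// Backend input validation sketch: accept only what matches the expected
// format, instead of trying to blacklist bad input. All rules are illustrative.
public class OrderValidator {

    // Hypothetical product code format: three uppercase letters, a dash, four digits.
    private static final Pattern SKU = Pattern.compile("[A-Z]{3}-\\d{4}");

    public static boolean isValidOrder(String sku, int quantity) {
        return sku != null
            && SKU.matcher(sku).matches()
            && quantity > 0
            && quantity <= 100;
    }
}
```

Because the check lives in the backend, it holds even for requests that never went through the application's own UI.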
#5 Let the framework manage the session for you
More often than not, session management is customised and implemented in a naive way, leading to several vulnerabilities. The mistakes vary from session cookies that aren’t random enough (because they rely on weak constructions), to storing them with weak hash algorithms and fixed salts (or not storing them hashed at all).
Some of these vulnerabilities can be mitigated by relying on off-the-shelf development frameworks. Over the years, development frameworks have improved to the point where session management is easy: they provide a default session manager that, when used, removes the need for developers to implement any session handling themselves. For instance, in the .NET framework, using the default session manager only requires a few entries in the configuration file of the web application.
Although these frameworks provide only a limited list of algorithms (giving little freedom to more experienced developers), and the default algorithms are not always the most secure primitives, this approach prevents less experienced developers from making mistakes and introducing vulnerabilities into the session management of the web application.
Notice that the fact that modern frameworks let crypto primitives be defined in a configuration file makes it much easier and less time-consuming to, for instance, update those primitives when a new attack is discovered. Plus, if the default session manager has a known vulnerability, it is easier to patch the application by updating the library than by applying quick (and perhaps dirty) fixes to custom implementations, which can lead to even more exploits.
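In a Spring Boot application, for example, hardening the framework-managed session cookie is a matter of configuration rather than code (the values below are illustrative; the property keys are standard Spring Boot ones):

```properties
# Session cookie handled by the framework's default session manager.
server.servlet.session.cookie.http-only=true
server.servlet.session.cookie.secure=true
server.servlet.session.timeout=15m
```

No custom session-handling code means no custom session-handling bugs, and tightening these settings later requires no new release of the application code.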
#6 Store configurations in files
Configurations belong in configuration files, and the code must not contain any configuration. For instance, the crypto algorithms used by your application must not be hardcoded. Why? Because once a vulnerability becomes public, the clock starts ticking, and a hardcoded value can only be changed through a code change and a new release.
Let’s think about SHA1. Although it has been considered insecure for more than 10 years and was deprecated by NIST in 2011, a practical attack that finds SHA1 collisions was only demonstrated recently (SHAttered, in 2017). This triggered some urgency to update vulnerable applications to use more secure hash functions. When the construction is hardcoded, it is necessary to contact the development team and wait for the patch or the next release, which might take weeks or even months. When these constructions are defined in a configuration file, the amount of time required to fix the issue is drastically reduced.
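As a sketch of the idea, the hash algorithm can be read from configuration instead of being hardcoded. The property key below is illustrative; in a real application the Properties object would be loaded from a file:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.Properties;

// Configuration-driven hashing sketch: the algorithm name comes from a
// property, so moving off a broken primitive is a config change, not a release.
public class ConfigurableHash {

    public static String hash(Properties config, String input) {
        String algorithm = config.getProperty("crypto.hash.algorithm", "SHA-256");
        try {
            MessageDigest md = MessageDigest.getInstance(algorithm);
            byte[] digest = md.digest(input.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            // Fail loudly on a misconfigured algorithm name rather than
            // silently falling back to something weaker.
            throw new IllegalStateException("Unsupported hash algorithm: " + algorithm, e);
        }
    }
}
```

If SHA-256 were ever broken the way SHA1 was, switching to SHA-512 (or SHA3) would be a one-line edit in the configuration file.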
#7 Remove dead and duplicated code
Unused code misleads the maintenance team and even your own team members. If code is not going to be used in production, remove it. Why? You won’t be able to prevent a team member from using a deprecated function (one with, for instance, a security flaw) if the deprecation is not properly communicated (in large teams it often isn’t). To check the history of previous versions of your code, just use the version control system; that’s what it was made for.
Code duplication is in most cases a consequence of sloppy design, and it brings additional security risks. Duplicating code also means duplicating (security) bugs and, consequently, enlarging the attack surface of your application. When the time comes to fix duplicated bugs, teams need (at least) double the time to fix them.
Remember that: cleaner and leaner code is more secure code.
#8 Do not insert backdoors
Coding different behaviours for development and production is a no-go.
This practice is mostly characteristic of teams developing an application that must be integrated into an existing organization whose environments and communicating systems are hard to mock. But as much as I understand how handy it is to have different behaviors for the production and development environments, this is just a naive way of introducing backdoors into the application. Having different behaviors brings the risk that not all security requirements are properly tested.
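A hypothetical example makes the risk concrete. All names below are illustrative, and this is precisely what NOT to do:

```java
// Anti-pattern (illustrative, do NOT replicate): behaviour depends on the
// environment, so the "dev" branch is effectively a backdoor. If the flag
// is ever misconfigured or spoofed in production, authentication is gone.
public class LoginService {

    public static boolean authenticate(String environment, String user, String password) {
        if ("dev".equals(environment)) {
            return true; // backdoor: skips credential verification entirely
        }
        return checkCredentials(user, password);
    }

    private static boolean checkCredentials(String user, String password) {
        // Stub standing in for real credential verification.
        return "alice".equals(user) && "correct-password".equals(password);
    }
}
```

Note also that nothing exercised while testing the "dev" branch says anything about the security of the "production" branch, which is exactly the testing gap described above.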
#9 Write maintainable code
Writing maintainable code is a prerequisite for everything else in software engineering. Notice that unmaintainable code is hard to test, difficult to analyse and hard to reuse.
Why is this so important? Consider the recent serious Apple authentication flaw that allowed any user to log in as ‘root’ with an empty password. Patrick Wardle performed an analysis of the root cause of this flaw. It appeared in macOS High Sierra, introduced in a password-verification method with high cyclomatic complexity. It is quite straightforward to see that one of the reasons Apple was unable to detect this flaw in advance was that the code was difficult to test properly due to its high complexity. Although it is no secret that highly complex code is hard to test (making it more difficult to find bugs and vulnerabilities), some developers tend to forget this all too often. Writing maintainable code therefore helps prevent the introduction of vulnerabilities. Some tools, such as the Better Code Hub, can actually help development teams write more maintainable code without much effort.
There is much more to be said on this topic. However, the goal here was to address the main high-level design issues we keep seeing when assessing the security of software systems. It is not easy to get security right, and even the most mature development teams make mistakes.
So, your take-home message is: never take the security of your application for granted, because there is no perfect security; most importantly, be aware that you need to be constantly improving the security of your application.
This blog was also published on Medium by Bárbara Vieira