27 February 2018
4 min read
Security by design is the opposite of security after the fact – instead of testing the security of a system when it’s done, information security is built in from the very beginning. This reduces costs and mitigates risks.
Security by design sounds very sensible. Unfortunately, it’s still in its infancy in practice. Every day the news reports on information security that leaves a lot to be desired. In my work as a software researcher I see the same security errors, and the same lack of attention to security, with many of my clients over and over again.
Why does security still receive so little attention? Are developers lazy? Unprofessional? On the contrary – they are generally motivated and take pride in their work. The key is what they are held accountable for. In general, the emphasis is on building new functions. This is what developers focus on and this is the most visible part. If quality is not made visible, it typically is the first thing to go. You get what you measure. To make sure security is given the attention it deserves, this must be agreed upon with developers in advance.
Security by design therefore starts with a positive working relationship between client and supplier: clear and appropriate requirements, plus the condition that the source code can be accessed to verify that security is properly built in. From that point on, software builders will also organise their process around security by design. For suggestions on how to handle the dialogue between clients and software builders, see the “Grip on SSD” initiative by the CIP at www.griponssd.org. Laws and regulations, such as the GDPR, are a good reason to set requirements: the GDPR mandates security and privacy by design where personal data is concerned.
When setting up security by design, it’s important to realise that software development is work done by humans. People make mistakes. The trick is to see how you can realistically get programmers into a situation where they make fewer mistakes, and where the mistakes they do make are caught. This can be achieved using the following nine steps:
1: Build on proven technology: Security is difficult, and you want the technology you use to handle as much of that as possible for you. Modern programming environments already provide a good level of security – if used correctly. Security by design starts with the choice of technology, and continues with learning how to use it properly. One in three vulnerabilities we find is caused by developers programming something themselves that the technology already provides. It’s important to stay abreast of vulnerabilities in the technologies and libraries used, and to patch on time. Library management – keeping track of external code – is becoming one of the most important programming tasks.
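To make the library-management point concrete, here is a minimal sketch of such a check: comparing pinned dependency versions against an advisory list. The package names, versions, and advisory data are illustrative assumptions – in practice you would query a real vulnerability database (such as OSV) or use a dedicated audit tool rather than a hard-coded dictionary.

```python
# Minimal sketch of a library-management check: compare pinned
# dependencies against a (hypothetical, hard-coded) advisory list.
# Names and versions below are illustrative only.

# Versions currently pinned by the project (illustrative).
PINNED = {
    "libfoo": (1, 2, 0),
    "libbar": (3, 1, 4),
}

# Known advisories: anything strictly below the fixed version is vulnerable.
ADVISORIES = {
    "libfoo": (1, 3, 0),   # fixed in 1.3.0
}

def outdated(pinned, advisories):
    """Return the names of pinned libraries below their fixed version."""
    return sorted(
        name for name, version in pinned.items()
        if name in advisories and version < advisories[name]
    )

if __name__ == "__main__":
    print(outdated(PINNED, ADVISORIES))  # libfoo still needs patching
```

Running a check like this on every build is what turns “patch on time” from an intention into a habit.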
2: Create awareness: Make developers aware of what’s required and what the typical threats are for the software they develop. Examples and demonstrations work well here. Involving developers in threat modelling adds to their experience.
3: Limit instruction: Security knowledge is nice to have in a developer, but you don’t want to depend entirely on developers having the right knowledge at the right time. That’s desirable, but ultimately not feasible: no single person can hold all of that knowledge. In any case, it’s important to teach developers the principles of security by design; OWASP describes ten such principles.
Sometimes there are guidelines the development team must adhere to, but these cannot be automatically captured in the chosen technology or tooling. The best form of these guidelines is therefore reference material, arranged based on recognisable situations – so-called triggers. The development team must be able to recognise these triggers (for example “we are now processing user input”).
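The “we are now processing user input” trigger can be sketched in code: the moment input crosses into the system, validate it against an allow-list before using it. The field name and pattern below are illustrative assumptions, not a prescribed rule.

```python
import re

# Sketch of a trigger-based guideline: as soon as user input enters
# the system, validate it against an allow-list. The username rule
# below (lowercase letter, then 2-31 letters/digits/underscores) is
# an illustrative assumption.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(raw: str) -> str:
    """Accept only allow-listed characters; reject everything else."""
    if not USERNAME_RE.fullmatch(raw):
        raise ValueError("invalid username")
    return raw

# validate_username("alice_01") passes;
# validate_username("alice; DROP TABLE--") raises ValueError.
```

The point of the trigger is that the team recognises the situation, not that they memorise the pattern – the reference material supplies the pattern.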
4: Manage maintainability: Spaghetti code that is difficult to change increases the probability of (security) errors. Maintainable source code is a prerequisite for security, so set requirements for maintainability and provide tools to measure this.
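As one small illustration of making maintainability measurable, here is a sketch that flags overly long functions using only the standard library. Real tooling measures far more dimensions; the 15-line threshold is an arbitrary assumption for the example.

```python
import ast

# Sketch of a tiny maintainability check: flag Python functions longer
# than a threshold. Real quality tooling measures much more; the
# threshold here is an illustrative assumption.
MAX_LINES = 15

def long_functions(source: str, max_lines: int = MAX_LINES):
    """Return (name, length) for each function exceeding max_lines."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                hits.append((node.name, length))
    return hits

sample = "def ok():\n    return 1\n"
# long_functions(sample) -> [] (this function is short enough)
```

A check like this only helps if it runs automatically and its results are visible to the team – which is exactly the “you get what you measure” point above.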
5: Automate checks: More and more verification tools are available to automatically test for certain security vulnerabilities by scanning source code or testing behaviour. Don’t underestimate the effort required to properly utilise these useful tools.
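A minimal sketch of what such a source-scanning check does: walk the parsed code and report known-dangerous constructs. Real static analysers cover far more patterns than the two function names hard-coded here, which are only illustrative.

```python
import ast

# Sketch of an automated source check: report direct calls to eval()
# or exec(), which are common injection sinks in Python. Real static
# analysers cover far more patterns; this is a minimal illustration.
RISKY = {"eval", "exec"}

def risky_calls(source: str):
    """Return the line numbers of direct eval/exec calls in the source."""
    tree = ast.parse(source)
    return [
        node.lineno
        for node in ast.walk(tree)
        if isinstance(node, ast.Call)
        and isinstance(node.func, ast.Name)
        and node.func.id in RISKY
    ]

snippet = "x = input()\nresult = eval(x)\n"
# risky_calls(snippet) -> [2]
```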
6: Carry out manual checks: Manual checks are important because automatic verification tools are only able to identify some of the vulnerabilities. Checks can be carried out by team members or by internal or external specialists. It’s important not to limit the specialist to the specified security requirements, which will always be incomplete. Don’t underestimate the execution of a manual penetration test or code review. It is a profession in itself, as it requires creativity, experience, a systematic and repeatable approach and, ideally, the ability to advise developers on how to structurally improve their work.
7: Expand to include privacy: Privacy by design is about the correct handling of personal data (and the security thereof). It’s about awareness, about knowing the principles, and about specific checks. A security by design program can therefore be expanded to include this topic.
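One concrete privacy-by-design measure is pseudonymising personal identifiers at the boundary, so the raw value never reaches storage or logs. The sketch below uses a keyed hash; the key handling is deliberately simplified (an assumption for the example) – in practice the key would live in a secrets manager.

```python
import hashlib
import hmac

# Sketch of one privacy-by-design measure: pseudonymise a personal
# identifier with a keyed hash (HMAC-SHA256) before it is stored or
# logged. Key handling is simplified for the example; in practice the
# key would come from a secrets manager, not a literal.

def pseudonymise(identifier: str, key: bytes) -> str:
    """Replace a personal identifier with a stable keyed digest."""
    return hmac.new(key, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# The same input and key always yield the same pseudonym, so records
# can still be joined without exposing the underlying personal data.
```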
8: Improve gradually: Put together a plan laying out how to continuously improve development and base it on an existing framework, such as OWASP SAMM.
9: Finally: Security by design is not just for new development, so don’t forget about existing systems. It’s especially advisable to carry out vulnerability checks on systems that were not developed with security by design. After all, the billions of existing lines of code will remain with us for the foreseeable future.
Senior Director, Security & Privacy and AI