Is Application Security the Golden Ticket?
In yet another heavy-handed, but well-intentioned directive in the world of cyber security, we have the latest industry darling, Application Security (AppSec).
AppSec is definitely not the Golden Ticket to security, nor is it applicable to all circumstances or environments – just like all aspects of security, AppSec has to be applied in a meaningful way in order to achieve meaningful results. However, I would propose that this is not how it is used or applied today; AppSec is often required by various cyber security standards or external partners without concern for the environment, how applications are used / deployed, system-level interactions, or what programming languages are used. It’s also common for AppSec to be introduced without considering defense in depth or how it might be layered with other facets of security.
Now that my opinion is clearly out of the way, I can dig into what AppSec is, why it isn’t a complete solution to security (much like many existing security processes), what security problems it addresses, how it works with defense in depth, and areas where we as security engineers might need to do a little extra work.
What is Application Security (AppSec)?
Application Security, or AppSec, is the process of adding some form of static and/or dynamic analysis of code, usually to a build pipeline or similar CI / CD environment. AppSec helps check code for various code quality issues and potential security vulnerabilities that have either caused a vulnerability in the past, appear in a list like the OWASP Top 10, or have the potential to be misused. Depending on the specific application, and how it is used / deployed, AppSec may be a value add to the overall system security, but it is not the whole story here.
In traditional programming languages such as C or C++, AppSec is primarily focused on type and bounds checking. This really translates to: Are arrays properly sized? Are integers always treated as integers? Are the safe versions of functions being used? In other language ecosystems, these tools help sanitize user data, check or validate database queries, and look for patterns that might enable cross-site scripting or various other vulnerabilities.
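To make the “safe versions of functions” point concrete, here is a minimal sketch of the kind of unbounded copy a static analysis tool would flag in C, next to a bounded variant. The function and buffer names are illustrative, not taken from any particular tool’s output:

```c
#include <stdio.h>
#include <string.h>

#define NAME_MAX_LEN 16

/* Unsafe: strcpy() performs no bounds checking, so a long input
 * overflows the fixed-size destination buffer -- a classic static
 * analysis finding. */
void copy_name_unsafe(char *dst, const char *src) {
    strcpy(dst, src);                 /* flagged: unbounded copy */
}

/* Safer variant: snprintf() truncates at the buffer size and always
 * NUL-terminates, which is what "use the safe versions of functions"
 * means in practice. */
void copy_name_safe(char *dst, size_t dst_len, const char *src) {
    snprintf(dst, dst_len, "%s", src);
}
```

With a 16-byte destination, the safe variant silently truncates an over-long input to 15 characters plus the terminating NUL, whereas the unsafe variant would write past the end of the buffer.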
What I have described so far is primarily AppSec as a generalization of various static analysis techniques. The same tools and processes that were previously used in a standalone development environment are now commonly “cloudified” and integrated into various CI/CD systems and magically they become “Cloud Native”. Of course, as I’ll soon discuss, these approaches don’t by themselves solve all security challenges, but they can certainly help reduce the attack surface and they are applicable to some environments / workloads.
There is another family of AppSec products that use various forms of dynamic analysis to examine the (built) applications while they are executing. This enables additional attack methods to be detected, as well as ensuring, for instance, that all errors / exceptions are properly handled and user input is being sanitized. Dynamic methods also start to provide some insight into the larger system or systems of systems – things like load balancing, application proxies, etc.
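To illustrate the “errors handled, input sanitized” point, here is a hedged sketch of the kind of defensive parsing that dynamic tools exercise by feeding malformed input; the function name and rules are illustrative assumptions, not from any specific product:

```c
#include <errno.h>
#include <stdlib.h>

/* Parse an untrusted decimal user ID. Returns 0 on success and stores
 * the value in *out; returns -1 on any malformed or out-of-range
 * input. Dynamic analysis (fuzzers, DAST scanners) is good at finding
 * the inputs -- empty strings, "12abc", huge numbers -- that reach
 * error paths the developer never handled. */
int parse_user_id(const char *input, long *out) {
    if (input == NULL || *input == '\0')
        return -1;                        /* reject missing/empty input */

    char *end = NULL;
    errno = 0;
    long value = strtol(input, &end, 10);

    if (errno == ERANGE)
        return -1;                        /* overflow or underflow */
    if (*end != '\0')
        return -1;                        /* trailing junk: "12abc" */
    if (value < 0)
        return -1;                        /* IDs are non-negative */

    *out = value;
    return 0;
}
```

The design choice here is that every rejection path returns the same error code, so a caller cannot accidentally treat a partially parsed value as valid.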
Although less common, these AppSec tools can also be used to enforce license restrictions and help build up a Software Bill of Materials (SBOM).
AppSec works great for individual applications with specific threat models or concerns. However, AppSec often isn’t applied to all of an application’s dependencies, let alone the larger system (e.g., the Linux environment it runs in, hardware interactions, or the network security environment). Applying AppSec to a single application in isolation (because it’s the focus of what you’re trying to build, after all, and can conceptually be integrated into an existing development environment) doesn’t reveal the whole picture and provides a false sense of security. As is often the case, a single vulnerability (such as might be found with AppSec) doesn’t usually lead to a full compromise of the system. AppSec, and more broadly “vulnerability scanning,” needs to consider how the applications are used, how they interact with the larger system, and what the actual (deployed) system or environment looks like. AppSec does not generally include the system, external dependencies, or how an application is used, and this leaves a very large hole in the system design and overall security of the system.
As an example, let’s consider a common use case where we’re developing a WordPress plugin, maybe a card processor or analytics platform. What happens if we use AppSec principles for the design / implementation and deployment of our new plugin? Were AppSec principles applied to WordPress itself? What about the Apache / Nginx / PHP environment that executes the WordPress stack? What about any application-level proxies in front of the WordPress environment, or any caching layers that might be used to provide faster performance? Taking it a step further, do we know that the WordPress host (and presumably the entire upstream network infrastructure) has restricted access, such that if our WordPress plugin were compromised, or an attacker were able to use it to gain arbitrary execution, the attacker couldn’t reach out to external hosts (maybe as part of a denial-of-service attack)?
How Should AppSec be Used?
With these examples in mind, we can start to build a set of realistic guideposts for the use of AppSec tools, and we can see where they begin to fall apart or need more consideration.
AppSec also has limited utility for type-safe languages such as Rust or Go. While there are still aspects of these languages that are well served by AppSec, much of what AppSec tooling checks for – memory- and type-safety errors – is already prevented by design in these languages, and many tools don’t account for that.
As, for instance, a kernel developer (which, broadly speaking, is what most of the developers here at Star Lab are), how do I use AppSec for device drivers? Does AppSec work for device drivers that then provide user-level functionality? There are certainly AppSec-like tools in the development workflow that consider individual contributions or subsystems, but how are these tools applied to the entire kernel? Do these tools consider hardware-level interactions that might be updated or modified via CPU microcode? How do they handle interactions with userspace or a GPU? Do these tools understand the kernel’s safe variants of common functions, and the various opaque interfaces such as those involving IOCTLs? If our device driver uses devfs or another facility to create device nodes, do the tools consider the application of mandatory access control labels, or even discretionary access controls?
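The IOCTL question is worth unpacking: the command number and payload are opaque to most analysis tools, so every (command, payload, size) triple must be validated by hand. Below is a userspace sketch of that validation logic; the command names, structure, and limits are all illustrative, and a real driver would build command numbers with the _IOR()/_IOW() macros and copy payloads with copy_from_user():

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative command numbers and limit for this sketch. */
#define DEMO_CMD_GET_VERSION 0x01
#define DEMO_CMD_SET_LIMIT   0x02
#define DEMO_MAX_LIMIT       4096

struct demo_set_limit {
    uint32_t limit;
};

/* Model of an ioctl dispatch path: the type information a static
 * analyzer would normally rely on is erased at this boundary, so the
 * handler itself must check the command, the payload size, and the
 * payload contents before acting on any of them. */
int demo_ioctl(unsigned int cmd, const void *payload, size_t size) {
    switch (cmd) {
    case DEMO_CMD_GET_VERSION:
        return 1;                            /* interface version 1, no payload */
    case DEMO_CMD_SET_LIMIT: {
        struct demo_set_limit req;
        if (payload == NULL || size != sizeof req)
            return -1;                       /* reject wrong-sized payloads */
        memcpy(&req, payload, sizeof req);   /* stands in for copy_from_user() */
        if (req.limit > DEMO_MAX_LIMIT)
            return -1;                       /* range-check the value itself */
        return 0;
    }
    default:
        return -1;                           /* reject unknown commands */
    }
}
```

Nothing in this dispatch structure is visible to a tool that only checks array bounds and function signatures, which is exactly why kernel interfaces sit in an AppSec blind spot.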
Much like other security tools (as we highlight in our 7 Tenets of Layered Security), AppSec does have its place in the larger security ecosystem. However, by itself or used in isolation it has minimal value. AppSec should be used as part of a toolkit and in conjunction with a threat model to help build more secure systems. It’s also probably evident by now that before AppSec tools can be applied, a threat model needs to be developed for the system. The threat model can help account for the entire system, the Linux environment, and the other defense-in-depth tools that may need to be applied.