Monday, May 1, 2017

DAST vs. SAST vs. IAST - Modern SSDLC Guide - Part I


Disclaimer
This article uses relative ratios in the various charts, rather than accurate ones, to emphasize to the reader the ups and downs of the various technologies. It also reflects the current situation to date (which may change as technologies mature), and relies on generalizations and estimations of the capabilities of technologies, and so must be read in the proper context.

With the publication of the WAVSEP 2017 benchmark close at hand, I wanted to take the opportunity to provide my take on the role of DAST tools in the context of the various technologies and trends that have recently become dominant and prominent in the field.

Where do they fit in? What do they excel in compared to alternatives? When is the right time to use them?

Using a variety of vulnerability detection solutions has become widespread in software development projects, with the key goal of detecting crucial vulnerabilities as early as possible.

With the introduction and maturity of new vulnerability-detection technologies (IAST/DAST/SAST/HYBRID/OSS), and the expected stream of (understandably) conflicting vendor claims, users may find it hard to discern which technologies may fit their needs, how to PRIORITISE their acquisition/integration, and when is the right time to engage each solution category.

In the following article, I will be covering a few of the key aspects in the integration of these toolsets into an SSDLC (secure software development life cycle) environment – the OVERALL EFFORT and the IDEAL TIMING of each solution category, and the benefit of SUPPORTED TECHNOLOGIES and CODE COVERAGE they provide under different circumstances.

In addition to being exposed to a wide variety of tools-of-the-trade, the article can also help the reader answer some basic questions when evaluating any one of these tools.

For those of us who somehow managed to escape the terms currently in use – this article covers the following technologies:

1) DAST – Dynamic Application Security Testing – Generic and Known Web Application Vulnerability Scanners that analyze a live application instance for security vulnerabilities. To further clarify – this is the category of tools that was covered in all the previous WAVSEP benchmarks.

  • This article specifically focuses on DAST solutions which are actively maintained and/or SSDLC-adapted, with the ability to verify potential vulnerabilities through some sort of Exploitation/Verification process (referred to as EV for the purposes of the article), either external or built into the detection algorithm, as opposed to "fuzzer"-like tools based primarily on algorithms that rely on identifying specific keywords in the response (a minimal sketch of this distinction appears after the category definitions below).

2) SAST – Static Application Security Testing – Generic & Known Application Vulnerability Code-Level Scanners that analyze source code and application configuration files for security vulnerabilities.

3) IAST – Interactive Application Security Testing – Generic and Known Application Vulnerability Debug/Memory Level Analysis Solutions that attempt to identify vulnerabilities on live application instances while also analyzing code structures in the memory and tracking the input flow throughout the application sections. This category is further divided into the following subcategories:
  • Passive IAST – IAST solutions that rely on traffic already being generated to identify potentially vulnerable sections, WITHOUT performing additional attack/exploit verifications (e.g. sending input with all the necessary exploitation characters, etc).
  • Active IAST – IAST solutions that verify potential vulnerability sinks/sources through the use of requests that verify the actual exploitability of the potential vulnerability (again, by issuing requests that contain input with all the necessary exploitation characters, or through similar means).


4) OSS - Open Source Security – the SAST equivalent of the mythological CGI-scanner – these solutions were, for the purposes of this article, integrated into the SAST category due to the similarity of their chart positioning and role, although they operate in an entirely different manner and focus only on identifying "known" vulnerabilities in 3rd-party libraries.
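At its core, the OSS approach boils down to matching declared 3rd-party components against a list of known advisories. The following minimal Python sketch illustrates that idea only; the dependency list and the advisory "database" are entirely made up for illustration:

```python
# A minimal, hypothetical sketch of the core idea behind OSS / software
# composition checks: declared 3rd-party dependencies are matched against a
# database of known vulnerable versions. Both the dependency list and the
# advisory database below are invented for illustration purposes.
KNOWN_VULNERABLE = {
    ("example-lib", "1.2.0"): ["EXAMPLE-ADVISORY-001 (hypothetical)"],
}

declared_dependencies = [("example-lib", "1.2.0"), ("other-lib", "3.1.4")]

for name, version in declared_dependencies:
    for advisory in KNOWN_VULNERABLE.get((name, version), []):
        print(f"{name} {version}: known vulnerability {advisory}")
```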


*) The various aspects of hybrid analysis tools are NOT covered in the various article sections and charts, and the same goes for network vulnerability scanners with application level features but without SSDLC adaptation, or cloud security solutions without SSDLC integrations.
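To illustrate the EV distinction mentioned in the DAST definition above, here is a minimal Python sketch contrasting a keyword-matching "fuzzer" check with a time-delay exploitation/verification check. The target URL, parameter and payloads are hypothetical, and real scanners are obviously far more elaborate:

```python
# Minimal sketch (hypothetical endpoint and payloads): the difference between a
# keyword-matching "fuzzer" check and an exploitation/verification (EV) check
# for a time-based SQL injection.
import time
import requests  # assumes the 'requests' package is available

TARGET = "http://target.example/item"  # hypothetical test target


def keyword_check(param_value):
    """Fuzzer-style check: flag the parameter if an error keyword appears."""
    response = requests.get(TARGET, params={"id": param_value})
    return "SQL syntax" in response.text  # prone to false positives/negatives


def time_delay_verification(param_value):
    """EV-style check: verify exploitability by measuring an induced delay."""
    start = time.time()
    requests.get(TARGET, params={"id": param_value})
    baseline_duration = time.time() - start

    start = time.time()
    requests.get(TARGET, params={"id": param_value + "' AND SLEEP(5)-- "})
    injected_duration = time.time() - start

    # The vulnerability is only reported if the payload measurably changed
    # the application's behaviour (the verification step).
    return injected_duration - baseline_duration > 4


if __name__ == "__main__":
    print("keyword match:", keyword_check("1'"))
    print("time-delay verification:", time_delay_verification("1"))
```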

So Which Solution Category Is Most Important in SSDLC?


Technology Support vs. Code/Application Coverage

The most obvious differentiation between the various scanning solution categories is the number of supported technologies.
IAST solutions typically support only a handful of development technologies, SAST solutions can support a myriad of modern and legacy programming languages, and DAST solutions are rarely affected by the development technology -

[Chart: relative technology support of DAST/SAST/IAST solutions - click to enlarge]


Supported Technologies

  • DAST: Any application with a WEB/REST/WebService back-end. Some exotic back-end listeners may be supported as well (web-sockets, DWR, AMF, etc.). The support also depends on compatibility with input delivery vectors, as well as compatible crawling OR session recording features.
  • SAST: Java, ASP.Net, C#.Net, VB.Net, PHP, Node.js, HTML/JS, SQL, Ruby, Python, C, C++, JSP, ASP3, VB6, VBScript, Groovy, Scala, Perl, Apex, VisualForce, Android/iOS/WinMobile, Objective-C, Swift, PhoneGap, Flex ActionScript, COBOL, ABAP, ColdFusion CFML
  • IAST: Java, .Net, PHP (few vendors), Node.js (few vendors), Ruby (few vendors), Python (experimental)

The charts and tables reflect the capabilities of high-end DAST/SAST/IAST solutions in the industry, and obviously, some SAST and IAST solutions may support a much smaller subset of technologies than the listed scope. Comparing the support for scanning non-web application variants (custom non-HTTP-based protocols) would drastically affect the chart as well.

It is also important to mention that DAST/Active IAST solutions in automated modes need to be able to "crawl" the technology, or at the very least support the creation of recorded "sessions" of a manual crawling process, and must also be able to send attack payloads through the "input delivery vectors" used by the application (e.g. query-string / body / JSON / XML / AMF / etc.).
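As a minimal illustration of what "input delivery vectors" means in practice (the endpoint and payload below are hypothetical), the same value has to be delivered differently depending on where the application actually reads its input:

```python
# The same payload delivered through three different input delivery vectors.
# A scanner that only mutates query strings would never reach the JSON entry
# point, regardless of how good its detection logic is.
import requests  # assumes the 'requests' package is available

TARGET = "http://target.example/search"  # hypothetical endpoint
PAYLOAD = "<script>alert(1)</script>"

# Query-string vector: ?q=<payload>
requests.get(TARGET, params={"q": PAYLOAD})

# Form/body vector: q=<payload> in a URL-encoded POST body
requests.post(TARGET, data={"q": PAYLOAD})

# JSON vector: {"q": "<payload>"}
requests.post(TARGET, json={"q": PAYLOAD})
```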

Although SAST / Passive IAST solutions also need to support "tracking" the input delivery vectors, they could, theoretically, identify hazardous code patterns without tracking the entire input-output flow, at the price of potential false positives being reported.
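A minimal Python sketch of that trade-off: both functions below contain the same textually "hazardous" sink pattern, but only the first can actually be reached by attacker-controlled input, which is exactly the distinction that requires full input-output flow tracking to make:

```python
# Why pattern matching without input-to-output flow tracking produces false
# positives: the same sink pattern appears twice, but only one occurrence is
# attacker-controlled.
import sqlite3


def lookup_user(connection: sqlite3.Connection, user_id: str):
    # Tainted flow: user_id comes from the request, so concatenating it into
    # the query is a real SQL injection.
    return connection.execute("SELECT * FROM users WHERE id = " + user_id)


def lookup_admin(connection: sqlite3.Connection):
    admin_id = "1"  # constant, not attacker-controlled
    # Same textual pattern (concatenation into execute), so a keyword/pattern
    # rule may still flag it - a false positive that a taint-tracking engine
    # could rule out.
    return connection.execute("SELECT * FROM users WHERE id = " + admin_id)
```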

The difference in technology support between IAST solutions and the rest is partially related to the fact that IAST implementations are relatively new compared to DAST or SAST implementations, but also to the amount of effort required to "integrate" the IAST engine into each new technology, and furthermore, to maintain the implementation with the release of newer versions of the same supported technologies (adaptations may be required for major Java/JVM versions, newer .NET framework versions, etc.).
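As a purely conceptual analogy (not how any specific vendor's agent is built), the Python sketch below hooks a command-execution sink at runtime; this is the kind of per-technology instrumentation an IAST engine has to implement and then maintain for every supported platform and version:

```python
# Purely illustrative analogy of IAST-style sink instrumentation: wrap a
# command-execution sink so that every call can be observed while the live
# application runs. Real agents hook at the framework/runtime level and
# correlate the observed call with tracked request input.
import os

_original_system = os.system


def _instrumented_system(command):
    # An agent would check here whether attacker-controlled input reached this
    # sink, and report the vulnerable code location if it did.
    print(f"[IAST-sketch] command-execution sink reached: {command!r}")
    return _original_system(command)


os.system = _instrumented_system

# Any later call made by the application now passes through the hook:
os.system("echo instrumented")
```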

So, although this technology-support "GAP" may shrink over time, the effort required to maintain technology support will grow at a faster pace, at least when compared to the pace of DAST and SAST technology compliance.

It is, however, worth mentioning that most IAST vendors focus on widely used technologies that cover as much ground as possible (Java / .Net / PHP / Node.js), and thus the actual importance of this "gap" will vary greatly between organizations, and may even be insignificant for some.

And what of coverage?

As it appears, being able to "support" a technology does not necessarily allow the testing tool to automatically cover the larger portion of its scope, which in turn may dramatically affect the results.

The type of technologies evaluated, the method of evaluation, the code deployment format, and even the "legal" ownership of the source code libraries may affect the actual sections being covered by the tool.

The coverage criteria will be easier to understand in the form of a table, rather than a chart:

Coverage (✓ = covered, ✗ = not covered, ? = partial/conditional):

Out-Of-The-Box (Wide Coverage, Min Effort)
  • DAST: ? - In Unauthenticated/Form/Basic/NTLM
  • SAST: ✓ - Most Scenarios
  • Passive-IAST: ? - In Tested/Used Instances
  • Active-IAST: ✗ - Depending on Implementation

End-To-End Coverage (Scan/Correlate Issues in All Client/FE/BE Layers)
  • DAST: ✓ - In Client Triggered Sequences
  • SAST: ✗ - Depending on Implementation
  • Passive-IAST: ✗ - Depending on Implementation
  • Active-IAST: ? - Depending on Implementation

3rd Party Code (Closed Source Libraries/Entry-Points)
  • DAST: ? - For "Visible" Methods
  • SAST: ✗ - No De-compilation
  • Passive-IAST: ✓ - Depending on Implementation
  • Active-IAST: ✓ - Depending on Implementation

Dead/Blocked Code (Non-Web Executable)
  • DAST: ✗ - Depending on Implementation
  • SAST: ✓ - Most Scenarios
  • Passive-IAST: ✗ - Depending on Implementation
  • Active-IAST: ✗ - Depending on Implementation

Conclusion:
DAST and SAST tools *typically* support more technologies, and as far as coverage is concerned -
  • DAST excels in end-to-end coverage AND "visible" 3rd-party coverage, but may require manual configuration for each application, or at the very least, an effective crawling mechanism that supports the front-end GUI technology.
  • SAST excels in out-of-the-box coverage, but lacks in 3rd-party software coverage (assuming it does not perform de-compilation of 3rd-party libraries), and may require manual syncing to "identify" associated end-to-end layers. That being said, early in development, it's probably the most likely method of getting early feedback on potential vulnerabilities.
  • IAST will typically be positioned somewhere between the two in the various coverage categories - it will require agent distribution to support end-to-end detection (if it is supported at all), but will require less effort to achieve wide coverage of application entry points (particularly in the case of Passive-IAST), and may have the advantage of providing in-depth coverage for CLOSED 3rd-party code/libraries.

Integration Effort vs. False Positives Effort

Throughout the development process, in both the early and later stages, the amount of effort invested in detecting these vulnerabilities can, knowingly or not, play a key role in the success of the detection process.

For every vulnerability detection solution and for every scenario, resources are required to integrate the chosen solutions, maintain the integration (not as easy as it sounds), and go over the results to filter high-impact and relevant issues.

Since all the phases share a limited amount of human and IT resources, overly complex integrations can DELAY (or sometimes even PREVENT) the detection of security issues to a point where the benefit of detecting them early no longer applies, while complex and tedious result analysis processes can easily cause the developers to ignore identified critical issues due to the sheer number of irrelevant results.

The overall effort of using each tool is not always properly estimated by potential consumers, and for the various tool categories it is focused on different areas.

Although the most obvious effort seems to be the initial integration of the vulnerability scanning process (for live instances, code, or a combination thereof), the process of verifying which of the results are REAL and EXPLOITABLE, to justify mitigation effort, may be just as tedious and even more time-consuming.

To put the upcoming results into proper perspective, it's crucial to understand that the relative ratio presented in the various charts is exactly that - relative, and that in fact, most modern solutions are FAR BETTER in terms of accuracy than previous generations of tools (early DAST / early SAST / fuzzers / parsers). To further emphasize the perspective, assume the accuracy of modern tools falls in the following context when compared to that of previous-generation tools:
[Chart: relative accuracy of modern tools vs. previous-generation tools]
Now that the relative scale has been clarified, we can begin to compare modern technologies against each other.

To simplify the analysis, we will evaluate effort required to integrate, maintain, and analyze the results (False Positives vs. True Positives) of various vulnerability detection tool categories, in relation to each other:


[Chart: relative integration and maintenance effort of the various solution categories - click to enlarge]

From the point of view of integration, some tools are easier to integrate than others, some require very little or no effort for maintenance, and some require a specific scan policy in order to maximize the result efficiency.

The justification for the chart diversity of the various categories is as follows:
  • In an environment without any live application instances, SAST solutions can still be used to scan source code repositories, either directly or through the upload of source code projects, simplifying the initial assessment process.
  • In an environment with live application instances, IAST solutions can be integrated simply by deploying an agent to the assessed application's baseline framework. Although the initial integration may be difficult (dedicated servers, configuration, potential performance issues in shared environments, etc.), once the solution is set up there's very little maintenance.
  • In an environment with live application instances, DAST solutions can easily be used to scan unauthenticated applications and will require minor configuration to scan applications with FORMS/HTTP authentication. More complex authentication methods, scan barriers (e.g. anti-CSRF mechanisms) or architectures (micro-service architectures, REST, WS, etc.) may require the creation of a dedicated policy and/or a manually recorded crawling session (a minimal configuration sketch follows this list). Active IAST solutions will require most of those prerequisites, in addition to deploying agents to the various tested layers.
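As a minimal illustration of that per-application configuration difference (the structure and field names below are hypothetical, not a specific scanner's API), an unauthenticated DAST scan often needs little more than a start URL, while an authenticated one needs login details and a "still logged in" indicator for re-authentication:

```python
# Hypothetical DAST scan configurations, for illustration only.
unauthenticated_scan = {
    "start_url": "http://target.example/",
}

form_authenticated_scan = {
    "start_url": "http://target.example/app/",
    "login": {
        "url": "http://target.example/login",
        "body": {"username": "scanner", "password": "s3cret"},  # hypothetical credentials
    },
    # Used to detect session loss and trigger re-authentication mid-scan.
    "logged_in_indicator": "Logout",
    # Anti-CSRF barriers or REST/micro-service back-ends may additionally
    # require a recorded crawling session instead of automated crawling.
}

print(unauthenticated_scan)
print(form_authenticated_scan)
```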


To complete the picture, we will now address the false positive effort aspect - presenting the *typical* ratio of false positives in the results provided by the various tool categories, and the corresponding effort required to identify actual issues throughout the analysis results:

[Chart: relative ratio of false positives in the results of the various solution categories - click to enlarge]



The relative ratio of false positives derives from the efficiency and number of methods that can be used by each technology to verify that identified vulnerabilities are not false positives:

[Chart: relative number of verification methods available to each solution category - click to enlarge]



The justification for the chart diversity of the various categories stems from the verification methods that can be used by each solution category:



(✓ = performed, ✗ = not performed, ? = partial/conditional)

Execution URL (Client-Driven Exploitation: Entry-Point-To-Vuln-Code)
  • DAST: ✓ - Exploit URL / Payload
  • SAST: ✗/✓ - Framework Dependent
  • Passive IAST: ✓ - Exploit URL
  • Active IAST: ✓ - Exploit URL and Payload

Execution CLI (Command Line Exploitation: CLI-Param-To-Vuln-Code)
  • DAST: ✗ - CLI Not Supported
  • SAST: ✓ - CLI Entry Point Detected
  • Passive IAST: ? - Theoretically Possible
  • Active IAST: ? - Only via Passive

Flow/Taint Analysis (Track Sequence of Methods to Activate Vulnerable Code)
  • DAST: ? - Irrelevant for Technology
  • SAST: ✗/✓ - Key-Word Dependent
  • Passive IAST: ✓ - In Effect
  • Active IAST: ✓ - In Effect

Input Effect on Sink (Track Live Input Effect on the Vulnerable Code)
  • DAST: ? - Through Binary Methods
  • SAST: ✗ - Not Performed
  • Passive IAST: ✓ - Commonly Used
  • Active IAST: ✓ - Commonly Used

Modified Input Effect (Track Modified Input Effect on the Vulnerable Code)
  • DAST: ✓ - Payload Effect Analysis
  • SAST: ✗ - Not Performed
  • Passive IAST: ✗ - Not Performed
  • Active IAST: ✓ - Payload Effect Analysis

Execution POC (Time Delay, External Access, Browser Effect, Response Diff)
  • DAST: ✓ - Commonly Used
  • SAST: ✗ - Not Performed
  • Passive IAST: ✗ - Not Performed
  • Active IAST: ✓ - Commonly Used

Exploitation POC (Full Scale Exploitation: Data Extraction, RCE, Shell Upload)
  • DAST: ✓ - In Some Solutions
  • SAST: ✗ - Not Performed
  • Passive IAST: ✗ - Not Performed
  • Active IAST: ✓ - In Some Solutions
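To make the "Execution POC" row more concrete, here is a minimal Python sketch of the Response Diff technique (the endpoint and injection strings are hypothetical): it compares the application's behaviour under a TRUE and a FALSE injected condition, and only reports when the two differ in the expected way.

```python
# Boolean-based "response diff" verification sketch: the finding is reported
# only if the injected TRUE condition behaves like the baseline while the
# injected FALSE condition does not - evidence the condition was evaluated.
import requests  # assumes the 'requests' package is available

TARGET = "http://target.example/item"  # hypothetical endpoint


def boolean_diff_verification(param_value: str) -> bool:
    baseline = requests.get(TARGET, params={"id": param_value})
    true_resp = requests.get(TARGET, params={"id": param_value + "' AND '1'='1"})
    false_resp = requests.get(TARGET, params={"id": param_value + "' AND '1'='2"})
    return true_resp.text == baseline.text and false_resp.text != baseline.text


if __name__ == "__main__":
    print("response-diff verification:", boolean_diff_verification("1"))
```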


Additional importance is given to the information the various tools provide to a HUMAN trying to discern the relevance of the issues reported, and to the toolset (if any) provided to "reproduce" or manually "verify" the identified security issues.

Furthermore, the false positive factor will become EVER MORE IMPORTANT with the increase in the volume of scanned applications. Weeding out false positives from actual issues will require time and effort from a security expert, and any misinterpretations will cost even more for the developers to mitigate.


There’s always the exception
A relatively obsolete and unmaintained DAST vulnerability scanner, in which little or no effort is invested in "verifying" detected vulnerabilities, will fare no better, and probably much worse, than a typical SAST/Passive-IAST solution in terms of the ratio of false positives identified.
On the other hand, a relatively immature or unmaintained SAST/Passive-IAST solution will fare much worse than presented in the charts - even in the effort required for integration and maintenance, especially compared to a modern DAST implementation.


Part II Coming Soon...



