How bad is a missing Content-Type header?
https://www.invicti.com/blog/web-security/how-bad-is-missing-content-type-header/ (May 23, 2024)

Warnings about a missing Content-Type header are a common sight in web application scan results. Invicti’s Sven Morgenroth explains how web browsers determine content types and shows how setting the right security headers can get rid of those warnings and eliminate one avenue of cross-site scripting attacks.

If it walks like a duck and quacks like a duck, it’s still not a duck unless it has an application/duck Content-Type header

Web design was a lot simpler 20 years ago. You had an invisible table over the whole height and width of the page, a few GIF images, and optionally some HTML. There were very few options to make your page stand out, apart from flashy images and choosing a full-page red background color (and the trusty old <blink> tag). And yet, some crafty designers used what they had at hand, invented clever hacks to bend the clunky old browser features to their will, and managed to build some modern-looking, easy-to-navigate websites.

I, on the other hand, don’t possess any of those skills. If you sent me back in time and I had to center a <div> in the middle of the page in 2004, I would probably spend the next four years waiting for Stack Overflow to be invented. 

In the present day, all of that has become way easier. You have features like Flexbox and entire CSS frameworks like Bootstrap that do all the heavy lifting for you. Browsers have come a long way since then, adding features that have allowed developers and designers to build web applications with desktop-level functionality. As people adopted them and invented new, creative ways to push the limits of existing solutions, even more features followed, including lots and lots of new data formats—but how do browsers know which format is which? 

Hey, what are you looking at?

If you open a modern news site like yahoo.com using the first version of Mozilla Firefox, you will notice some differences compared to what you’re used to, like missing content or the articles not being in the intended order. This is because many browser features we rely on for modern web design weren’t yet invented back in 2004. But on top of that, neither the magnifying glass icon on the search button nor the Yahoo logo itself is loading. And that’s a bit strange since, of course, images were clearly supported back then.

How the current version of yahoo.com is rendered in Firefox 1.0 from 2004, as compared to a modern Chrome browser

What was not supported, however, was the specific image format Yahoo uses for these buttons. They are not GIFs or JPGs but rather SVG files—an XML-based image format that has some unique advantages but was not yet supported in the first Firefox version. SVG is one of dozens of file formats added over the years, including image formats such as WebP. With this ever-increasing number of image file formats that all need to be parsed differently, it can be hard for a browser to figure out what it’s actually looking at.

Sure, you could try going by the specific file extension, such as .png or .jpg, but sometimes these might not be available, like when multiple file types are served from a central endpoint. (For the security implications of this approach, see our post on local file inclusion.) Besides, the browser might not even be looking at an image as such, as with SVG files. SVG is an XML-based image format, so how can the browser be sure it is dealing with an image and not an XML document?

The simple solution to all these problems was to create a dedicated Content-Type header to state the data type upfront.

Meet the Content-Type header

The Content-Type header is a bit like the address on an envelope. To send the data to the right place internally, the browser first needs to read the header value to determine what kind of data it is dealing with. If it says image/png, the browser will try to process a PNG file. If it’s application/xml, it will try to display an XML file. (As a side note, XML has more than one possible Content-Type value: you have text/xml for XML data readable by humans and application/xml for data that’s unreadable to the average user. Personally, I always use application/xml since I have yet to see an XML file that’s easily readable.)

When dealing with static files, your server will often automatically set the Content-Type header for you. To do this, it may deduce the type of content based on the file extension or by actually examining the file. If you’re ever unsure yourself, a great tool for figuring it out is the Linux file utility. Here’s a quick experiment to show how it works:
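Here’s roughly what that looks like on the command line (a representative run; file’s exact output wording varies by version):

$ curl -s https://www.google.com -o google.unknown
$ file google.unknown
google.unknown: HTML document text, ASCII text, with very long lines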

This example uses curl to download an HTML page from google.com and then saves it locally as a file called google.unknown. We then give that content to the file utility to figure out the content type—which it does, telling us correctly that it’s an HTML document. Smart, but how did it know? We certainly didn’t give it a known extension (in fact, we gave it an .unknown extension). A look at the relevant format definition file from the file utility repo provides the answer:
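Abridged, the relevant entries (from the Magdir/sgml definitions; exact flags and spacing may differ between file versions) look like this:

0	string/cWt	\<!doctype html	HTML document text
0	string/cWt	\<head	HTML document text
0	string/cWt	\<title	HTML document text
0	string/cWt	\<html	HTML document text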

When examining file content, multiple indicators can suggest that a document is an HTML file. Since some of those are present in the file we downloaded, file knows it’s dealing with an HTML file, and this is one way a web server can automatically set the content type. 

How browsers determine the content type

Getting back to browsers, we already know they use the Content-Type header to figure out what kind of file they are dealing with. But what happens if that header is missing? Let’s test it out.

I wrote a simple script that just prints onto the page whatever you put into the message GET parameter:
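A minimal Python sketch of such an endpoint (illustrative only; the file name and port are arbitrary) could look like this:

# echo_server.py: reflect the "message" GET parameter back verbatim,
# deliberately sending no Content-Type header with the response.
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class EchoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        message = parse_qs(urlparse(self.path).query).get("message", [""])[0]
        self.send_response(200)
        self.end_headers()  # note: no Content-Type header is set
        self.wfile.write(message.encode())

HTTPServer(("127.0.0.1", 8000), EchoHandler).serve_forever()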

Let’s try to add some HTML content, maybe a red heading for those 2000s vibes:
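For example, assuming the sketch endpoint above is running locally on port 8000 (in practice the payload would be URL-encoded):

http://localhost:8000/?message=<h1 style="color: red;">This is a heading</h1>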

Even though the Content-Type response header is missing and the request doesn’t mention HTML anywhere, the browser still knows exactly what we are trying to achieve and renders the heading as expected.

Clearly, the browser (like the server) also has ways to automatically detect the content type. When the browser attempts to interpret the media type of an HTTP response by analyzing the response body, this is called MIME sniffing. But did it actually infer the type from the content? Maybe it just defaults to the text/html type? This calls for another experiment. 

Let’s take the same string as before and add the characters GIF89a at the beginning: 
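http://localhost:8000/?message=GIF89a<h1 style="color: red;">This is a heading</h1>

(Again using the assumed local endpoint from the earlier sketch.)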

Now, the browser shows a white box instead of HTML content. Let’s save this string under the name box.unknown and give it to our old friend, the file utility, to see what’s going on:
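A representative run (the exact dimensions depend on the payload bytes):

$ file box.unknown
box.unknown: GIF image data, version 89a, 26684 x 8241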

Both file and the browser apparently interpret it as a GIF image now. This is because GIF files always start with the string GIF8, followed by the version (in this case 9a) and then some bytes specifying the image dimensions and other data. The weird image size is caused by the browser (and file) interpreting some of the HTML content as size values.

The dangers of uncontrolled sniffing

The weird thing is that, even with the prepended GIF89a characters, this is still all proper and valid HTML. There’s an HTML heading tag, there’s a style attribute, and even the tag content itself insists it’s a heading—and why would it lie to you? But still, browsers interpret it as a GIF.

It’s not hard to imagine how that might go wrong in the other direction. If you let your users upload any data they want and then serve it without a proper Content-Type header, then—even if you do some upload filtering to ensure a file seems valid—there could still be surprises once the content is served and the browser interprets it as something else entirely.

Of course, there’s also the security side. Depending on where dynamically generated user input is reflected on your page, your browser might be tricked into treating a harmless text file as something more dangerous. If it decides to treat some content as an HTML page, this might be abused to execute client-side JavaScript code within the context of your domain—a long-winded way of saying you are risking cross-site scripting (XSS) attacks. 

All this means you should always set a Content-Type header. Stating the correct content type upfront not only helps to ensure the proper functioning of your website but also makes it harder for attackers to trick your browser into performing unintended actions and internally directing input data to the wrong parser. But even assuming you always have the proper Content-Type header set, there is one other security feature you should also enable.

Content-Type alone is not enough

No matter how careful you are, browsers might sometimes straight up ignore your declared content type if they deem it to be wrong. For example, imagine you have a pretty strict Content Security Policy that only allows scripts from the same site to be loaded:

Content-Security-Policy: default-src 'self'

This prevents the browser from loading any external scripts but allows scripts from your own origin. But even if you have a page with a proper Content-Type header that should not normally be interpreted as application/javascript, you might still be out of luck if the page allows dynamic user input.

To see why, let’s assume you are the owner of example.com. An attacker could simply use a script block such as the following to bypass your CSP directive:

<script src="https://example.com/api/message?data=alert(1)//"></script>

Even if the message API endpoint only returns data as text/plain, this will still lead to XSS because the browser is trying to be smarter than you. In this case, the browser assumes the Content-Type header is incorrect because the response is being used in the context of a script include, which you would only want to do if the data you’re including is actually a JavaScript file. Based on this, the browser decides it knows better, ignores the text/plain type, and treats the response as application/javascript.

The solution to this problem is to not only explicitly state the Content-Type header value but also to disable MIME sniffing by setting the X-Content-Type-Options: nosniff HTTP header. This leaves no room for creative interpretation by the browser, and CSP bypasses like the one above will no longer allow attackers to inject potentially malicious code.
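For the example endpoint above, the two relevant response headers would then be:

Content-Type: text/plain
X-Content-Type-Options: nosniff

With both in place, the script include fails because the browser refuses to execute a resource that declares itself as text/plain.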

X-Content-Type-Options is only one of several HTTP response headers that are essential for security. Read our white paper on HTTP security headers to get the full picture.

Never trust a browser with your content types

In summary, it’s never a good idea to allow the browser to decide the content type based on MIME type sniffing. For secure and predictable behavior, always ensure both of the following are done:

  • Explicitly set the expected Content-Type header value for each resource you are serving.
  • Always set the X-Content-Type-Options header to nosniff to prevent the browser from second-guessing and overriding your declared content type.

While such warnings might not point to clear and exploitable vulnerabilities, paying attention to scanner findings about missing Content-Type and X-Content-Type-Options headers is part of basic security hygiene. On top of that, if your data includes user-controlled input, make sure you validate it, ensure it is always escaped properly, and, where appropriate, assign it a content type that cannot be used to execute JavaScript code.

Why Predictive Risk Scoring is the smart way to do AI in application security
https://www.invicti.com/blog/web-security/predictive-risk-scoring-is-the-way-for-appsec-ai/ (May 16, 2024)

Everyone is adding LLMs to their products, but Predictive Risk Scoring from Invicti takes a more thoughtful and effective approach to using AI in application security. We sat down with Invicti’s Principal Security Researcher, Bogdan Calin, for an in-depth interview about the internals of this unique feature and the importance of choosing the right AI models for security tools.

Invicti recently launched its Predictive Risk Scoring feature, which, as a genuine industry first, can generate accurate security risk predictions before vulnerability scanning even begins. To recap briefly, Predictive Risk Scoring uses a custom-built machine learning model that is trained on real-world vulnerability data (but not customer data), is operated internally by Invicti, and can closely estimate the likely risk level of a site to aid prioritization.

Following up on our initial post introducing this new capability and its potential to bring a truly risk-driven approach to application security, here’s a deeper dive into the technical side of it. We sat down with Bogdan Calin, Invicti’s Principal Security Researcher and the main creator of Predictive Risk Scoring, for a full interview not only about the feature itself but also about AI, ML, and the future of application security.

Companies in every industry, including security, are rushing to add AI features based on large language models (LLMs). What makes Invicti’s approach to AI with Predictive Risk Scoring different from everyone else?

Bogdan Calin: The most important thing about implementing any AI feature is to start with a real customer problem and then find a model and approach that solves this problem. You shouldn’t just force AI into a product because you want to say you have AI. For Predictive Risk Scoring, we started with the problem of prioritizing testing when customers have a large number of sites and applications and they need to know where to start scanning. It was clear from the beginning that using an LLM would not work for what we needed to solve this problem, so we picked a different machine learning model and trained it to do exactly what we needed.

Why exactly did you choose a dedicated machine learning model for Predictive Risk Scoring versus using an LLM? What are the advantages compared to simply integrating with ChatGPT or some other popular model?

Bogdan Calin: In security, you want reliable and predictable results. Especially when you’re doing automated discovery and testing like in our tools, an LLM would be too unpredictable and too slow to solve the actual customer problem. For estimating the risk levels, we needed a model that could process some website attribute data and then make a numeric prediction of the risk. LLMs are designed to process and generate text, not to perform calculations, so that’s another technical reason why they would not be the best solution to this problem. Instead, we decided to build and train a decision tree-based model for our specific needs.
 

Having a dedicated machine learning model is perfect for this use case because it gives us everything we need to get fast, accurate, and secure results. Compared to an LLM, our model is relatively lightweight, so processing each request is extremely fast and requires minimal computing resources. This lets us check thousands of sites quickly and run the model ourselves without relying on some big LLM provider and also without sending any site-related data outside the company.
 

The biggest drawback of using LLMs as security tools is that they are not explainable or interpretable, meaning that the internal layers and parameters are too numerous and too complex for anyone to say, “I know exactly how this result was generated.” With decision tree models like the one we use for Predictive Risk Scoring, you can explain the internal decision-making process. The same input data will always give you exactly the same result, which you can’t guarantee with LLMs. Our model is also more secure because there is no risk of text-based attacks like prompt injections.
 

And maybe the biggest advantage compared to an LLM is that we could build, train, and fine-tune the model to do exactly what we wanted and to return very accurate results. Just mathematically speaking, those risk predictions are fully accurate for at least 83% of cases, but the useful practical accuracy is much higher, closer to 90%.

Could you go a bit deeper into those accuracy levels? We’ve been giving that number of “at least 83%,” but what does accuracy really mean in this case? How is it different from things like scan accuracy?

Bogdan Calin: The idea of Predictive Risk Scoring is to estimate the risk level of a site before scanning it, based on a very small amount of input data compared to what we would get from doing a full scan. So this prediction accuracy really means confidence that our model can look at a site and predict its exact risk level in at least 83% of cases. And this is already a very good result because it is making that prediction based on very incomplete data.
 

For practical use in prioritization, the prediction accuracy is much higher. The most important thing for a user is not the exact risk score but knowing which sites are at risk and which are not. From this yes/no point of view for prioritization, our model has over 90% accuracy in showing customers which of their sites they should test first. Technically speaking, this is probably the best estimate you can get without actually scanning each site to get the full input data, no matter if you’re using AI or doing it manually.
 

One important thing is that predictive risk scores are completely different from vulnerability scan results. With risk scoring, we are looking at a site before scanning and estimating how vulnerable it seems. A high risk score indicates that a site has many features similar to vulnerable sites in our training data, so the model predicts that it carries a high risk. In contrast, when our DAST scanner scans a site and reports vulnerabilities, these are not predictions or estimates but facts—the results of running actual security checks on the site.

Many organizations and industries are subject to various restrictions on the use of AI. How does Predictive Risk Scoring fit into such regulated scenarios?

Bogdan Calin: Most of the regulations and concerns about AI are specifically related to LLMs and generative AI. For example, there are concerns about sending confidential information to an external provider and never knowing for sure if your data will be used to train the model or exposed to users in some other way. Some industries also require all their software (including AI) to be explainable, and, as already mentioned, LLMs are not explainable because they are black boxes with billions of internal parameters that all affect each other.
 

With Predictive Risk Scoring, we don’t use an LLM and also don’t send any requests to an external AI service provider, so these restrictions don’t apply to us. Our machine learning model is explainable and deterministic. It is also not trained on any customer data. And, again, because it doesn’t process any natural language instructions like an LLM, there is no risk of prompt injections and similar attacks.

AI is undergoing explosive growth in terms of R&D, available implementations, and use cases. How do you think this will affect application security in the near future? And what’s next for Predictive Risk Scoring?

Bogdan Calin: We are lucky because, at the moment, it’s not easy to use publicly available AI language models to directly create harmful content like phishing and exploits. However, as AI models that are freely available for anyone to use (like llama3) become more advanced and it becomes easier to use uncensored models, it’s likely that future cyberattacks will increasingly rely on code and text generated by artificial intelligence.
 

I expect Android and iOS to have small, local LLMs running on our phones eventually to follow our voice instructions and help with many tasks. When this happens, prompt injections will become very dangerous because AI voice cloning is already possible with open-source tools, so voice-based authentication alone cannot be trusted. Prompt attacks could also come via our emails, documents, chats, voice calls, and other avenues, so this danger will only increase.
 

AI-assisted application development is already very common and will become the normal way to build applications in the future. As developers get used to having AI write the code, they may increasingly rely on the AI without thoroughly verifying code security and correctness. Because LLMs don’t always generate secure code, I would expect code security to decrease overall.
 

For Predictive Risk Scoring, I can say that we are already working on refining and improving the feature to get even better results and also to expand it by incorporating additional risk factors.

Ready to go proactive with your application security? Get a free proof-of-concept demo!

How to choose the right application security tools
https://www.invicti.com/blog/web-security/how-to-choose-right-application-security-tools/ (May 10, 2024)

The right application security tools can improve your security posture, development workflows, and bottom line—while the wrong tools might only add complexity and inefficiencies. Learn how to choose the tools that make sense for your organization, especially for security testing.

Modern application security must be built in from the outset and reinforced continually throughout the software development lifecycle. Even organizations with mature application development practices need automated tools to successfully and repeatably secure their software in complex, fast-changing environments.

Security leaders commonly focus on ensuring software security through runtime protection measures, with major cloud service and infrastructure providers even including basic security tools as standard for cloud-based deployments. Keeping track of CVEs to identify and patch vulnerable software products and components is also part of the AppSec routine. But to stay ahead of threats, application security also needs to incorporate vulnerability testing to proactively identify and mitigate security risks.

This post compares the most widely used categories of AppSec tools for testing: SAST and SCA for static analysis, DAST for dynamic application and API security testing, and IAST to bridge the gap between static and dynamic analysis. We explain how the tools work, explore their strengths and tradeoffs, and help you select tools that work for your organization.

Why you need AppSec tools

Multiple converging trends are making software harder to secure—and increasing the risks for users. Code bases keep growing larger and more complex, with more internal and external interactions than ever. Cloud-native and microservice-based development approaches that rely heavily on APIs present new challenges. Software draws on components from more sources, in multiple languages, with varying provenances.

All this adds up to application security complexity that is beyond the ability of any dev or DevOps team to manage through manual interactions with a security team alone, and that grows and changes at a speed far outpacing traditional approaches based on penetration testing. As those teams transform to build a DevSecOps culture, they need automation and smarter tools to identify security issues as early as possible and speed up remediation.

Attackers know how vulnerable you are, which is why they’re especially focused on web applications and APIs to extract valuable and sensitive data. With Verizon’s 2024 Data Breach Investigations Report showing a massive 180% jump in attacks involving the exploitation of application vulnerabilities in 2023, having the right application security testing tools and processes in place is critical if you want to avoid becoming the next data breach headline.

Types of web application security testing tools

No single category of tool can cover every aspect of web application security, so organizations typically combine multiple AppSec tools to protect applications throughout their lifecycle. Security testing is the crucial foundation of application security, allowing you to find and remediate issues all across the development and operations pipeline. While organizations have traditionally relied on internal and external pen testers for this, modern SaaS security solutions have made it possible to bring a lot of security testing in-house. Let’s go through the major categories of application security testing tools. 

Static application security testing (SAST)

SAST tools automatically analyze the source code, bytecode, or binaries before the application is deployed, attempting to identify vulnerabilities so they can be fixed before they get into production. SAST tools can be used to define code security requirements alongside other code quality checks, either directly in the IDE or as separate steps in the toolchain.

SAST is also called white-box testing or inside-out testing because it has access to the internal application code, allowing SAST tools to pinpoint potential security vulnerabilities in the code. The main advantage of SAST is the ability to test code fragments and components even if they’re not yet part of a runnable application.

Being tied to static source code is a virtue but also a limitation for SAST tools. For one thing, they require access to that code. Sometimes that can’t be provided, especially for third-party modules or products, meaning you can only test first-party code. They are also highly prone to false positives because they cannot know the wider context in which the code will operate, increasing the risk that real vulnerabilities will be ignored in the alert noise.

Different programming languages require separate SAST tools, complicating the workflow and potentially increasing costs. Finally, static code analysis cannot prove that a suspected issue will indeed be exploitable, and it won’t find issues that only appear at runtime, such as misconfigurations, business logic vulnerabilities, or vulnerabilities introduced by dynamic dependencies.

Dynamic application security testing (DAST)

DAST tools check the security of running applications in staging and real-world environments, probing them from the outside by safely mimicking attacker behaviors. Dynamic analysis works from the outside with no visibility into source code, so it’s sometimes called black-box testing. Because it doesn’t require access to source code, DAST is technology-agnostic and can be used to test web applications regardless of the specific language or combination of languages.

DAST tools can uncover misconfigurations, encryption or authentication problems, and exploitable OWASP Top 10 vulnerabilities, including cross-site scripting (XSS) and SQL injection. As such, they provide far more visibility into the overall security posture, covering all running first-party and third-party code, no matter where it originated or how it’s deployed.

With the most modern DAST solutions, it is now possible to perform vulnerability scanning not only in staging and production but also earlier on in the development pipeline, for instance on build candidates. They can also integrate directly into CI/CD pipelines at multiple stages of the DevOps process to trigger scans automatically and retest fixes.

In terms of accuracy, DAST tools tend to have far lower false positive rates than static code analysis, with leading solutions like Invicti even being able to test whether an issue is remotely exploitable. Being technology-agnostic, DAST is far easier and quicker to set up than AppSec tools that require source code access, often delivering the first results within hours of deployment. It’s also the only approach to AppSec testing that can be used in development, staging, and production.

Interactive application security testing (IAST)

IAST tools (aka gray-box testing systems) can be considered to occupy the middle ground between SAST and DAST, as they aim to combine runtime insights with access to software internals. Some standalone solutions require code instrumentation while others act like a plugin to the app runtime, with the latter approach also being taken by integrated IAST. Depending on the product, testing might be triggered by a test suite during the build process or by a DAST scanner.

While for most products the “interactive” in the name can be misleading, implementations such as Invicti’s DAST-based true IAST are indeed interactive because the DAST scanner and server-side IAST sensor work in tandem at all times. For each IAST-enabled security check, DAST initiates a specific test and analyzes the app’s external reactions to it. At the same time, the sensor is plugged into the application runtime and observes the same reactions from the inside, with the combined results being presented to the user in a unified format.

Unlike standalone IAST products, which have become much less popular in recent years, the Invicti approach doesn’t require any code instrumentation and does not limit the whole security testing process to any specific language or technology. The DAST scanner still provides technology-agnostic test coverage, with Invicti’s true IAST delivering additional insights on top of already detailed DAST results for supported languages.

Software composition analysis (SCA)

While it doesn’t have “testing” in the name, software composition analysis (SCA) is another vital part of the AppSec toolbox, allowing organizations to identify open-source code components within the application, highlight known vulnerabilities within them, and often also ensure license compliance. Note that while conventional SCA relies on source code and dependency analysis and thus shares many of the limitations of SAST, some DAST solutions (such as Invicti) can perform dynamic SCA and web technology analysis to detect vulnerable libraries, frameworks, and server versions.

API security testing

Many organizations (and online resources) treat application security and API security as completely separate areas when, in fact, the two are closely interrelated. An API, or application programming interface, is merely a non-GUI way of accessing application functionality, so you should treat APIs as an integral part of your attack surface and use app security testing tools that can cover both GUI and API vulnerabilities.

Read more about building API security testing into the SDLC

How to choose the right tools for your team and organization

As you plan investments in AppSec tooling, there are many factors to consider. Here are just a few of them:

  • Effectiveness. How do the tools you’re considering stack up on authoritative industry measurements? Taking DAST as an example, how did the tool perform in tests such as Shay Chen’s web vulnerability scanner benchmark? Can the tool find everything you need to see? Can it crawl and scan JavaScript-heavy SPAs? Can it run authenticated scans? Is it configurable and customizable enough to safely probe deeply into each of your unique applications?
  • Accuracy. If your security engineers and developers can’t trust the reports they get from a tool, they will need to manually validate everything it tells them—which is expensive and fundamentally incompatible with rapid development. Features like Invicti’s Proof-Based Scanning can sidestep the problem of false positives by automatically confirming the majority of exploitable vulnerabilities.
  • Ease of deployment and time to value. After acquiring an AppSec tool, companies often struggle to operationalize it and start seeing actual value. What will it take to get started and then go live with the tool? Compared to other testing approaches, DAST stands out for its ease of deployment and tech-agnostic testing coverage. Leading DAST solutions can go from a standing start to effective mitigations in a matter of days, if not hours.
  • Visibility and flexibility. Will the tool only cover a narrow section of your overall attack surface, as in the case of SAST tools that are necessarily limited to specific languages and constrained by code availability? Can you easily make test results a part of your vulnerability management process, whether integrated into the tool or using a dedicated security platform? Can you use it as a security gauge for the entire organization?
  • Workflow integration. No security solution stands alone, but AppSec tools need to integrate especially tightly and efficiently into development workflows. Ask yourself how a tool will fit into your existing processes and toolchains. Will it help you run security testing not only earlier in the SDLC but also at multiple points across it? Will it help improve collaboration and eliminate silos? Is it going to deliver vague information that triggers finger-pointing, or will you get detailed bug reports accompanied by proof?
  • Maturity and support. The AppSec market is full of commercial and open-source tools, but a tool never runs or maintains itself. Can you count on vendor support and guidance from onboarding to full-scale production? Is the tool mature enough to safely run all the testing you need and deliver results you can automate without fear of false alarms and delays?
Get the free Web Application Security Buyer’s Guide for more in-depth tips

The importance of choosing the right AppSec tools and processes

Not all AppSec tools are equal, and neither are all approaches to implementing a cybersecurity program in practice. From picking ineffective tools just to check a compliance box to neglecting proactive testing, the wrong AppSec tools and processes can add complexity, cost, and frustration to an already challenging development process. Conversely, having the right security tools in place can not only improve your security posture but also help your development, operations, and security teams work more effectively—all while reducing the costs of external security assessments by bringing most of your security testing in-house.

Having the right application security tools is now a practical necessity to proactively manage cybersecurity and business risk. Learn how Invicti has brought proactive prioritization to application security testing with Predictive Risk Scoring.

What is DevSecOps and how is it evolving?
https://www.invicti.com/blog/web-security/what-is-devsecops/ (May 1, 2024)

DevSecOps has matured from a radical new approach to the cornerstone of practical application security. Integrating security checks and practices into DevOps processes has proved to be a necessity to keep up with rapid development. Modern security testing tools have also matured to where they can be embedded into agile workflows without hindering dev work.

DevSecOps is a software development approach that integrates security practices into DevOps processes. Implementing DevSecOps efficiently requires organizations to make security an integral part of software quality by using automated security tools in their CI/CD pipeline. Crucially, the DevSecOps approach embeds application security into the entire development and operations process, so that with the right security tools built into the DevOps pipeline, security risks are addressed as early as possible.

Changing the place and role of security in application development

Evolution is the key concept when looking at DevSecOps. The growing pace and business importance of software development first forced a rethink of traditional waterfall methodologies, leading to the widespread adoption of DevOps as a far more efficient way to build more software faster. The downside of this leap forward was that security processes were still isolated from the main software development process, resulting in security often being an afterthought—even as the world increasingly came to rely on web applications where security threats are far more numerous than for desktop software.

The logical next step was to also bring security into DevOps. Unlike QA testing, security testing was traditionally seen as completely external to development and not easily automated, so attempts at DevSecOps only became possible once the right security tools were available. At the same time, applications were becoming more complex and distributed, commonly using service-based architectures with microservices communicating via APIs. To build new business functionality at the required speed, developers came to rely extensively on third-party application frameworks and open-source components, so securing your own code could no longer guarantee that your whole app was secure.

To build secure software while keeping up with business requirements, organizations needed the right combination of tools and cultural changes to make security a part of software quality—but also to tie DevOps into the wider cybersecurity process in the organization.

Adding security to DevOps needs more than a new acronym

With DevOps in place, smaller teams are expected to deliver results faster and at a lower cost, making automation a necessity, not a luxury. New features can be added to operational production software at any time, potentially many times a day, so development and IT operations can no longer work in isolation. The DevOps approach takes the principles of agile programming and applies them to the entire development and operations pipeline. Instead of a slow progression from initial requirements to a finished product release, the development process uses continuous integration and continuous delivery (CI/CD) pipelines in a continuous and highly automated loop of modification, verification, and release. 

Instead of technology silos for each isolated phase, development and operations tools and processes are now tightly integrated and interrelated. If security testing is to operate in this automated workflow, it, too, must leave its silo and integrate deeply into the SDLC so that security flaws are found and remediated without slowing down releases. In other words, bolting security onto DevOps is not DevSecOps.

What makes DevSecOps different from DevOps

While better suited to rapid release cycles than more traditional methodologies, DevOps still does not integrate security into its processes, and security teams continue to work separately from developers. Security vulnerabilities are handled differently from other issues, and development teams often treat them as someone else’s problem, leaving security to the “security people.” Apart from the security implications, this limits the agility of DevOps processes because security issues are discovered and fixed manually, interfering with the automated flow of development and operations.

DevSecOps practices aim to incorporate security throughout the DevOps workflow. DevOps teams need to make some crucial cultural and technical changes to become DevSecOps teams:

  • Devs, operations teams, and security teams must work together and take shared responsibility for any security flaws in the project.
  • DevOps relies heavily on process automation, so security checks and related tickets must also be automated to maintain efficiency.
  • Security issues must be found and collaboratively remediated (by patching or otherwise) as early as possible to avoid delays and rework further downstream.
  • Visibility into the DevOps process also needs to incorporate security, including organizational security measures.

Picking DevSecOps tools that work

Effective DevSecOps requires security tools that can be integrated with the software development life cycle for automated web application security testing in a continuous process. While many automated security testing tools can be used, SAST and DAST are the most common choices:

  • Static application security testing (SAST): Software security starts with secure code, so static source code analysis tools continue to be used in the development pipeline. While they can pinpoint issues in the code and are a natural fit for automated dev toolchains, static analysis tools are known to deliver a lot of false positives. They are also limited in scope to the available source code, so they cannot test external dependencies or APIs. Being static, they won’t find runtime issues such as misconfigurations, so they are limited to early development phases.
  • Dynamic application security testing (DAST): Dynamic analysis tools probe a running application from the outside to provide a wider view of application security. Unlike simpler web application security scanners, modern enterprise-grade DAST tools can be used at multiple stages of the SDLC. When integrated into a CI/CD pipeline (see the sketch after this list), DAST can check for a wide range of vulnerabilities, including some that wouldn’t show up in static testing, like misconfigurations, inadequate security controls, and other runtime issues. Advanced tools can even show which issues are exploitable, greatly speeding up triaging and remediation while minimizing false alarms.
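To make the CI/CD integration more concrete, here is a sketch of what a DAST job in a pipeline can look like. The dast-cli command and its flags are hypothetical placeholders, not any real product’s CLI; substitute your scanner’s actual integration:

# Hypothetical GitHub Actions job (a fragment of a workflow's jobs: section).
# "dast-cli" and its flags are illustrative placeholders, not a real tool.
dast-scan:
  runs-on: ubuntu-latest
  needs: deploy-staging
  steps:
    - name: Run DAST scan against staging
      env:
        STAGING_URL: ${{ secrets.STAGING_URL }}
      run: |
        # Fail the build if high-severity issues are found
        dast-cli scan --target "$STAGING_URL" --fail-on high --report results.json
    - name: Upload scan report
      uses: actions/upload-artifact@v4
      with:
        name: dast-report
        path: results.json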

But as important as it is to have the right tools for the job, DevSecOps is about culture as much as it is about technology. Developers, operations staff, and security experts all need to work together with the common goal of delivering functional and secure software on schedule. This includes developers being more aware of security considerations such as secure design and threat modeling but also security staff being familiar with the development process—and the right tech can streamline their work and eliminate friction.

How Invicti supports DevSecOps

Invicti Enterprise is an industry-leading DAST solution designed with scalable automation in mind. When integrated into the software development lifecycle, it helps organizations implement DevSecOps approaches by providing a single vulnerability testing and management platform that covers both development and operations. Issue tracker integrations and best-in-class accuracy enable process automation in existing development workflows. With efficient and accurate testing, you can ensure a secure development lifecycle and seamless collaboration between teams to maximize the benefits of DevSecOps.

The same Invicti DAST can also do double duty for scheduled external vulnerability scanning in a continuous process. Combined with web asset discovery and proactive prioritization with Predictive Risk Scoring, Invicti’s approach to security scanning is as close as you can get to having a real-time view of your application security risk.

Frequently asked questions

Is DevSecOps the same as shift left?

Although they are both related to integrating security into development, DevSecOps and shift left are two separate concepts. Shifting left is a general term for all efforts to start security testing earlier in the development process, while DevSecOps is a workflow and culture that aims to integrate traditionally separate development, operations, and security teams.
 
Learn more about shifting left and right.

Can you use DAST in a DevSecOps process?

Advanced DAST tools can be used at multiple points of DevSecOps workflows, making them uniquely suitable for this process. Apart from the security benefits, having a common DAST platform for all stages of the DevSecOps process also improves visibility and can not only streamline application security testing but also improve the overall security posture.
 
Read more about DAST.

Do you need special DevSecOps tools?

While DevSecOps is mostly about process and culture, allowing the use of existing DevOps and security tools, some tool types and functionalities are especially beneficial when integrating development, security, and operations into a unified process. Modern DAST tools, in particular, can provide automation, accuracy, and workflow integrations that mesh well with the entire process, from the first runnable builds to production environments.
 
Read more about DAST in the SDLC.

AppSec prioritization goes proactive with AI-backed Predictive Risk Scoring
https://www.invicti.com/blog/web-security/predictive-risk-scoring/ (April 23, 2024)

Predictive Risk Scoring is a new feature from Invicti that infuses your security and development workflows with the power of advanced insights. Engineered as a new and early pre-scan step in your security strategy, it uses machine learning to help you anticipate and prioritize your biggest application security risks before you even start testing, preserving critical resources and proactively enhancing your security posture.

Imagine you have to check for danger on the other side of an impassable mountain you cannot walk around. What would you do? A low-tech solution would be to tunnel through and have a look. Swing by swing with a pickaxe to break the stone, and then shovel by shovel to haul the broken rock away. You hope you will get there in the end, but it’s quite literally a mountain of a task. Even though you’re making progress, it’s a seemingly endless, taxing effort.

Now, imagine you’re digging away, and someone comes to you with a high-tech solution: a camera drone. Boom—the task has been enormously simplified, and within minutes, you know what’s lurking on the other side.

This is exactly the kind of impact that Invicti’s new Predictive Risk Scoring feature can have on your AppSec efforts. Instead of your security and development teams figuratively swinging pickaxes and shovels to inch their way through a mountain of vulnerabilities, you can now use Predictive Risk Scoring to first focus their efforts on your most at-risk web applications. 

The earlier you know your risks, the more proactive you can be

Knowing and managing risk is a cornerstone of cybersecurity, while accurate prioritization is the key to controlling and reducing those risks with the resources you have. Make no mistake—your resources will always be limited relative to the scale of security measures required to fully protect organizational assets. In application security, risk and prioritization have long been sticking points, leaving security leaders forever on the lookout for more efficient and reliable methods to guide the efforts of their AppSec teams. 

Currently, application security prioritization only comes late in the testing process, when you’ve done your testing and are looking at long lists of reported vulnerabilities. Assigning severity levels across potentially hundreds of vulnerabilities is necessary to get your teams working on remediation in order of urgency. It’s a reactive and suboptimal process, where you’re waiting for test results to arrive and only then reacting to them. Moreover, this type of triage lacks the risk context crucial in establishing which assets and vulnerabilities truly need priority treatment.

Invicti’s Predictive Risk Scoring changes the game of vulnerability prioritization with a proactive rather than reactive approach. Now you can see which assets carry the highest risk before you even run a single test—and that’s as early in the process as you can get.

Zeroing in on real risk with data science and AI

Remember how that camera drone helped you change the entire approach to the task at hand and sidestep a massive manual effort by taking a smarter and more technologically advanced route? In Predictive Risk Scoring, AI/ML is the drone that adds a new dimension to your security vision and saves your teams hundreds of hours of manual work.

Leveraging a custom AI prediction model trained on real-world data, Invicti has added Predictive Risk Scoring to its existing asset discovery functionality to automatically calculate a risk score for each web asset. The model takes a number of technical parameters for each site or app and uses them to make a data-based prediction of the risk level correlated with that combination of parameters and values. Every time the discovery tool runs, any newly identified web assets also automatically get a risk score. 

Invicti’s Predictive Risk Scoring calculates risk scores using a dedicated in-house machine learning model. It does not use a large language model (LLM), process sensitive customer data, or send any data to external AI providers.

In effect, Predictive Risk Scoring says: “This web application presents similar indicators to applications that were found vulnerable in the past, so this is a high-risk asset for you.” Gaining any risk insight in the application security domain is already a massive win (as CISOs well know), let alone with the scale and level of confidence that the Invicti model provides. Perhaps most importantly, Predictive Risk Scoring assigns that risk rating proactively before any application is even scanned. This feature is an industry first and yet another win for application security programs. 

How Invicti proactively calculates web asset risk

Predictive Risk Scoring leverages the analytical and predictive capabilities of machine learning to provide a data-based estimate of the security risk for each of your web assets. By getting this insight before you scan, you’re arming yourself with additional intel about your most likely risk areas so you can efficiently prioritize testing and remediation efforts. 

The machine learning model that underpins Predictive Risk Scoring was carefully selected to maximize confidence in the results and trained to recognize signs of security risk based on analyzing over 150,000 real-life websites and applications. Starting with thousands of site risk indicators, the model was gradually refined to focus on just over 200 of the most impactful ones. These include many things a pentester would typically look for first, like site age, number of form inputs, support for deprecated SSL/TLS versions, and so on.
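As a toy illustration of the general mechanism only (Invicti’s actual model, feature set, and training data are not public beyond what is described here), a decision tree regressor learns to map site attributes to a numeric risk score:

# Toy example only; not Invicti's model, features, or data.
from sklearn.tree import DecisionTreeRegressor

# Hypothetical features per site: [site_age_years, form_inputs, legacy_tls]
X_train = [[15, 42, 1], [1, 3, 0], [9, 27, 1], [2, 6, 0]]
# Risk labels that would come from full scans of those sites (made up here)
y_train = [0.92, 0.08, 0.71, 0.15]

model = DecisionTreeRegressor(max_depth=3).fit(X_train, y_train)

# Score a new, not-yet-scanned site from the same attributes
print(model.predict([[11, 33, 1]]))  # deterministic, explainable output

The same inputs always produce the same score, and the tree’s decision path can be inspected, which is what makes this class of model explainable in a way LLMs are not.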

Screenshot of Invicti Enterprise showing Predictive Risk Scoring

After extensive fine-tuning, the model can currently predict the risk level of a site based on non-intrusive requests, delivering a risk score with at least 83% confidence overall and over 90% confidence for web applications with critical vulnerabilities. With such accurate recommendations, you get ample predictive insight into what needs testing and fixing first. 

Reinventing the application security testing process

In terms of the security testing process, this new step comes in early—in fact, before any vulnerability testing is even initiated. Following the automated asset discovery phase, each of your identified web assets is now also assigned a risk score. 

When you’re dealing with hundreds or even thousands of assets, Predictive Risk Scoring provides an invaluable guide for deciding which assets to focus on next for optimal testing and remediation. Even before seeing the first vulnerability scan result, you’re already making decisions based on credible risk levels, not guesswork.

Predictive Risk Scoring in the continuous process of application security testing
Invicti’s Predictive Risk Scoring gives you an automatic risk score before security testing even begins

Fact-based decision-making in web application security used to be elusive, but advances in automated testing are finally making it a reality. Predictive Risk Scoring joins Invicti features such as Proof-Based Scanning to add another dimension to your security posture visibility. In effect, you’re getting a picture of your potential attack surface hotspots before you spend any time or commit any of your resources. Plugged into the security testing process, this lets you make informed security decisions every step of the way.

One small step for Invicti, one giant leap for AppSec

The ability to predict risk before spending valuable time and resources to scan, identify, and remediate vulnerabilities is key to improving efficiency and boosting confidence in your security program. Armed with this insight, you can quickly prioritize work to secure your most at-risk web apps and assets first, gaining the upper hand over threat actors—who might themselves already be using AI to find your weaknesses. 

Predictive Risk Scoring benefits in a nutshell:

 

  • Fully automated risk-based prioritization of testing and remediation resources
  • Confidence from the top down that your AppSec program is risk-centric
  • Using machine learning to counter the threat of AI-augmented attacks
  • Scalable and continuous fact-based security when paired with Invicti’s automated discovery and scheduled scanning 

Ready to get started? Predictive Risk Scoring is already available in Acunetix Premium, Acunetix 360, and Invicti Enterprise. Get a demo now, or contact your customer success rep with any questions about the feature.

Invicti Launches First AI-Enabled Predictive Risk Scoring for Application Security Testing
https://www.invicti.com/blog/news/invicti-launches-first-ai-enabled-risk-scoring-for-application-security-testing/ (April 23, 2024)

Invicti Security has announced a new Predictive Risk Scoring feature to help organizations proactively prioritize their most at-risk web assets. Based on a custom in-house AI/ML model, the feature indicates which of your websites and applications are most likely to be vulnerable to attacks.

Unique capability accelerates risk identification with proactive prioritization of web application vulnerabilities.

AUSTIN, Texas—(April 23, 2024)—Invicti, the leading provider of application security testing solutions, today announced its new AI-enabled Predictive Risk Scoring capability. The feature assigns predicted risk to applications and helps organizations gain a strategic view of their overall application security risk. 

Predictive Risk Scoring allows organizations to determine which web applications should be scanned first and proactively prioritize remediation efforts. This new capability remaps the application security testing process to profile and calculate a risk score on all discovered web applications—before any scanning begins.

Risk management and prioritization are ongoing challenges in application security, given the high volume of vulnerabilities discovered across web applications and APIs. While vulnerability severity helps determine which vulnerabilities might require attention over others, there’s still a lack of information around exploitability and risk.

“Everyone working in cybersecurity needs to work faster, with more confidence that they are doing the right thing to protect their organizations. This new advancement in AppSec testing helps make that a reality,” said Neil Roseman, CEO at Invicti. “CISOs can now look at their application attack surface using a risk-based approach, guaranteeing that their AppSec program is focusing efforts in the right areas.”

Predictive Risk Scoring addresses the gap in vulnerability severity information by applying an AI model to discovered assets and calculating a risk score from a set of 220 parameters with a minimum 83% confidence level. Among the many advantages of this innovation, no scanning resources are required and no customer data is needed to assess the risk score.

“Protecting applications is crucial for companies of all sizes, but it’s challenging with the complexity and noise in the application security market, amplified by the adoption of AI. Now more than ever, security teams need to prioritize their efforts to address the riskiest issues, with speed and scale,” said Melinda Marks, Practice Director, Cybersecurity at ESG. “Risk-based prioritization can help organizations best deploy their resources and optimize efficiency to secure their environments to support business growth.”

Predictive Risk Scoring is currently available to Invicti customers using both Acunetix and Invicti (formerly Netsparker) product lines.

About Invicti Security

Invicti Security—which acquired and combined DAST leaders Acunetix and Netsparker—is on a mission: application security with zero noise. An AppSec leader for more than 15 years, Invicti provides best-in-DAST solutions that enable DevSecOps teams to continuously scan web applications, shifting security both left and right to identify, prioritize and secure a company’s most important assets. Our commitment to accuracy, coverage, automation, and scalability helps mitigate risks and propel the world forward by securing every web application. Invicti is headquartered in Austin, Texas, and has employees in over 11 countries, serving more than 4,000 organizations around the world. For more information, visit our website or follow us on LinkedIn.

###

Media Contact

Kate Bachman
Invicti
kate.bachman@invicti.com

NIST CSF 2.0: The world’s favorite cybersecurity framework comes of age
https://www.invicti.com/blog/web-security/nist-csf-2-0-cybersecurity-framework-comes-of-age/ (Fri, 12 Apr 2024)

The NIST CSF 2.0 is a long-awaited update to the NIST cybersecurity framework, bringing the document in line with the realities of modern information security. Reorganized and expanded to apply to all types and sizes of organizations, the CSF now also comes with examples and extra resources to aid implementation.

The NIST cybersecurity framework has been a go-to resource for defining cybersecurity strategies, policies, and activities ever since version 1.0 was published back in 2014. Originally intended specifically for US companies operating critical infrastructure, it soon gained popularity across all industries and is used by CISOs worldwide. February 2024 saw the launch of version 2.0 of the framework, renamed and restructured to bring it in line with real-life usage and modern cybersecurity challenges. Just as importantly, the NIST CSF 2.0 comes with practical implementation examples, quick start guides, and extensible community profiles for specific industries and use cases.

A brief history of the CSF

The original Framework for Improving Critical Infrastructure Cybersecurity was published in 2014 by NIST (the National Institute of Standards and Technology) in response to an Obama administration executive order calling for a standardized cybersecurity framework to help structure efforts around securing critical infrastructure. Originally intended to guide organizations managing critical infrastructure services in the US private sector, the framework proved popular with organizations of all sizes worldwide. Later updated to version 1.1, the document became informally known simply as the NIST cybersecurity framework.

In the wake of mounting attacks a decade later, notably the SolarWinds supply chain compromise and the Colonial Pipeline ransomware incident, the Biden administration issued its own executive order on cybersecurity. Among its many provisions, the order once again directed NIST to prepare and issue suitable guidance. Two years later, in October 2023, NIST released a public draft of version 2.0 of its framework, followed by the final document in February 2024, which included enhancements based on community feedback.

Now officially renamed the Cybersecurity Framework (CSF), the current document is intended to “…reflect current usage of the Cybersecurity Framework, and to anticipate future usage as well.” Let’s take a look at the changes made to the framework itself and its accompanying resources in an effort to expand its usefulness far beyond the originally intended scope.

Changes in version 2.0 compared to CSF 1.1

The most obvious change to the framework core is that while v1.1 divided cybersecurity efforts into five core functions, version 2.0 has six: Govern, Identify, Protect, Detect, Respond, and Recover. The Govern function is the newcomer, mostly incorporating existing outcomes (subcategories) pulled from other functions. This new high-level home for governance functions highlights the importance of top-down planning and oversight in ever more complex environments.

The new Govern function also reflects the expanded focus of the document, moving beyond protecting only critical infrastructure and towards wider applicability. Every organization needs to first understand its unique operating context before defining its governance needs, risk management expectations, and strategies. The Govern function includes the following categories, the majority of which come from the Identify function of v1.1 (see the sketch after this list):

  • Organizational Context
  • Risk Management Strategy
  • Roles, Responsibilities, and Authorities
  • Policy
  • Oversight
  • Cybersecurity Supply Chain Risk Management (C-SCRM)
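
To make the reorganized core easier to picture, here is a minimal Python sketch of the CSF 2.0 core as a plain mapping of functions to categories. Only the Govern categories are filled in, mirroring the list above; the other functions’ category lists are deliberately left empty rather than guessed at, so treat this as an illustration of the structure, not a reference copy of the framework.

    # Illustrative sketch of the CSF 2.0 core: six functions, each with categories.
    # Only Govern is populated here (matching the list above); the other lists are
    # left empty on purpose, so consult the framework itself for the full contents.
    CSF_2_0_CORE: dict[str, list[str]] = {
        "Govern": [
            "Organizational Context",
            "Risk Management Strategy",
            "Roles, Responsibilities, and Authorities",
            "Policy",
            "Oversight",
            "Cybersecurity Supply Chain Risk Management (C-SCRM)",
        ],
        "Identify": [],
        "Protect": [],
        "Detect": [],
        "Respond": [],
        "Recover": [],
    }

    for function, categories in CSF_2_0_CORE.items():
        print(f"{function}: {len(categories)} categories listed")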

It’s interesting to see that managing supply chain security risk is considered so important that it gets its own governance category—a reflection both of the CSF’s roots in critical infrastructure security and of the growing dangers of supply chain attacks. Looking at recent security scares such as the xz-utils backdoor, prioritizing supply chain security as an integral part of governance is definitely a good idea for any organization.

To further underscore the expanded scope and applicability of the CSF, NIST clearly states:

The Functions, Categories, and Subcategories apply to all ICT used by an organization, including information technology (IT), the Internet of Things (IoT), and operational technology (OT). They also apply to all types of technology environments, including cloud, mobile, and artificial intelligence systems.

NIST resources to help apply the CSF in practice

The original NIST framework was more a formal guideline document than a practical guide. When using it for their own purposes outside its original scope, organizations would need to mix and match the high-level outcomes to suit their specific needs. They’d also have to interpret the abstract language in the context of their industry to arrive at the controls and actions to be implemented. In contrast, the CSF v2.0 provides a wealth of additional assets or (to quote NIST) “a suite of resources (documents and applications) that can be used individually, together, or in combination over time as cybersecurity needs change and capabilities evolve.”

Within the framework core itself, the subcategories (i.e. the lowest-level items) now come with examples that illustrate how outcomes can be implemented in different situations. This makes the framework core far easier to read, adapt, and apply to your specific organization. Also new in version 2.0 are quick start guides covering the various tools provided to help use the CSF in practice.

Informative reference mappings are also provided to show how various frameworks and other documents correspond to relevant NIST documents and guidelines.

Getting familiar with the NIST cybersecurity framework 2.0

Compared to the previous version, CSF 2.0 is far more accessible and user-friendly, so anyone involved in cybersecurity would do well to visit the CSF resource center and get familiar with the available tools and resources. The interactive CSF 2.0 reference tool is the best place to start exploring the structure of functions, categories, and subcategories, especially with the new examples giving some substance to the abstract formal definitions.

Every organization that has a cybersecurity program needs a framework to make sure there are no gaps in its security controls and policies—and its resulting cybersecurity posture. With all the changes introduced to make it more universal and easier to use, NIST CSF v2.0 should be at the top of every CISO’s bookmarks list, whether or not using it is mandatory for your organization’s cybersecurity compliance.

Frequently asked questions

What is the NIST Cybersecurity Framework?

Currently at version 2.0 (the NIST CSF 2.0), the NIST Cybersecurity Framework is a guidance document that helps organizations from all industries and sectors manage cybersecurity risks. The latest version adds a wealth of additional resources and practical examples to the core framework document.
 
Read about applying a cybersecurity framework to web application security.

Why do organizations need to use a cybersecurity framework?

By design, a cybersecurity framework helps organizations consider every possible aspect of systems and data security when planning and implementing security policies and controls. Following a structured framework helps minimize the risk of security gaps and vulnerabilities that could lead to data breaches and other incidents if exploited.
 
Read about high-profile data breaches and the lessons to learn from them.

Who can use the NIST CSF?

The updated NIST CSF is intended as a resource for organizations of all sizes regardless of industry or location. As with the previous version, organizations can mix and match the security functions and categories to apply them in various scenarios, from full-scale enterprise risk management to a basic cybersecurity program for a small or medium business.
 
Read about five steps to improve your cybersecurity posture.

The xz-utils backdoor: The supply chain RCE that got caught
https://www.invicti.com/blog/web-security/xz-utils-backdoor-supply-chain-rce-that-got-caught/ (Fri, 05 Apr 2024)

The xz-utils backdoor could have been the most serious software supply chain compromise since the SolarWinds Orion hack. Carefully hidden in a widely-used open-source library, the sophisticated backdoor could have allowed remote code execution (RCE) on millions of systems if it hadn’t been accidentally discovered. This post summarizes the story so far and asks what this latest attempt means for the future of software security.


What you need to know

 

  • The xz-utils package in versions 5.6.0 and 5.6.1 includes a malicious backdoor that could, in specific circumstances and configurations, allow remote access to SSH sessions for remote code execution (RCE) on selected Linux systems.
  • As a precaution, all Linux users are advised to ensure their xz-utils version is earlier than 5.6.0 and downgrade if necessary, especially if running public sshd (see the version-check sketch after this list). While only a small percentage of systems worldwide could be directly vulnerable, this may change with further analysis.
  • All signs point to a multi-year, carefully planned supply chain compromise operation by an advanced threat actor that may have also tampered with other open-source packages.
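
As a rough companion to the version advice above, here is a minimal Python sketch that shells out to the xz binary and flags the known-backdoored releases. It assumes xz is on the PATH and prints a typical version string; distribution packaging can differ, so treat this as a starting point rather than an authoritative check.

    import re
    import subprocess

    # The two releases known to contain the backdoor.
    AFFECTED = {"5.6.0", "5.6.1"}

    # Ask the local xz binary for its version (raises FileNotFoundError if absent).
    out = subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout
    match = re.search(r"(\d+\.\d+\.\d+)", out)

    if match and match.group(1) in AFFECTED:
        print(f"xz {match.group(1)} is a known-backdoored release: downgrade now")
    else:
        print(f"xz version looks unaffected: {match.group(1) if match else 'unknown'}")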

On March 29, 2024, software engineer Andres Freund reported finding a backdoor in the liblzma library, part of the xz-utils package. What started as an investigation into a drop in OpenSSH performance on a pre-release Debian Linux system turned into a global security scare that is still unfolding. Luckily, the backdoor was discovered before the compromised library version became more widely used, so relatively few systems could be immediately affected. The bigger story is how the backdoor was created, hidden, and distributed—and how it could have compromised the security of millions of systems had it gone into widespread use.

How xz-utils got backdoored

Open-source software is commonly downloaded in packages called tarballs that are compressed using one of several popular compression utilities—most often Gzip (making .tar.gz files), but XZ is also used (resulting in .tar.xz files). XZ compression is also used internally by some programs, making the xz-utils package a necessary part of any Linux system.
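
Incidentally, liblzma is woven so deeply into everyday tooling that even CPython links against it: the standard-library lzma module is a binding to liblzma, and tarfile relies on it for .tar.xz archives. A minimal sketch (the tarball path is a placeholder invented for illustration):

    import lzma      # CPython's lzma module is a binding to liblzma itself
    import tarfile

    # Round-trip some bytes through XZ compression, exercising liblzma directly.
    compressed = lzma.compress(b"hello, xz")
    assert lzma.decompress(compressed) == b"hello, xz"

    # Listing a .tar.xz goes through the same library. The filename below is a
    # placeholder and will raise FileNotFoundError unless such a file exists.
    with tarfile.open("some-project.tar.xz", mode="r:xz") as tar:
        for member in tar.getmembers():
            print(member.name, member.size)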

The xz-utils project was created and maintained by Lasse Collin until a helpful and very insistent contributor going by the name of Jia Tan recently succeeded in fully taking over the project on GitHub. Among Jia’s latest commits were alleged compression performance improvements to the liblzma library, published in versions 5.6.0 and 5.6.1 of xz-utils. These are the versions that included the backdoor, but the compression utility was only a stepping stone to a much bigger prize.

One piece of software that depends on the liblzma library is OpenSSH, though only in some system configurations, specifically where it’s been patched to play nicely with system notifications from the systemd process manager (notably in Debian Linux). In that setup, any running SSH server depends on liblzma—and getting control of those remote shell sessions was the ultimate goal.

The payload: Malicious code? What malicious code?

The backdoor was reported by Red Hat as CVE-2024-3094, which describes it simply as “malicious code” in the package. What makes it different from most software vulnerabilities is that the source code itself is clean and secure. The backdoor is hidden in separate “test” files and only reassembled and inserted into the library during compilation. What follows is a hugely simplified overview of what is known about the backdoor, especially considering that every step is obfuscated and performed with fiendishly clever tricks using innocent text-processing utilities.

Before source code written in a language like C or C++ can be executed, it needs to be compiled from a text file into a binary file. This is a complicated process, so most open-source projects also include ready-made compilation scripts (makefiles) alongside the source code and any additional files. For convenience, the whole thing can be downloaded as a single tarball package—and this is where Jia Tan put the malicious code.

To avoid detection by scanners, the malware binary was, in effect, cut up into several pieces, and the gaps filled up with junk. For additional stealth, it is only included in the packaged tarball, so it’s not there if anyone examines the individual files in the repository. But if the package from an infected tarball is compiled on a system that meets specific configuration requirements, the build scripts reassemble the malicious code and attach it to the liblzma library, where it waits for a specific function call from a remote secure shell (SSH) session.
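
The actual reassembly logic was buried in obfuscated m4 and shell fragments within the build scripts, but the general idea of splitting a payload across innocent-looking test files and recombining it only at build time can be shown with a deliberately harmless Python toy (every value here is invented for illustration):

    # Toy illustration only: the real backdoor used obfuscated m4/shell scripts,
    # not Python. Payload fragments hide inside what look like binary test files,
    # padded with junk bytes so that no single file matches a scanner signature.
    JUNK = b"\x00JUNK\x00"

    fake_test_files = [
        JUNK + b"echo this could " + JUNK,
        JUNK + b"be anything" + JUNK,
    ]

    def extract(blob: bytes) -> bytes:
        # Strip the junk padding to recover the hidden fragment.
        return blob[len(JUNK):-len(JUNK)]

    # Only when "built" do the innocent-looking pieces form a usable payload.
    payload = b"".join(extract(blob) for blob in fake_test_files)
    print(payload.decode())  # echo this could be anything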

If all the conditions are met, a malicious actor can activate the backdoor by connecting to a compromised system over SSH and sending their encrypted access key. When successful, this could allow them to bypass the entire authentication process and gain unauthenticated remote access to the system.

Now imagine what would happen if this wasn’t caught and the backdoored unstable versions became stable versions that were gradually incorporated into all major Linux distributions during the next few years, spanning thousands if not millions of Linux servers and workstations worldwide… No wonder this CVE scored 10 out of 10 for severity.

The helpful contributor who took over and then vanished

If the idea of the maintainer of a long-standing and widely used open-source project putting a backdoor in that project sounds unthinkable, that’s because it is. As noted, the malicious code was introduced by the mysterious Jia Tan, aka JiaT75, who only became the maintainer shortly before. When the story broke, people started piecing together the online activity and history of this Jia—and discovered someone who seemingly only popped into existence in October 2021.

Around that time, JiaT75 started making small contributions to various open-source projects, most likely to build credibility rather than engage in malicious activity (although with a curious preference for projects that somehow touched SSH). Getting involved in xz-utils, Jia gradually became more and more active, eventually gently persuading the founder to relinquish control of the venerable project in the name of innovation (with the aid of several other suspiciously eager contributors). With that, Jia was finally ready to upload the backdoored bits and pull off what Michał Zalewski has called “one of the most daring infosec capers” he has ever seen.

While the “Jia Tan” moniker was clearly intended to look Chinese and nearly all of Jia’s logged activity is from a Far East time zone, researchers have pointed out several oddities that don’t fit the “Chinese software enthusiast” cover story. Notably, Jia’s active hours correspond very closely to 9 am to 5 pm in Central Europe. The user was also active during some major Chinese holidays but inactive during some European holidays. Finally, a handful of login timestamps include the CET time zone rather than the usual one, as if someone forgot to change the system time before logging on.
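
The time zone reasoning is easy to reproduce: git commits record the author’s UTC offset, so any “+0800” timestamp can be re-expressed in CET to see what local working hours it would imply. A tiny sketch with an invented timestamp:

    from datetime import datetime, timedelta, timezone

    # Hypothetical commit recorded with a +08:00 offset, late afternoon local time.
    commit_ts = datetime(2024, 2, 20, 17, 0, tzinfo=timezone(timedelta(hours=8)))

    # The same instant in CET (+01:00) lands neatly inside a 9-to-5 workday.
    print(commit_ts.astimezone(timezone(timedelta(hours=1))))
    # prints: 2024-02-20 10:00:00+01:00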

One theory is that the JiaT75 account is not an individual but an advanced threat actor group, with many pointing to APT29 (aka Cozy Bear) as a group with similarly stealthy operational patterns and sufficiently advanced tech skills. You may remember them from the SolarWinds Orion hack—also a supply chain attack, as it happens. Whatever the case, Jia (unsurprisingly) vanished into thin air when the backdoor was reported and has not been seen since.

A new era for exploiting the reliance on open-source software

Compared to the devastation of something like the MOVEit Transfer data breaches, this whole story might seem like a non-issue: nobody was hacked (that we know of), nothing was lost, and the compromise attempt was foiled. On top of that, only a narrow subset of systems could be targeted at this point, and only in specific circumstances. While that’s all true, the details of this incident should be ringing the loudest software supply chain security alarm bells since the SolarWinds Orion incident.

The technical innovation of the attack was to hide malicious code not in the source but in innocent-looking additional files packaged with it. The sophistication, stealth, and multi-year patience of Jia Tan point to an advanced threat actor group with the resources and motivation to gamble on a long game where the prize could be persistent RCE on thousands of systems. Yes, the xz-utils backdoor was found, but mostly by coincidence and sheer luck, as Andres Freund himself is quick to point out. Though an experienced software engineer, Freund is not a security researcher, nor was he even investigating that specific package. It was a very lucky find for everyone.

It’s pretty clear there’s a high risk that a similar future attempt may succeed. Given the scale of the operation, it seems unlikely that a global threat actor would invest all that time and effort into compromising only one niche package, targeting (at least initially) a very narrow group of systems. Which raises the question: How many other open-source packages have already been backdoored by extremely helpful contributors with no prior history?

“While the audacity of the whole operation is striking, it’s not surprising that someone managed to hide a backdoor in plain sight, given how much developers have to rely on third-party components and libraries that often come with their own dependencies,” notes Sven Morgenroth, Staff Security Engineer at Invicti. “It’s like with Node.js projects, where you might have relatively few direct dependencies but get a node_modules folder full of additional ones. This is a problem for security because even small coding mistakes (not to mention deliberate backdoors) can quickly propagate from dependencies to your otherwise secure application.”
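
Morgenroth’s node_modules point is easy to quantify on any Node.js checkout by comparing the dependencies a project declares against what actually gets installed. A rough sketch (the project path is a placeholder; scoped packages and nested node_modules directories make real counts even higher):

    import json
    from pathlib import Path

    project = Path("./my-app")  # placeholder path to any Node.js project checkout

    # Direct dependencies the project declares for itself.
    manifest = json.loads((project / "package.json").read_text())
    direct = manifest.get("dependencies", {})

    # Everything that actually got installed, direct and transitive alike.
    installed = [p for p in (project / "node_modules").iterdir() if p.is_dir()]

    print(f"{len(direct)} direct dependencies, {len(installed)} installed packages")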

The open-source ecosystem was built on mutual trust and support. As both erode and the maintainers of crucial software components are left to their own devices, it looks like Jia Tan and friends are actively stepping in to backdoor and wiretap the very foundations of the information age. The xz-utils incident merely serves as a reminder and proof point that supply chain attacks are indeed the #1 global software security threat. “Given the sheer amount of third-party code powering our applications and the lack of volunteers to audit these components, it’s close to impossible to assess the security of an application without using some form of automation,” concludes Morgenroth.

In the meantime, we’re keeping an eye on this story and will update here as new details emerge.

Why DAST makes the perfect security posture gauge
https://www.invicti.com/blog/web-security/why-dast-makes-the-perfect-security-posture-gauge/ (Thu, 28 Mar 2024)

The variety of available DAST tools that differ widely in purpose and quality has resulted in many security leaders underestimating the flexibility and usefulness of modern DAST. And that’s a shame because the right solution in the right hands can serve as an accurate gauge of application security posture while also unlocking efficiencies all across the organization. This post showcases just a few highlights from the Invicti white paper “DAST: The CISO’s Security Posture Gauge.”

Focused on detection and response, security leaders might not think of DAST tools as an essential component of their AppSec toolbox. All too often, external vulnerability scanning is only performed during periodic third-party tests, giving you snapshots of your security posture that can be months out of date. What if you could run your own tests as often as you need and at no extra cost per test? Welcome to fact-based application security, where a quality DAST becomes your security posture gauge.

Read the Invicti white paper “DAST: The CISO’s Security Posture Gauge”

Don’t take someone else’s word for it—run your own security testing

CISOs and other security leaders are expected to maintain an impregnable security posture and accurately report on it, yet for application security, they often have to rely on second-hand data and other people’s assurances. Getting your own data typically requires a compliance audit or a third-party assessment like a penetration test, which means you have to wait weeks or months for your vulnerability reports—and even then, you are depending on that third party to deliver accurate information. Worse still, that information will become outdated very soon, and until the next test rolls around, you will only know your security posture in the past, not here and now.

Ideally, you would want to run your own tests whenever you want an update. That way, you can make fact-based decisions based on current information, without taking anyone’s word for it and without asking anyone’s permission. But how can you even do that? To assess your realistic exposure, it would be best to probe every corner of your public-facing application environments and look for vulnerabilities that could be exploited by malicious actors. Oh—and do this safely, accurately, automatically, and independently of the development and deployment internals. However you slice it, the only realistic way to do that is with a good, reliable DAST solution.

The perfect tool for self-service AppSec assessments

The limitations of some web vulnerability scanners have given rise to myths and misconceptions that keep DAST tools off the radar for many security leaders—after all, aren’t they only used by QA internally and then pentesters externally? In reality, the “DAST” label applies to many different tools that were designed for different purposes. For example, a vulnerability scanner designed to aid manual penetration testing might excel in that role but be of little use to a CISO looking for an automated way to gauge security posture. To do that, you need an advanced and scalable DAST solution that can run hands-off on any required schedule and deliver the right data to the right people.

Compared to a more traditional approach based on commissioning external penetration tests, a reliable self-service DAST gives you up-to-date vulnerability information as often as you need it, and can repeatably run thousands of test payloads against thousands of attack points in a fraction of the time. Leading solutions even include automatic exploitation functionality to safely check which vulnerabilities are remotely exploitable and need fixing first. And all this on your own schedule and without taking anything on trust, giving you a first-hand overview of your actual security posture.

Intrigued? We’ve put together a detailed white paper that takes an in-depth look at all these topics and more, dispelling common DAST myths along the way, demystifying the market, and showing how the versatility of advanced DAST solutions can unlock efficiencies and savings—not only for the security organization, but also for engineering.

Read the Invicti white paper “DAST: The CISO’s Security Posture Gauge”

Invicti Launches New Integration with ServiceNow to Deliver Automated Workflows for Vulnerability Discovery Through Remediation
https://www.invicti.com/blog/news/invicti-launches-servicenow-integration-delivers-automated-workflows-vulnerability-discovery-remediation/ (Tue, 26 Mar 2024)

Invicti Security has announced a new integration with ServiceNow to use Invicti’s DAST and IAST scan data in ServiceNow’s Application Vulnerability Response (AVR) for a seamless experience with the two platforms. The joint effort enables Invicti to create better experiences and drive value for customers built with ServiceNow.

AUSTIN, Texas — (March 26, 2024) — Invicti, the leading provider of application security testing solutions, today announced an integration with ServiceNow to pull scan data from Invicti’s leading DAST and IAST into ServiceNow’s Application Vulnerability Response (AVR) for a seamless experience between the two systems. The joint effort enables Invicti to create better experiences and drive value for customers built with ServiceNow.

ServiceNow’s expansive partner ecosystem and new partner program are critical to support the $500 billion market opportunity for the Now Platform and associated partner services. The revamped ServiceNow Partner Program recognizes and rewards partners for their varied expertise and experience to drive opportunities, open new markets, and help joint customers in their digital transformation efforts.

As a Registered Build Partner, Invicti offers a certified integration that allows for greater prioritization and potential impact assessment of code flaws that may lead to an exploit. This ability to better show developers and security teams where to focus their efforts furthers Invicti’s mission to provide AppSec with Zero Noise to customers and the industry. The integration is available in the ServiceNow Store.

“Being a part of ServiceNow’s ecosystem is a major benefit for customers working to streamline and automate their vulnerability management and overall application security programs,” said John Mandel, Chief Engineering Officer at Invicti. “Strong integration between our tools has been an ask from our customers and we’re excited to deliver on this value driver for them.”

“Partnerships succeed best when we lean into our unique skills and expertise and have a clear view into the problem we’re trying to solve,” said Erica Volini, Senior Vice President of Global Partnerships at ServiceNow. “Invicti extends our reach well beyond where we can go alone and represents the legacy and goals of the Now Platform. I am thrilled to see the continued innovation we will achieve together to help organizations succeed in the era of digital business.”

Invicti also has integrations with ServiceNow’s Vulnerability Response system, allowing bi-directional functionality and customizations for customers to gain better visibility and automation from vulnerability discovery through remediation, saving developer time and improving security posture through stronger vulnerability management and application security.

About Invicti Security

Invicti Security—which acquired and combined DAST leaders Acunetix and Netsparker—is on a mission: application security with zero noise. An AppSec leader for more than 15 years, Invicti provides best-in-DAST solutions that enable DevSecOps teams to continuously scan web applications, shifting security both left and right to identify, prioritize and secure a company’s most important assets. Our commitment to accuracy, coverage, automation, and scalability helps mitigate risks and propel the world forward by securing every web application. Invicti is headquartered in Austin, Texas, and has employees in over 11 countries, serving more than 4,000 organizations around the world. For more information, visit our website or follow us on LinkedIn.

ServiceNow, the ServiceNow logo, Now, Now Platform, and other ServiceNow marks are trademarks and/or registered trademarks of ServiceNow, Inc. in the United States and/or other countries.

Use of Forward‑Looking Statements
This press release contains “forward‑looking statements” about the expectations, beliefs, plans, intentions and strategies relating to the market opportunity and growth of the Now Platform. Forward‑looking statements are subject to known and unknown risks and uncertainties and are based on potentially inaccurate assumptions that could cause actual results to differ materially from those expected or implied by the forward‑looking statements. If any such risks or uncertainties materialize or if any of the assumptions prove incorrect, our results could differ materially from the results expressed or implied by the forward‑looking statements we make. We undertake no obligation, and do not intend, to update the forward‑looking statements. Factors that may cause actual results to differ materially from those in any forward‑looking statements include, among other things, any changes to the partner program and unexpected delays, difficulties and expenses in achieving market growth and/or opportunity. Further information on factors that could affect our financial and other results is included in the filings we make with the Securities and Exchange Commission from time to time.

###

Media Contact

Kate Bachman
Invicti
kate.bachman@invicti.com
