It's been a year since PCI DSS requirements 6.4.3 and 11.6.1 became mandatory. By now, most QSAs are well acquainted with what these requirements entail. The challenge isn't understanding the requirements themselves; it's assessing whether what a merchant has implemented actually works.
Over the past year, we've seen a wide range of approaches in the wild. Some are strong, layered implementations that genuinely improve security. Others are, frankly, checkbox exercises that might satisfy a cursory review but wouldn't withstand a targeted attack for more than a few seconds. And as we've seen from our own research into campaigns like the Blobs to Blockchain attack, the attackers targeting payment pages are sophisticated, well-resourced, and constantly evolving.
In this blog post, we'll walk through the most common approaches QSAs encounter, examine their strengths and weaknesses, and provide practical guidance on what to look for when assessing whether a merchant's implementation is effective.
The Spectrum of Approaches
The approaches to meeting 6.4.3 and 11.6.1 span a wide range, from free browser-native features to dedicated client-side security platforms. Each has trade-offs, and understanding those trade-offs is key to assessing whether a merchant's choice is appropriate for their risk profile.
Content Security Policy (CSP)
Content Security Policy is a browser-native mechanism that allows website operators to define an allowlist of permitted script sources. When a script attempts to load from a source not on that allowlist, the browser blocks it. It's free, it's widely supported, and it's a good foundational control.
The problem is that CSP is a prevention mechanism, not a detection mechanism. Unless you've built out a reporting infrastructure using report-uri or report-to directives, CSP won't alert you when something is blocked; it just silently prevents it. And for 11.6.1's requirement to detect and alert on unauthorized modifications, prevention alone isn't enough.
In practice, CSP configurations on payment pages are often far too permissive. We regularly see unsafe-inline, unsafe-eval, and overly broad wildcards that undermine the protection CSP is supposed to provide. Maintaining a tight CSP is genuinely difficult; it risks breaking functionality, requires constant tuning as third-party scripts change, and doesn't protect against compromised allowed sources. If a CDN or third-party provider on your allowlist is compromised, CSP will happily allow the now-malicious script to execute, because it still comes from an "allowed" source.
We also demonstrated in our Blobs to Blockchain research that attackers are using blob: and data: URIs to bypass CSP configurations that don't explicitly block these sources, and most don't.
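As a point of reference when reviewing policies, a comparatively strict payment-page policy might look like the following sketch (the domains and report endpoint are illustrative, and the header is wrapped across lines for readability):

```http
Content-Security-Policy:
    script-src 'self' https://js.psp-example.com;
    object-src 'none';
    base-uri 'none';
    frame-ancestors 'none';
    report-to csp-endpoint

Reporting-Endpoints: csp-endpoint="https://csp-reports.example.com/ingest"
```

Because blob: and data: are not listed in script-src, scripts from those schemes are blocked by default. And the report-to directive only delivers value for 11.6.1 if someone is actually ingesting and triaging the violation reports it generates.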
For QSAs, the key questions when assessing a CSP-based approach are: Is there a report-uri or report-to directive configured, and is someone actually monitoring violations? How often is the CSP reviewed and updated? And when was the last time that update broke something?
But what about CSP Level 3?
You might hear that newer CSP directives solve some of these problems. Here's the reality.
The strict-dynamic keyword, now supported in all modern browsers (including Safari 15.4+), allows trust to propagate from a nonce-validated script to the scripts it dynamically loads. In theory, this solves the tag manager problem: nonce your GTM script, and everything it loads is automatically trusted.
The catch is that this is a double-edged sword. You're effectively delegating your security decisions to whoever controls the tag manager. If the GTM container is compromised or misconfigured to load a malicious script, CSP will happily allow it. You've traded one problem, CSP breaking tag managers, for another: CSP trusting whatever tag managers decide to load.
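To make the trade-off concrete, a nonce-plus-strict-dynamic policy might look like this sketch (nonce value and GTM usage are illustrative):

```http
Content-Security-Policy: script-src 'nonce-d29mZsK3' 'strict-dynamic'; object-src 'none'; base-uri 'none'
```

The page's GTM loader carries the matching nonce (`<script nonce="d29mZsK3" src="https://www.googletagmanager.com/gtm.js">`), and every script GTM subsequently injects executes without needing its own allowlist entry. Note that in browsers that support it, 'strict-dynamic' causes host-based allowlist entries in script-src to be ignored entirely, so the nonce-holding script becomes the single point of trust.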
Trusted Types is a more promising development for preventing DOM XSS, as it requires type-safe handling of dangerous sinks like innerHTML. However, browser support remains uneven: it originated in Chromium-based browsers (Chrome and Edge), and coverage in Firefox and Safari has lagged well behind. It's not a control you can rely on uniformly across browsers today.
What it boils down to is that CSP Level 3 makes CSP easier to deploy, but doesn't address the fundamental limitation: CSP validates identity and source, not behavior. A trusted script that turns malicious will still execute.
Subresource Integrity (SRI)
SRI provides a cryptographic guarantee that an external script hasn't been tampered with. You include a hash of the expected script content in the <script> tag, and the browser refuses to execute the script if its content doesn't match. On the face of it, this sounds like a strong control for 6.4.3's integrity requirement.
In practice, SRI has significant limitations. It only works for externally hosted scripts that are served with the correct CORS headers. It breaks whenever a script is legitimately updated, because the hash no longer matches, which means there needs to be a process to authorize the script and update the hash every time a vendor pushes a change. For tag managers, analytics tools, A/B testing scripts, and anything else that changes frequently, SRI is simply impractical.
SRI also provides no alerting mechanism. When a hash doesn't match, the script fails to load, sometimes silently, sometimes breaking visible functionality. But there's no alert, no notification, and no audit trail. And it does nothing whatsoever for first-party scripts that are compromised on the server itself.
Most merchants we encounter have given up on SRI for anything beyond a small number of static, rarely-changing scripts. QSAs should check actual SRI coverage and ask about the hash update process. If the answer involves manual updates, ask how frequently those updates actually happen and whether there's evidence to support that.
Manual Script Inventory and Reviews
Some merchants maintain a spreadsheet or document listing the scripts on their payment pages, with periodic manual reviews to check for changes. This approach meets the letter of 6.4.3's inventory and authorization requirements, and it forces someone to actually think about what's running on the payment page, which has some value in itself.
However, a manual inventory is a point-in-time snapshot. It becomes stale the moment it's completed. It provides no detection capability whatsoever for 11.6.1, and it doesn't scale. For merchants with multiple payment flows, different technology stacks across regions, or frequent deployments, a manual inventory quickly becomes a maintenance burden that falls behind.
QSAs should ask when the inventory was last updated, and be skeptical of the answer. Ask how changes are detected between reviews. Is there evidence of actual review and investigation, or is it a dusty spreadsheet that gets updated the week before the assessment?
CDN-Based Solutions
Several major CDN and WAF providers now offer client-side security features. For merchants already using one of these CDNs, these solutions appear convenient, with no additional vendor relationship or separate deployment.
The trade-off is that these solutions operate at the CDN or edge layer, which limits their visibility. They can track scripts served through the CDN and, in some cases, offer CSP management and script blocking at the edge. But they have limited visibility into what actually happens inside the browser. Inline scripts, scripts injected into the DOM dynamically, and first-party scripts may not be visible. And data exfiltration that happens purely client-side, via cookies, form manipulation, or localStorage, occurs in the browser, beyond the edge's line of sight.
There's also a vendor lock-in consideration. Your client-side security becomes tied to your CDN choice. If you switch CDN providers, you lose your client-side protection and need to start again with a new solution. For global merchants, there's an additional risk: Akamai, for instance, has been pulling out of certain regions, which means that merchants with a global presence risk finding gaps in coverage or performance.
More broadly, it's worth considering whether client-side security is a core competency for these vendors or a side feature. CDN/WAF providers have to split their roadmap priority across DDoS protection, bot management, WAF rules, content delivery, and many other features. Client-side security is one small part of their offering, and feature depth and innovation often lag behind pure-play vendors for whom this is the entire focus of the business.
QSAs assessing a CDN-based solution should ask: Does this cover scripts not served through the CDN? Some scripts fail to work when proxied by the CDN, so how are these evaluated? What happens to your client-side security if you change CDN providers? And for global merchants: Are there any regional availability concerns?
Agentless External Scanners
External scanning services periodically visit a merchant's payment pages from outside infrastructure, analyze the scripts present, and report on any anomalies. They're easy to deploy, require no code changes, and provide automated monitoring that goes beyond manual reviews.
The weakness of external scanners is their susceptibility to evasion techniques that attackers already use. Attackers routinely employ techniques to avoid detection: geo-targeting skimmers to only activate for users in specific countries or regions; filtering on user-agent strings to detect headless browsers, automated tools, and other non-browser clients; activating malicious code only during specific time windows; targeting only logged-in users; and requiring specific user interactions before the malicious code activates. These techniques aren't necessarily designed with scanner evasion in mind, but external scanners are vulnerable to them because they see a synthetic view of the payment page, not what real users see.
If an attacker's evasion techniques prevent the scanner from detecting the malicious code, the scanner returns a clean result, while real users continue to have their card data skimmed.
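To illustrate why synthetic scanning is so easy to sidestep, here is a deliberately simplified sketch of the kind of gating logic skimmers use before activating. All names, locales, and thresholds are hypothetical; real skimmers read these signals from the live browser environment:

```javascript
// Illustrative gating checks a skimmer might run before activating.
function looksLikeRealShopper(env) {
  // Automation fingerprint: real user browsers don't set navigator.webdriver.
  if (env.webdriver) return false;
  // User-agent filtering: headless and tooling UAs stay dormant.
  if (/HeadlessChrome|PhantomJS/i.test(env.userAgent)) return false;
  // Geo/locale targeting: only activate for shoppers in targeted regions.
  if (!['de-DE', 'de-AT'].includes(env.language)) return false;
  // Interaction gating: require real mouse movement before activating.
  if (env.mouseMoves < 3) return false;
  return true;
}

// An external scanner typically presents an environment like the first one:
const scanner = { webdriver: true, userAgent: 'HeadlessChrome/120', language: 'en-US', mouseMoves: 0 };
const shopper = { webdriver: false, userAgent: 'Mozilla/5.0 (Windows NT 10.0)', language: 'de-DE', mouseMoves: 12 };

console.log(looksLikeRealShopper(scanner)); // false: skimmer stays dormant, scan comes back clean
console.log(looksLikeRealShopper(shopper)); // true: skimmer activates for the real user
```

Every one of these checks fails against a synthetic visit and passes for the targeted real user, which is exactly the asymmetry that produces clean scan results on compromised pages.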
Detection latency is another consideration. Scanning frequency determines how quickly an attack is detected. Even hourly scans leave a window during which an attack could be active and undetected.
It's worth noting that not all agentless solutions are created equal. Some vendors, including Jscrambler, offer agentless scanning that goes beyond simple external page visits. Jscrambler's agentless approach, for example, runs the full Webpage Integrity detection engine from our servers, which means it retains the same behavioral detection capabilities as the agent-based deployment described below. This provides a meaningful step up from basic external scanning and meets the requirements of both 6.4.3 and 11.6.1. That said, agentless scanning, even with a more sophisticated engine behind it, still faces the inherent limitation of not running inside real user sessions. For merchants looking for the broadest possible detection coverage, the agent-based deployment provides visibility into what's actually happening in customers' browsers, including the geo-targeted, authentication-gated, and interaction-dependent attacks described above.
For QSAs, the critical question is: How does the implemented solution handle selective evasion? If the vendor can't articulate how they (or their tool) deal with geo-targeting, user-agent filtering, and authentication-gated attacks, a clean scan result doesn't necessarily prove the payment page is secure.
MITM/Proxy-Based Inspection
A newer approach used by some vendors involves sitting between the browser and script sources, intercepting and inspecting JavaScript as it's requested. The idea is to analyze script content in transit before it reaches the browser, without requiring a full JavaScript agent on the page.
This approach can inspect external scripts before delivery, which has some value. However, it raises several questions about detection gaps.
First, what about first-party and inline scripts? If a script is already embedded in the HTML or served from the same origin, it's not being fetched through the proxy, so does the proxy even see it? For many Magecart attacks, the initial loader is injected directly into the page's HTML, which would likely bypass this type of inspection entirely.
Second, scripts added dynamically via the DOM, for example, using document.createElement('script'), may not pass through the proxy infrastructure if they're generated and executed client-side.
Third, and perhaps most critically, a significant portion of modern skimming attack behavior happens entirely within the browser. Data exfiltration via cookies, localStorage manipulation, form field hijacking, and WebSocket-based command-and-control communication all occur on the client side, where a proxy sitting between the server and the browser has no visibility. In our Blobs to Blockchain research, the attackers used WebSockets for C2 communication and blobs for in-memory execution; a proxy-based approach would struggle to detect either.
This approach has significant blind spots for attacks that don't involve fetching an external script, which describes a growing proportion of modern Magecart techniques. And proxy infrastructure introduces its own architectural complexity, latency, and potential points of failure.
QSAs assessing a proxy-based solution should ask: How do you detect modifications to inline scripts? How do you detect data exfiltration via cookies or localStorage? What about scripts loaded dynamically by tag managers after the page has loaded? If there aren't clear answers to these questions, there are significant detection gaps.
Agent-Based Real User Monitoring
Agent-based solutions place a JavaScript agent directly on the payment page, monitoring real user sessions in real-time. Because the agent runs in the same context as the user's browser, it sees exactly what the user sees, including inline scripts, dynamically injected scripts, DOM modifications, cookie access, and network requests.
This approach eliminates many of the evasion techniques that plague external scanners. There are no scanner IP addresses to detect, no synthetic user-agents to filter, and no way to geo-fence the monitoring, because the monitoring happens inside real user sessions.
The criticism of agent-based approaches is that, if the agent is implemented naively, attackers could potentially detect, remove, or tamper with it. This is a legitimate concern, but it applies specifically to agents that lack self-protection mechanisms, not to the approach as a whole.
QSAs should ask agent-based vendors a direct question: How is your agent protected from tampering? If there's no good answer, that's a genuine gap.
Jscrambler's Hybrid Approach: Webpage Integrity
Jscrambler's Webpage Integrity (WPI) takes an agent-based approach but addresses the tampering concern head-on, using the same technology that Jscrambler has built its reputation on: code protection.
The WPI agent is hardened using Jscrambler's Code Integrity technology, the same obfuscation, anti-tampering, and anti-debugging protections that Jscrambler provides to customers protecting their own JavaScript applications. The agent is delivered polymorphically, meaning its code changes on each deployment, so there's no static signature for an attacker to identify or target. And RASP (Runtime Application Self-Protection) provides active anti-tampering measures that detect and respond to any attempt to interfere with the agent at runtime.
Beyond self-protection, WPI's core strength is behavioral monitoring. Rather than simply checking whether a script is "authorized" or matches a known hash, WPI monitors what scripts actually do at runtime. We'll discuss why this matters in the next section.
Importantly, WPI is designed to complement existing controls, not replace them. For merchants who already have CSP configured, WPI adds a detection layer. For merchants struggling with SRI on dynamic scripts, WPI's behavioral monitoring fills the gap. For those concerned about scanner evasion, real user monitoring eliminates the issue entirely. And for QSAs needing evidence of continuous monitoring for 11.6.1, WPI provides built-in alerting, dashboards, and an audit trail.
As a pure-play client-side security vendor, this is Jscrambler's core business, not a side feature bolted onto a CDN or WAF product. That means dedicated R&D, a specialized research team, and a roadmap driven entirely by client-side security needs.
What Does This Script Actually Do?
There's a deeper question that runs through all of the approaches above, and it's one that QSAs should keep front of mind: 6.4.3 requires merchants to authorize scripts, but authorization based on what?
Most approaches verify script identity: where it comes from, whether the hash matches, and whether it's on an approved list. Very few verify script behavior: what the script actually does once it's running in the browser.
Consider a script from a trusted analytics vendor. It's on the approved list. It's been there for months. CSP allows it. SRI, if configured, confirms its hash. The manual inventory documents it as "analytics tracking." Everything checks out.
But what does it actually do? Does it only send analytics data to the vendor's servers, or does it also read payment form fields? Does it access cookies? Does it make network requests to domains other than the vendor's own infrastructure?
Now consider what happens when that vendor gets compromised: a supply chain attack. The script still comes from the same approved source. CSP still allows it. The SRI hash will change, but if SRI isn't configured for that script (and for dynamic scripts, it usually isn't), nothing flags the change. The manual inventory still lists it as "analytics tracking." Every identity-based check passes. But the script's behavior has fundamentally changed.
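The difference between the two questions can be reduced to a toy sketch. Instead of asking "is this script on the allowlist?", a behavioral check asks "is this script talking only to the destinations it is expected to talk to?". All script names and domains below are hypothetical:

```javascript
// Expected network destinations observed for each authorized script.
const expectedDestinations = {
  'analytics.js': ['analytics-vendor.example'],
};

// Flag any request from a script to a host outside its expected set.
function checkRequest(scriptName, requestUrl) {
  const host = new URL(requestUrl).hostname;
  const allowed = expectedDestinations[scriptName] || [];
  return allowed.includes(host) ? 'ok' : 'anomaly';
}

// The same "authorized" script, before and after a supply-chain compromise:
console.log(checkRequest('analytics.js', 'https://analytics-vendor.example/v1/track')); // 'ok'
console.log(checkRequest('analytics.js', 'https://exfil.example/collect'));             // 'anomaly'
```

Every identity-based control passes in both cases; only the behavioral check distinguishes them, because the script's source, hash registration, and inventory entry are all unchanged while its behavior is not.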
This isn't a theoretical concern. The Polyfill.io incident demonstrated exactly this scenario: a legitimate, widely-used CDN was compromised, and scripts from an "authorized" source became malicious. Magecart attackers frequently compromise legitimate third-party scripts rather than injecting entirely new ones. And tag manager abuse, where attackers inject malicious tags through compromised GTM or Adobe Launch accounts, means that the tag manager itself is authorized, but the scripts it loads certainly aren't.
This is where Jscrambler's behavioral approach provides a fundamentally different capability. Because WPI monitors what scripts actually do at runtime, it can detect when an "analytics" script starts accessing payment form fields, when a script's behavior changes from what was observed previously, or when a script begins making unexpected network requests. This closes the "authorized but compromised" gap that identity-based approaches simply can't address.
For QSAs, this should prompt three questions for any vendor or approach: How do you verify that authorized scripts only do what they claim? If an authorized script gets compromised, how would you detect it? And how do you handle scripts loaded dynamically by tag managers, where the tag manager is authorized but the individual scripts it loads aren't?
Putting It All Together
To summarize, here's how the approaches compare across the areas that matter most for effective 6.4.3 and 11.6.1 compliance:
| Approach | Sees inline JS | Sees DOM-injected | Detects cookie exfil | Detects form hijacking | Behavioral analysis | Evasion-resistant | Vendor focus | CDN-independent |
|---|---|---|---|---|---|---|---|---|
| CSP/SRI | N/A (prevention) | Partial | No | No | No | Partial | DIY | Yes |
| Manual inventory | Point-in-time | No | No | No | No | N/A | DIY | Yes |
| CDN-based | Limited | No | No | No | Limited | Partial | Bolt-on | No |
| External scanner | At scan time | At scan time | No | At scan time | No | No | Varies | Yes |
| MITM/Proxy | No | No | No | No | No | Partial | Pure-play | Yes |
| Agent (naive) | Yes | Yes | Yes | Yes | Varies | No | Varies | Yes |
| Jscrambler WPI | Yes | Yes | Yes | Yes | Yes | Yes | Pure-play | Yes |
What QSAs Should Be Looking For
Based on the above, here are some practical considerations for QSAs assessing 6.4.3 and 11.6.1 implementations:
Ask questions that test real-world effectiveness, not just compliance posture: "How would you detect an attack that only targets users in Germany?" will tell you far more than "Do you have a script inventory?" Similarly, "What happens if a script on your CDN is compromised?" tests whether the merchant has actually thought through supply chain risk, and "Show me your last three alerts, what triggered them, who responded, and what was the outcome?" reveals whether the monitoring is actually operational or just deployed and forgotten.
Watch for red flags: A CSP-only approach with no violation monitoring. A manual inventory with no change detection mechanism. A scanner-based solution where the vendor can't explain how they handle evasion techniques. An agent-based solution with no self-protection measures. A CDN-dependent solution for a merchant with global operations. Any of these should prompt further investigation.
Consider whether the merchant's approach matches their risk profile: A small merchant with a simple, static payment page and a single PSP integration has a different risk profile from a large retailer with multiple payment flows, dozens of third-party scripts, tag managers, A/B testing, and operations across multiple regions. The former might get by with a well-configured CSP and regular manual reviews. The latter almost certainly needs dedicated, real-time behavioral monitoring.
Look for defense in depth: The strongest implementations we see combine multiple approaches: CSP as a preventive baseline, with real-time behavioral monitoring layered on top for detection and alerting. The right combination of controls provides both prevention and detection, which is ultimately what 6.4.3 and 11.6.1, taken together, are driving at.
Conclusion
A year into 6.4.3 and 11.6.1 being mandatory requirements, the market has matured, but confusion persists. Merchants have more options than ever, but not all options are equal, and the gap between "compliant" and "secure" remains significant.
Attackers continue to evolve. They exploit legitimate browser features. They evade scanners, bypass CSP, and compromise trusted third-party scripts. They even use technologies such as blockchain smart contracts for persistence. The approaches that merchants and their QSAs choose need to match this reality.
What it boils down to is this: Does this approach provide broad enough detection to catch the wide variety of techniques that attackers actually use? Attacks can take many forms, from compromised third-party scripts to inline injection, from cookie exfiltration to WebSocket-based C2 communication. An approach that only covers some of these (or worse, can only cover some of these due to technical limitations) leaves gaps that attackers will walk through, not because they're deliberately evading the solution, but simply because their chosen technique happens to fall outside its line of sight.
Jscrambler's Webpage Integrity was built to answer that question with a confident yes by monitoring real user sessions, analyzing script behavior rather than just identity, and protecting the monitoring agent itself with the same code-protection technology we've been refining for over a decade. For QSAs guiding merchants toward effective compliance, that's the standard to measure against.