Apr 24, 2026

What "Inside-Out" Actually Means in Third-Party Security

"Inside-out" has a specific technical meaning in security: a platform with authorized access inside a vendor's environment, reading live infrastructure state directly. It describes a method.

The term is now being applied to approaches that do not involve any access to vendor infrastructure at all. Cross-referencing questionnaire answers against external scan data is useful work. It is not inside-out monitoring. Calling it that borrows the credibility of a technically rigorous concept to describe something fundamentally different, and it creates real confusion for security leaders trying to understand what they are actually buying.

The distinction is not pedantic. The two approaches answer different questions and produce different types of evidence. A CISO who believes they have inside-out visibility into a vendor's cloud environment, and actually has questionnaire correlation, is carrying a risk they do not know about.

How outside-in security ratings work, and where they stop

Outside-in security ratings platforms scan publicly observable signals: exposed ports, DNS health, certificate issues, known vulnerabilities tied to IP ranges, domain reputation. The data is collected without vendor participation, which is what allows these tools to cover tens of thousands of companies at scale.

The strengths are genuine. Coverage is broad, setup is near-instant, and signals update continuously. The limitation is architectural: these tools see only a subset of the external attack surface. IAM controls, endpoint security, internal network security, software supply chain security, cloud misconfigurations (e.g. public S3 buckets or databases), insecure integrations with SaaS providers, and even Internet-exposed services living on ephemeral, unattributable IP addresses are all invisible to outside-in scans. Yet they are often the root cause of major security incidents.
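To make the method concrete: an outside-in certificate-health check reduces to reading the public certificate and evaluating its validity window. A minimal sketch using Python's standard library, written against the parsed-certificate shape that ssl.getpeercert() returns (the 30-day threshold and the sample dates are illustrative):

```python
import ssl

def cert_findings(cert: dict, now: float) -> list:
    """Flag externally observable certificate issues from a parsed
    certificate (the dict shape returned by ssl.getpeercert())."""
    findings = []
    # notAfter is a string like "Jan 15 12:00:00 2024 GMT"
    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    if not_after < now:
        findings.append("certificate expired")
    elif not_after - now < 30 * 86400:
        findings.append("certificate expires within 30 days")
    return findings

# Example: a certificate that expired in 2024, checked in 2026.
sample = {"notAfter": "Jan 15 12:00:00 2024 GMT"}
now = ssl.cert_time_to_seconds("Apr 24 00:00:00 2026 GMT")
print(cert_findings(sample, now))  # → ['certificate expired']
```

The check needs nothing from the vendor, which is exactly why it scales, and exactly why it stops at the public footprint.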

What "inside-out" questionnaire validation actually does

Several security ratings platforms now offer a layer they describe as inside-out: vendor questionnaire responses are automatically mapped against the platform's external ratings data to check whether what a vendor claims aligns with what the platform observes externally. If a vendor asserts they patch critical vulnerabilities within 30 days but their external signals show open CVEs, the discrepancy gets flagged.

This is a real improvement over spreadsheet-based questionnaire management. The automated correlation reduces manual review time and can surface obvious inconsistencies at scale.
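What this correlation amounts to can be sketched in a few lines. The claim keys and finding names below are invented for illustration; real platforms map hundreds of question-to-signal pairs, but the logic is the same:

```python
def flag_discrepancies(claims: dict, external: dict) -> list:
    """Cross-reference self-reported claims against outside-in scan data.
    Only claims with a public footprint can ever be contradicted."""
    flags = []
    if claims.get("patch_critical_within_30_days") and external.get("open_critical_cves", 0) > 0:
        flags.append("claims 30-day patching, but external scan shows open critical CVEs")
    if claims.get("all_storage_private") and external.get("public_buckets_observed", 0) > 0:
        flags.append("claims private storage, but public buckets observed externally")
    return flags

claims = {"patch_critical_within_30_days": True, "all_storage_private": True}
external = {"open_critical_cves": 3, "public_buckets_observed": 0}
print(flag_discrepancies(claims, external))
# Only the patching claim is flagged. The storage claim passes
# unchallenged: no external signal exists to contradict it.
```

Note the asymmetry: the function can catch a claim that conflicts with a public signal, but a claim with no external footprint always passes.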

The constraint is architectural. The validation layer checks questionnaire answers against outside-in signals. The source of truth is still the vendor's self-reported response. What is visible externally is still limited to what has a public footprint. A vendor could have perfectly clean external signals and still have serious internal problems with no external signature.

It is also worth stating that companies should maintain a healthy skepticism toward questionnaire answers and supporting documentation provided by third parties. Most people are honest, but the financial incentive to expedite a business transaction can lead to all sorts of issues; the recent scandal involving SOC 2 reports is an extreme example. Mature TPRM processes should be designed to catch bad actors, and should therefore prioritize high-quality, trusted data.

What actual inside-out monitoring requires technically

Real inside-out visibility means the platform has authorized, programmatic access to read configuration state directly from inside the vendor's environment. No agents and no software installation: a granted permission lets the platform query the cloud provider or SaaS application through its API.

In practice, this works through read-only credentials with minimal privilege. For AWS, for instance, that means a CloudFormation-provisioned IAM role with a restricted permission set. For Azure, a custom role scoped to metadata read access. For Google Cloud, a service account with defined scopes. The platform uses those credentials to run configuration checks against the actual live environment, not against a vendor's answer to the question "are your IAM roles configured correctly?"
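As an illustration, the cross-account trust policy for such an AWS role might look like the following. This is a generic sketch, not Zanshin's actual CloudFormation template; the account ID and external ID are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowMonitoringPlatformAssume",
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::111111111111:root"},
      "Action": "sts:AssumeRole",
      "Condition": {"StringEquals": {"sts:ExternalId": "EXAMPLE-EXTERNAL-ID"}}
    }
  ]
}
```

A restricted read-only permission set, such as AWS's managed SecurityAudit policy, which grants access to security configuration metadata but not data, would then be attached to the role. The ExternalId condition prevents the confused-deputy problem when a multi-tenant platform assumes roles across many customer accounts.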

Zanshin, Tenchi Security's solution, operates this way. It connects through authorized read-only API integrations across IaaS and PaaS providers, SaaS platforms, security tools, and identity providers, and continuously evaluates configuration against more than 1,400 security rules, with results updated daily by default.

No business data is collected. Scans read metadata and configuration signals only: whether a control exists and is correctly configured, not the content it protects. Resource identifiers in alerts are masked when shared with the first party, so the vendor's privacy is preserved even while the risk picture is visible.

The key point: a check that asks "does this AWS account enforce MFA for root?" goes directly to the AWS IAM API and reads the actual answer. A vendor cannot self-report their way around it.
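In code, such a check is a thin wrapper over the API response. The sketch below evaluates the response shape of IAM's GetAccountSummary call (in a live check, `boto3.client("iam").get_account_summary()`) against a simulated payload; it is illustrative, not Zanshin's actual rule implementation:

```python
def root_mfa_enabled(account_summary: dict) -> bool:
    """Evaluate an IAM GetAccountSummary response.
    'AccountMFAEnabled' is 1 when the root user has an MFA device."""
    return account_summary["SummaryMap"].get("AccountMFAEnabled") == 1

# Simulated API response, in the same shape the IAM API returns:
response = {"SummaryMap": {"AccountMFAEnabled": 0, "Users": 42}}
print(root_mfa_enabled(response))  # → False
```

The answer comes from the control plane itself, so there is no questionnaire field for the vendor to fill in optimistically.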


What each monitoring approach can actually see

Not all visibility is the same.

Legend: ✓ Verified (independent of vendor claims) · ⚠ Self-reported (unverifiable) · ✗ Not visible (no external footprint)

Misconfiguration type                    | Outside-in scan | Questionnaire correlation | Zanshin (API-based)
Public surface:
  Exposed ports / public-facing services | ✓ Verified      | ✓ Verified                | ✓ Verified
  Known CVEs on external assets          | ✓ Verified      | ✓ Verified                | ✓ Verified
  DNS and certificate issues             | ✓ Verified      | ✓ Verified                | ✓ Verified
Internal controls:
  Overpermissive IAM roles               | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified
  MFA disabled for admins                | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified
  Unencrypted cloud storage              | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified
  SaaS permission oversharing            | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified
  Endpoint coverage gaps                 | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified
  Identity provider misconfigurations    | ✗ Not visible   | ⚠ Self-reported           | ✓ Verified

The practical gap: What each approach can and cannot see

Consider a vendor running production workloads on AWS. They have an S3 bucket with public access enabled on a specific prefix used for a legacy integration. An outside-in scan will not detect the misconfiguration unless the bucket itself generates an external signal visible from outside the account. A questionnaire answer saying "all S3 buckets are private" passes validation against external data, because there is nothing external to contradict it.

An authorized API-based check queries the S3 bucket ACL and bucket policy directly, surfaces the misconfiguration as an alert with severity and remediation guidance, and retests it the next day to confirm whether it was fixed.
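A sketch of that check, written against the response shapes of S3's GetBucketAcl and GetBucketPolicy APIs. The bucket name and prefix are illustrative, and a production check would also consult the account's Block Public Access settings:

```python
import json

# S3 group URIs that mean "everyone" or "any AWS account"
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def bucket_is_public(acl, policy_json=None):
    """Inspect ACL grants and the bucket policy for public access.
    Live inputs: s3.get_bucket_acl(Bucket=...) and get_bucket_policy(...)."""
    for grant in acl.get("Grants", []):
        if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
            return True
    if policy_json:
        for stmt in json.loads(policy_json).get("Statement", []):
            if stmt.get("Effect") == "Allow" and stmt.get("Principal") in ("*", {"AWS": "*"}):
                return True
    return False

# Clean ACL, but a policy statement opens one prefix to everyone:
acl = {"Grants": [{"Grantee": {"Type": "CanonicalUser", "ID": "owner"},
                   "Permission": "FULL_CONTROL"}]}
policy = json.dumps({"Statement": [{"Effect": "Allow", "Principal": "*",
                                    "Action": "s3:GetObject",
                                    "Resource": "arn:aws:s3:::example-bucket/legacy/*"}]})
print(bucket_is_public(acl, policy))  # → True
```

This is the case the example describes: the ACL looks private, the questionnaire answer passes, and only a direct read of the bucket policy surfaces the exposure.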

This is not a theoretical difference. The 2025 Verizon Data Breach Investigations Report, analyzing 12,195 confirmed breaches, found that third-party involvement doubled year-over-year and now accounts for 30% of all breaches. 

Third-party involvement in confirmed data breaches, measured as the share of all breaches attributable to a third party or vendor, rose from 15% in the 2024 DBIR to 30% in the 2025 DBIR. (Source: Verizon 2025 Data Breach Investigations Report, 12,195 confirmed breaches analyzed.)

Cloud misconfiguration continues to be one of the leading causes, and those misconfigurations are largely invisible to outside-in scanning precisely because they have no external footprint. The same report found that credential abuse was the single leading initial access vector, present in 22% of breaches, with nearly one in three intrusions relying on valid, legitimate credentials rather than exploits. The controls that prevent this, such as MFA enforcement for privileged accounts and active password complexity policies, are internal configuration states with no external signature.

An outside-in scan cannot tell you whether a vendor's Okta tenant actually enforces MFA for all admin roles, or whether their Azure AD password policy is active and correctly scoped. A questionnaire can ask, but the answer comes entirely from the vendor. The same DBIR found that only 3% of compromised passwords met basic complexity requirements, which suggests a significant gap between what organizations claim to enforce and what their environments actually apply.
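The Okta case illustrates how such a check reads live policy rather than a claim. The sketch below assumes the rough field shape of Okta's sign-on policy rules (`actions.signon.requireFactor`, as documented in the Okta Policy API); the rule objects are simulated:

```python
def mfa_enforced(rules):
    """Return True only if every ALLOW rule in a sign-on policy
    requires a second factor. `rules` mimics the shape of Okta's
    sign-on policy rule objects (field names assumed from the API)."""
    for rule in rules:
        signon = rule.get("actions", {}).get("signon", {})
        if signon.get("access") == "ALLOW" and not signon.get("requireFactor", False):
            return False
    return True

rules = [
    {"actions": {"signon": {"access": "ALLOW", "requireFactor": True}}},
    {"actions": {"signon": {"access": "ALLOW", "requireFactor": False}}},  # the gap
]
print(mfa_enforced(rules))  # → False
```

A single permissive rule fails the check, which mirrors how attackers use such gaps: one allow-without-factor path is enough.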

What each approach is actually good for

Questionnaire automation and authorized infrastructure monitoring solve different problems. Framing them as competitors to the same outcome obscures what each actually does.

Questionnaire automation reduces the operational burden of sending, tracking, and reviewing vendor assessments. For organizations managing hundreds of vendors, that efficiency gain is real and valuable. The data it produces is structured and auditable, useful for compliance reporting. Its weakness is that it remains dependent on what vendors say about themselves, validated against what is externally visible.

Authorized infrastructure monitoring bypasses self-reporting entirely. It produces evidence, not answers. The data is live configuration state, retested daily, with a score that reflects actual control coverage across the vendor's environment. The required tradeoff is vendor consent and a brief setup step, which is also what makes it meaningfully different from passive scanning.

For regulated organizations, the accountability framing matters: many regulations make companies responsible for the security posture of their entire value chain. That responsibility is hard to demonstrate through questionnaire correlation alone when regulators are asking for evidence of actual control effectiveness.

Monitoring cadence: questionnaire vs. continuous

How often vendor posture is actually checked under each approach

Questionnaire cycle

Questionnaire sent to vendor
Vendor completes responses
Review and validation
Risk score updated
Next review: 6–12 months

API-based continuous monitoring

Vendor grants read-only API credentials
Day 1: 1,400+ configuration checks run
Alerts with severity and remediation surfaced
Day 2: all checks rerun automatically
↻  Repeats every 24 hours, or as often as every 6–12 hours
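The value of the daily rerun is the diff between runs: confirming that yesterday's findings were fixed and catching new drift. A minimal sketch (the finding identifiers are invented):

```python
def diff_findings(yesterday, today):
    """Compare two daily check runs: what was remediated,
    what newly appeared, and what is still open."""
    return {
        "fixed": yesterday - today,
        "new": today - yesterday,
        "persisting": yesterday & today,
    }

y = {"s3:public-bucket", "iam:root-no-mfa"}
t = {"iam:root-no-mfa", "kms:key-rotation-off"}
print(diff_findings(y, t))
```

A questionnaire cycle has no equivalent of the "fixed" set: there is no second measurement until the next review, 6 to 12 months later.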

What to ask when a vendor claims "inside-out" visibility

Three questions that surface the technical reality quickly:

What authorized access has the platform obtained from the vendor's cloud environment, and through which credential mechanism?
If the answer describes API keys, IAM roles, or service accounts, that is genuine inside-out access. If the answer describes questionnaire mapping or scan correlation, it is not.

What configuration checks run against that environment, and against which services?
A platform doing real inside-out monitoring should be able to list specific checks: MFA enforcement in Okta, public access settings in AWS S3, admin consent policies in Microsoft 365. Generic answers suggest the checks are not running against live configuration.

How often does the platform retest?
Configuration changes. A vendor can pass a check on Monday and fail it on Wednesday after a misconfiguration in a deployment pipeline. Daily retesting of all findings is the minimum for meaningful continuous monitoring.
