How to Assess a Salesforce Commerce Cloud Instance
Summary
The difference between a healthy SFCC instance and a troubled one comes down to cartridge discipline, integration hygiene, and drift from platform conventions. Surface-level reviews miss the real risks; you need to understand how decisions compound across cartridge layers, Business Manager configuration, and integration patterns.
SFCC Has Opinions, Your Assessment Should Too
SFCC isn't just another ecommerce platform. It has its own architecture patterns, deployment model, and extension philosophy. Assessing it requires understanding:
- The cartridge layering model and how customizations stack
- Controller and pipeline architecture (SFRA versus SiteGenesis lineage)
- Business Manager configuration sprawl
- Job framework dependencies
- Integration cartridge patterns and external service coupling
What to Look For
Cartridge Stacks Break in Predictable Ways
The cartridge model is SFCC's greatest strength and its most common failure point. Look for:
- Cartridge count: More than 15-20 custom cartridges often signals architectural drift
- Override depth: How many layers deep are template and controller overrides?
- Abandoned cartridges: Still in the path but no longer actively maintained
- Vendor cartridge versions: Are Link cartridges current, or years behind?
A healthy stack has clear separation between core, custom, and third-party cartridges. A troubled stack has overrides of overrides and no one remembers why.
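The cartridge-path checks above are easy to script against the path string shown in Business Manager. A minimal sketch, assuming a hypothetical path and vendor-prefix list (the 15-cartridge threshold mirrors the drift signal noted above):

```javascript
// Sketch: flag common cartridge-path problems from a site's cartridge
// path string (Business Manager > Administration > Sites > Manage Sites).
// The example path, prefixes, and threshold are illustrative.
function auditCartridgePath(pathString, vendorPrefixes) {
    var cartridges = pathString.split(':').map(function (c) { return c.trim(); });
    var custom = cartridges.filter(function (c) {
        return !vendorPrefixes.some(function (p) { return c.indexOf(p) === 0; });
    });
    var seen = {};
    var duplicates = cartridges.filter(function (c) {
        if (seen[c]) return true;
        seen[c] = true;
        return false;
    });
    return {
        total: cartridges.length,
        customCount: custom.length,
        duplicates: duplicates,
        tooManyCustom: custom.length > 15 // drift signal from the checklist above
    };
}

var report = auditCartridgePath(
    'app_custom_checkout:app_custom_core:int_paymentprovider:app_storefront_base',
    ['app_storefront_base', 'int_', 'bm_', 'plugin_']
);
console.log(JSON.stringify(report, null, 2));
```

Duplicates in the path are worth flagging because only the leftmost copy ever wins; the rest are dead weight that obscures the real override order.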
SiteGenesis Lineage Changes Everything
Whether the implementation runs on SFRA or still carries SiteGenesis lineage is the first question in any SFCC assessment. Implementations still running on SiteGenesis (or hybrid approaches) carry a different risk profile:
- Pipeline architecture versus controller architecture
- Template inheritance patterns
- Client-side JavaScript patterns and build tooling
- Upgrade path complexity
SFRA migrations are consistently underestimated. The current state determines what's actually possible, so establish it before scoping anything.
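A quick lineage signal is which artifacts a cartridge actually contains: XML pipeline files point to SiteGenesis, JS controllers to SG-JC or SFRA, and both together to a hybrid. A sketch of that classification, assuming a hypothetical file inventory:

```javascript
// Sketch: classify a cartridge's lineage from its file inventory.
// Pipelines are XML files under a "pipelines" folder (SiteGenesis);
// controllers are JS files under "controllers". The file lists are
// hypothetical; a real scan would walk the cartridge directory.
function classifyLineage(filePaths) {
    var hasPipelines = filePaths.some(function (p) {
        return /\/pipelines\/.+\.xml$/.test(p);
    });
    var hasControllers = filePaths.some(function (p) {
        return /\/controllers\/.+\.js$/.test(p);
    });
    if (hasPipelines && hasControllers) return 'hybrid';
    if (hasPipelines) return 'sitegenesis-pipelines';
    if (hasControllers) return 'controllers';
    return 'unknown';
}

console.log(classifyLineage([
    'app_custom/cartridge/pipelines/Cart.xml',
    'app_custom/cartridge/controllers/Checkout.js'
])); // hybrid
```

Hybrid cartridges deserve the closest look: they usually mean a migration was started and abandoned partway through.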
Business Manager Is Where Configuration Goes to Hide
Assess:
- Custom object proliferation: How many exist, and are they documented?
- Service configurations: Are credentials rotated? Are timeouts sensible?
- Job schedules: How many jobs run, and does anyone know what they all do?
- Site preference sprawl: Feature flags and settings accumulate over years
Configuration drift here causes production issues that are hard to trace.
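The job-schedule question above is scriptable once you export the job list. A sketch that flags stale, failing, or unowned jobs; the job records, field names, and 30-day staleness window are all hypothetical:

```javascript
// Sketch: flag suspect jobs from an exported schedule list.
// The record shape (id, lastRun, lastStatus, owner) and the 30-day
// window are illustrative, not an SFCC export format.
function flagSuspectJobs(jobs, now) {
    var THIRTY_DAYS = 30 * 24 * 60 * 60 * 1000;
    return jobs.filter(function (job) {
        var stale = now - job.lastRun > THIRTY_DAYS;
        var failing = job.lastStatus === 'ERROR';
        var unowned = !job.owner; // no one to ask what it does
        return stale || failing || unowned;
    }).map(function (job) { return job.id; });
}

var now = Date.parse('2024-06-01');
var suspects = flagSuspectJobs([
    { id: 'catalog-sync', lastRun: Date.parse('2024-05-30'), lastStatus: 'OK', owner: 'ops' },
    { id: 'legacy-export', lastRun: Date.parse('2023-01-10'), lastStatus: 'ERROR', owner: null }
], now);
console.log(suspects); // → ['legacy-export']
```

Every flagged job becomes an interview question for the stakeholder sessions: what does it do, who owns it, and what breaks if it stops?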
Integration Debt Surfaces Late
SFCC integrations typically fall into a few patterns:
- Service framework usage: Are integrations using the platform's service framework, or raw HTTP calls scattered through code?
- Cartridge-based integrations: Link cartridges, custom integrations, or both?
- Middleware dependencies: Is there an integration layer (MuleSoft, etc.) or direct connections?
- Credential management: Hardcoded values versus proper service configurations
Integration debt is where most SFCC replatform projects discover hidden complexity.
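The service-framework question can be answered with a static scan: count files that require `dw/svc/LocalServiceRegistry` versus files that require `dw/net/HTTPClient` directly. A sketch, assuming source contents passed in as a plain object (a real scan would read files from the cartridge directories):

```javascript
// Sketch: static scan for integration patterns in cartridge source.
// Files using dw/svc/LocalServiceRegistry go through the service
// framework (timeouts, circuit breakers, credentials in Business
// Manager); files using dw/net/HTTPClient directly bypass all of that.
// The file names and contents below are hypothetical.
function scanIntegrationPatterns(sources) {
    var result = { serviceFramework: [], rawHttp: [] };
    Object.keys(sources).forEach(function (file) {
        var code = sources[file];
        if (code.indexOf('dw/svc/LocalServiceRegistry') !== -1) {
            result.serviceFramework.push(file);
        }
        if (code.indexOf('dw/net/HTTPClient') !== -1) {
            result.rawHttp.push(file); // candidate for service framework migration
        }
    });
    return result;
}

var scan = scanIntegrationPatterns({
    'taxService.js': "var LocalServiceRegistry = require('dw/svc/LocalServiceRegistry');",
    'legacyFeed.js': "var HTTPClient = require('dw/net/HTTPClient');"
});
console.log(scan);
```

A high raw-HTTP count is a concrete, countable proxy for the integration debt described above.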
Headless Implementations Hide Business Logic
If the instance has headless components or heavy OCAPI usage:
- Which APIs are exposed, and to what clients?
- Are there custom OCAPI hooks, and what do they modify?
- What's the authentication model?
- How much business logic lives in OCAPI customizations versus the storefront?
Critical logic often ends up in unexpected places.
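Hook questions are answerable by inventorying each cartridge's hooks.json. A sketch that builds that inventory from an already-parsed file; the hook name, script path, and cartridge name are illustrative:

```javascript
// Sketch: inventory OCAPI hooks from a cartridge's parsed hooks.json.
// The hook entries below are illustrative; a real assessment would
// read and parse hooks.json from every cartridge in the path.
function inventoryHooks(hooksJson, cartridgeName) {
    return hooksJson.hooks.map(function (h) {
        return {
            cartridge: cartridgeName,
            hook: h.name,
            script: h.script,
            // Shop API hooks can mutate request and response payloads,
            // so they are prime hiding places for business logic.
            touchesShopApi: h.name.indexOf('dw.ocapi.shop.') === 0
        };
    });
}

var inventory = inventoryHooks({
    hooks: [
        { name: 'dw.ocapi.shop.basket.beforePOST', script: './hooks/basket' }
    ]
}, 'int_custom_api');
console.log(inventory);
```

The resulting list maps directly onto the questions above: which APIs are touched, by which cartridge, and through which script.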
Patterns That Indicate Risk
"We customized everything": Heavy controller and template overrides make upgrades expensive and bug fixes unpredictable.
"The original agency left": Knowledge transfer gaps in SFCC are severe because so much context lives in cartridge layering decisions.
"We're on SiteGenesis but want to upgrade": This is rarely an upgrade. It's a rebuild with data migration.
"Jobs keep failing but the site works": Background job failures often indicate integration or data sync issues that will surface at the wrong time.
"We don't touch Business Manager": Fear of Business Manager usually means no one understands the configuration state.
What a Thorough Assessment Covers
- Codebase review: Cartridge inventory, override analysis, code quality signals
- Architecture mapping: Integration dependencies, data flows, external service catalog
- Configuration audit: Business Manager settings, jobs, services, custom objects
- Stakeholder interviews: Who owns what, what's painful, what's undocumented
- Performance baseline: Current response times, job durations, known bottlenecks
- Upgrade path analysis: What's blocking platform updates, what's the SFRA gap
The output should connect findings to business impact. A long list of technical issues without prioritization doesn't help anyone make decisions.
How DigitalStack Structures SFCC Assessments
SFCC assessments generate data that typically gets lost across slides, spreadsheets, and diagrams that no one updates. DigitalStack keeps it connected:
- System inventory tracks cartridges, integrations, and dependencies as a linked model, not a static list
- Stakeholder surveys capture input from development, operations, and business teams with consistent scoring you can compare across projects
- Findings link to objectives so cartridge debt gets weighed against business priorities, not just flagged as technical risk
- Architecture documentation stays connected to discovery findings, so six months later you can trace why a recommendation was made
When your assessment data is structured, your recommendations are traceable.
Next Step
If you're preparing for an SFCC assessment or replatform discovery, request access to see how connected discovery works in DigitalStack.