Integration Test Policy
For: Xcavate Protocol & realXmarket App
Version: 1.0
Owner: QA & Engineering Leads
Last Updated: 16.08.2025
1. Purpose
The purpose of this Integration Test Policy is to:
Validate that individual modules and services work correctly when combined.
Ensure reliability of interactions between protocol (on-chain) and app (off-chain) components.
Detect issues that unit tests alone cannot catch, including API mismatches, data inconsistencies, and real-world blockchain behaviors.
2. Scope
This policy applies to all integration testing for:
Xcavate Protocol: Smart contracts interacting with each other, external protocols (bridges, oracles), and client SDKs.
realXmarket App: Backend services, APIs, database, mobile/web clients, blockchain node integrations, and external services (e.g., payments, wallets, KYC providers).
3. Integration Test Principles
End-to-End Validation – Cover workflows that span multiple components.
Realistic Environments – Tests should run in environments closely mirroring production (testnet, staging).
Determinism – Minimize flaky tests by controlling randomness, timing, and network dependencies.
Automation First – Integration tests should run automatically in CI/CD pipelines.
Fail Fast – Test failures should block merges until resolved.
4. Test Types
4.1 Protocol Integration Tests (Xcavate)
Smart Contract Interactions: Validate interactions across multiple contracts (e.g., governance + staking + rewards).
Cross-Chain/Oracle Feeds: Ensure correctness of data from external oracles and bridges.
Gas/Performance: Validate contracts execute within expected gas limits.
Event Emissions & Indexing: Confirm emitted events are captured by off-chain indexers.
Upgrade/Deployment Tests: Test upgrade paths, migrations, and backward compatibility.
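Multi-contract interaction and event-indexing checks like those above can be prototyped off-chain against lightweight stubs before running on a simulated chain. The sketch below is illustrative only: the `Staking`, `Rewards`, and `Indexer` classes are hypothetical stand-ins, not Xcavate contracts or tooling.

```python
# Illustrative sketch: a staking action should drive reward accrual in a
# second contract AND emit an event that the off-chain indexer captures.
# All classes here are hypothetical stubs, not real Xcavate components.

class Indexer:
    """Minimal off-chain indexer stub that records emitted events."""
    def __init__(self):
        self.events = []

    def capture(self, event):
        self.events.append(event)

class Staking:
    def __init__(self, indexer):
        self.balances = {}
        self.indexer = indexer

    def stake(self, account, amount):
        self.balances[account] = self.balances.get(account, 0) + amount
        # Contracts emit events; the indexer must observe them.
        self.indexer.capture({"name": "Staked", "account": account, "amount": amount})

class Rewards:
    REWARD_RATE = 0.05  # hypothetical 5% reward per epoch

    def __init__(self, staking):
        self.staking = staking

    def accrued(self, account):
        # Cross-contract read: reward accrual depends on staking balances.
        return self.staking.balances.get(account, 0) * self.REWARD_RATE

def test_stake_accrues_rewards_and_emits_event():
    indexer = Indexer()
    staking = Staking(indexer)
    rewards = Rewards(staking)

    staking.stake("alice", 1000)

    # Cross-contract assertion: staking balance drives reward accrual.
    assert rewards.accrued("alice") == 50.0
    # Event-indexing assertion: the emitted event reached the indexer.
    assert indexer.events == [{"name": "Staked", "account": "alice", "amount": 1000}]

test_stake_accrues_rewards_and_emits_event()
```

In a real suite, the stubs would be replaced by deployed contracts on a local simulated chain and the indexer stub by the actual indexing service, but the assertions keep the same shape.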
4.2 Application Integration Tests (realXmarket)
API ↔ Database: Ensure queries, mutations, and transactions behave correctly.
API ↔ Blockchain: Validate that on-chain transactions flow correctly through the app’s backend and UI.
User Journeys (E2E): Cover flows such as registration → KYC → asset purchase → portfolio view.
Third-Party Integrations: Payments, wallets, notifications, and KYC services.
Concurrency & Scaling: Validate behavior under load, race conditions, and network latencies.
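For the API ↔ Database case, integration tests should exercise a real (if disposable) database rather than mocks, so that transaction semantics are actually verified. The sketch below is a generic illustration using Python's stdlib `sqlite3` with an in-memory database; the `create_listing`/`get_listing` helpers and the schema are hypothetical, not realXmarket APIs.

```python
import sqlite3

# Hypothetical repository layer: in a real test this would be the app's
# actual data-access code, pointed at a disposable database.
def create_listing(conn, title, price):
    with conn:  # transaction: commits on success, rolls back on error
        cur = conn.execute(
            "INSERT INTO listings (title, price) VALUES (?, ?)", (title, price)
        )
        return cur.lastrowid

def get_listing(conn, listing_id):
    row = conn.execute(
        "SELECT title, price FROM listings WHERE id = ?", (listing_id,)
    ).fetchone()
    return {"title": row[0], "price": row[1]} if row else None

def test_create_and_fetch_listing():
    conn = sqlite3.connect(":memory:")
    conn.execute(
        "CREATE TABLE listings (id INTEGER PRIMARY KEY, "
        "title TEXT NOT NULL, price REAL NOT NULL)"
    )

    listing_id = create_listing(conn, "2-bed flat", 250000.0)
    assert get_listing(conn, listing_id) == {"title": "2-bed flat", "price": 250000.0}

    # Transactions must roll back cleanly: a failed insert leaves no row behind.
    try:
        create_listing(conn, None, 1.0)  # violates NOT NULL
    except sqlite3.IntegrityError:
        pass
    assert conn.execute("SELECT COUNT(*) FROM listings").fetchone()[0] == 1

test_create_and_fetch_listing()
```

Using an in-memory database keeps CI runs fast while still catching the query, mutation, and rollback bugs that mocked data layers hide.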
5. Test Environments
Local: Developer machines with mocks/stubs for fast iteration.
CI/CD: Automated integration tests with in-memory DBs, simulated blockchain (Ganache, Hardhat, Foundry).
Staging: Full system integration with testnets, staging DB, real external services.
Pre-Production (optional): Mirror of production infra for final validation before deployment.
6. Test Data Management
Seed Data: Use controlled, reproducible seed data for consistency.
Anonymized Real Data (Staging): When needed, anonymize production snapshots for staging tests.
Data Reset: Automated cleanup/reset between test runs to avoid state bleed.
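The seed-and-reset discipline above can be sketched as a pair of fixtures wrapped around each test run. The names, schema, and seed rows below are hypothetical examples, not realXmarket internals.

```python
import sqlite3

SEED_USERS = [("alice", "verified"), ("bob", "pending")]  # controlled seed data

def seed_db(conn):
    """Load reproducible seed data so every run starts from the same state."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT PRIMARY KEY, kyc_status TEXT)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", SEED_USERS)
    conn.commit()

def reset_db(conn):
    """Drop test tables after each run to avoid state bleed between tests."""
    conn.execute("DROP TABLE IF EXISTS users")
    conn.commit()

def run_isolated_test(conn, test_fn):
    """Wrap a test so each execution sees a fresh, seeded database."""
    seed_db(conn)
    try:
        test_fn(conn)
    finally:
        reset_db(conn)  # cleanup happens even when the test fails

def check_seed(c):
    count = c.execute("SELECT COUNT(*) FROM users").fetchone()[0]
    assert count == len(SEED_USERS)

conn = sqlite3.connect(":memory:")
run_isolated_test(conn, check_seed)
run_isolated_test(conn, check_seed)  # second run is unaffected by the first
```

Test frameworks provide this wrapping natively (e.g. fixtures with teardown); the point is that cleanup runs unconditionally, so a failing test cannot poison the next one.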
7. Ownership & Responsibilities
Developers: Write integration tests alongside feature development.
QA Engineers: Design cross-service/system-level scenarios, validate coverage.
DevOps: Maintain test environments, ensure reliability of pipelines.
Team Leads: Approve merging only after integration tests pass.
8. Coverage & Quality Metrics
Critical Workflows: 100% integration test coverage.
High-Risk Modules (protocol & payments): Must include integration tests before release.
Regression Suite: Integration tests for all high-severity bugs to prevent recurrence.
Flaky Test Handling: Flaky tests must be resolved within 1 sprint (not ignored).
9. Execution & Reporting
Frequency: Integration tests run on every pull request and in nightly builds.
Blocking: Any failure in CI integration tests blocks merge to main/release.
Reports: Test dashboards must track pass/fail rates, coverage, and execution time.
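The dashboard metrics can be aggregated from raw CI results. A minimal sketch follows; the result-record fields are hypothetical, and a real pipeline would parse JUnit XML or the CI provider's API instead of hand-built dicts.

```python
# Minimal sketch of aggregating CI results into dashboard metrics.
def summarize(results):
    """results: list of dicts with 'passed' (bool) and 'duration_s' (float)."""
    total = len(results)
    passed = sum(1 for r in results if r["passed"])
    return {
        "pass_rate": passed / total if total else 0.0,
        "total_runtime_s": sum(r["duration_s"] for r in results),
        "failures": total - passed,
    }

summary = summarize([
    {"passed": True, "duration_s": 1.2},
    {"passed": False, "duration_s": 0.4},
    {"passed": True, "duration_s": 2.0},
])
assert summary["failures"] == 1
```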
10. Exceptions
Emergency hotfixes may bypass full integration testing but require retroactive testing within 24 hours.
Experimental features may use mocked integrations until stable.
11. Continuous Improvement
Policy reviewed quarterly by QA + Engineering leads.
Metrics like test coverage, flakiness rate, and time-to-detect issues will guide improvements.