Data Sovereignty & Institutional Knowledge Protection

CENTAURON enables multi-institutional AI development without requiring clinical data or expert annotations to leave the contributing institution. The network is designed to protect data privacy and institutional knowledge assets by ensuring that slides, metadata and ground-truth annotations remain under the full control of their originators at all times.

To achieve this, CENTAURON employs a decentralized trust architecture combining:

  • Local data custody: Whole-slide images and annotations remain on institutional infrastructure; no central repository is required.
  • Client-side encryption: Sensitive annotation content is encrypted before leaving the browser; plaintext ground truth never becomes visible externally.
  • Permissioned blockchain audit layer: All access grants and evaluation events are immutably recorded for verifiable accountability.
  • Smart-contract-enforced access control: Computational requests only execute if cryptographically verified against agreed-upon usage terms.
  • Selective metadata sharing: Institutions can expose general slide metadata globally while keeping diagnostic details and annotations encrypted or fully private.
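The split between shareable metadata and locked-down ground truth can be sketched as follows. This is an illustrative Python sketch, not CENTAURON's actual implementation: the field names are hypothetical, and the toy one-time-pad cipher stands in for a production-grade authenticated cipher such as AES-GCM.

```python
import json
import secrets

def encrypt_otp(data: bytes, key: bytes) -> bytes:
    """Toy one-time pad: XOR with a single-use random key of equal length.
    (Illustrative only; a real deployment would use e.g. AES-GCM.)"""
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

def prepare_record(slide_record: dict, private_fields: set) -> tuple:
    """Split a slide record into globally shareable metadata and
    client-side-encrypted annotation content, so plaintext ground
    truth never leaves the institution."""
    public = {k: v for k, v in slide_record.items() if k not in private_fields}
    private = {k: v for k, v in slide_record.items() if k in private_fields}
    plaintext = json.dumps(private, sort_keys=True).encode()
    key = secrets.token_bytes(len(plaintext))  # key stays on institutional infrastructure
    return public, encrypt_otp(plaintext, key), key

# Hypothetical slide record: general metadata plus sensitive ground truth.
record = {
    "slide_id": "WSI-0042",
    "stain": "H&E",
    "magnification": "40x",
    "diagnosis": "adenocarcinoma",       # sensitive
    "annotation_mask": "region-coords",  # sensitive
}
public, ciphertext, key = prepare_record(record, {"diagnosis", "annotation_mask"})
# `public` can be exposed globally; `ciphertext` is opaque without `key`.
```

Decryption with the locally held key recovers the annotations, so the institution can still use its own ground truth while the network only ever sees ciphertext.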

In this model, data providers retain granular control over what is visible, what is encrypted, and what remains entirely local. Models are distributed to participating institutions for evaluation, and only performance metrics are returned. Sensitive data and annotation knowledge thus remain protected while still enabling collaborative validation and improvement of AI systems.
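The metrics-only return path can be sketched as a local evaluation loop: an externally supplied model runs against the institution's ground truth, and only aggregate numbers leave the node. The metric set and the trivial stand-in model below are illustrative assumptions, not CENTAURON's evaluation protocol.

```python
def evaluate_locally(model, labeled_slides):
    """Run an external model against local ground truth and return only
    aggregate performance metrics -- never slides, labels, or predictions."""
    tp = fp = fn = tn = 0
    for features, label in labeled_slides:
        pred = model(features)
        if pred and label:
            tp += 1
        elif pred and not label:
            fp += 1
        elif not pred and label:
            fn += 1
        else:
            tn += 1
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn) if (tp + fn) else None,
        "specificity": tn / (tn + fp) if (tn + fp) else None,
        "n_slides": total,
    }

# Hypothetical stand-in model: flags any slide whose tumor score exceeds 0.5.
model = lambda features: features["tumor_score"] > 0.5
local_data = [({"tumor_score": 0.9}, True), ({"tumor_score": 0.2}, False),
              ({"tumor_score": 0.7}, True), ({"tumor_score": 0.6}, False)]
metrics = evaluate_locally(model, local_data)  # only this dict leaves the node
```

Because the returned dictionary contains only aggregates, the model provider learns how well its model performs without ever seeing the underlying slides or annotations.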

Sovereign Decentralization vs. Federated Learning

| Dimension | CENTAURON Node Model | Classic Federated Learning |
|---|---|---|
| Governance | Fully decentralized; no central coordinator | Central server or coordinating authority required |
| Who controls the data | Institution (Node owner) always retains full control | Central coordinator controls aggregation and orchestration |
| Data location | Data never leaves the institution; encrypted ground truth remains local | Data remains local, but metadata/model updates always flow to a central point |
| Execution model | External AI models move to the data; executed in isolated containers | Local models trained; updates returned to central server for aggregation |
| Access transparency | All actions logged on blockchain; cryptographic enforcement | Trust placed in central coordinating entity |
| IP & annotation protection | Ground truth encrypted; never revealed to model providers | Labels/annotations often implicitly exposed through model gradients/updates |
| Membership model | Open to qualified institutions; identity-verified, peer-to-peer | Participation controlled by the central server/operator |
| Fault tolerance | No single point of failure | Central aggregator is a single point of failure |
| Regulatory alignment | Designed for medical compliance & auditability | Not inherently compliant or auditable in regulated healthcare environments |
| Value participation | Institutions keep value in data + annotations + execution | Central owner typically captures most value |

In short:

CENTAURON flips the federated learning model: instead of institutions relying on a central aggregator, AI comes to the data, trust is programmatic, and every participant remains sovereign.
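The "trust is programmatic" principle can be illustrated with a minimal off-chain analogue of the smart-contract gate described above: a computational request executes only if its signature verifies against agreed usage terms. This Python sketch uses an HMAC as a stand-in for on-chain cryptographic enforcement; the term names (`purpose`, `export`) and the shared-key handling are assumptions, not CENTAURON's actual protocol.

```python
import hashlib
import hmac
import json

# Hypothetical usage terms agreed between data provider and model provider.
AGREED_TERMS = {"purpose": "model-evaluation", "export": "metrics-only"}

def sign_request(request: dict, shared_key: bytes) -> str:
    """Sign a computational request over its canonical JSON encoding."""
    payload = json.dumps(request, sort_keys=True).encode()
    return hmac.new(shared_key, payload, hashlib.sha256).hexdigest()

def authorize(request: dict, signature: str, shared_key: bytes) -> bool:
    """Execute only if the signature verifies AND the request matches the
    agreed terms -- a stand-in for smart-contract-enforced access control."""
    expected = sign_request(request, shared_key)
    if not hmac.compare_digest(expected, signature):
        return False  # tampered or unauthenticated request
    return (request.get("purpose") == AGREED_TERMS["purpose"]
            and request.get("export") == AGREED_TERMS["export"])

shared_key = b"node-provider-shared-key"  # key establishment is out of scope here
req = {"purpose": "model-evaluation", "export": "metrics-only", "model_id": "m-17"}
sig = sign_request(req, shared_key)
# authorize(req, sig, shared_key) grants execution; changing any field
# (e.g. requesting raw-label export) invalidates the signature.
```

Any modification of the request after signing, such as swapping `"metrics-only"` for raw-label export, fails verification, so execution against the data is impossible outside the agreed terms.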