The CUDOS decentralised compute network is a multi-layered platform that offers additional computing power to blockchain and non-blockchain projects. It uses a three-layer flow in which tasks submitted by consumers at the blockchain layer-1 pass through a series of validation checks before they are processed. Layer-1 is the blockchain smart contract layer, which interfaces with blockchain networks such as Algorand and Polkadot. Blockchain dapps interact with the Cudos network via the Cudos layer-1 smart contract, which then submits the tasks to the layer-2 Cudo platform, a set of validation nodes that compete to receive the submitted tasks.
The layer-2 platform consists of blockchain nodes running the Cudos software. Each node must meet a minimum quality standard and stake about 2,000,000 CUDOS tokens to be eligible to process jobs. Once a job is received at layer-2, it is validated and submitted to the layer-3 network, a pool of computing resources and apps that can perform a range of high-powered jobs. Once the job is completed at layer-3, the result is returned through layer-2 back to layer-1.
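The three-layer flow above can be sketched in Python. Everything here is illustrative: the function names, the job format, and the trivial validation are assumptions for the sake of the sketch, not the actual CUDOS interfaces.

```python
# Illustrative sketch of the layer-1 -> layer-2 -> layer-3 job flow.
# All names and the job dict format are invented for this example.

def layer1_submit(job):
    """Layer-1 smart contract: forwards a consumer's job to layer-2."""
    return layer2_process(job)

def layer2_process(job):
    """Layer-2 validation node: validates the job, dispatches it to
    layer-3, and relays the result back toward layer-1."""
    if not job.get("payload"):
        raise ValueError("job failed validation at layer-2")
    return layer3_compute(job)

def layer3_compute(job):
    """Layer-3 compute pool: performs the actual work."""
    return {"job_id": job["id"], "output": sum(job["payload"])}

result = layer1_submit({"id": 1, "payload": [1, 2, 3]})
```

The key structural point is that layer-2 sits between submission and computation, so nothing reaches the compute pool without passing a validation step first.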
What differentiates the Cudos platform and network is the series of validation checks each job goes through. At each level, a validation module verifies that the submitted tasks meet all the required standards before they are processed.
The Cudos Validation Module
A job type’s validation module runs inside the Cudo platform when a completed workload is uploaded to it. Validation comprises the following methods:
A Consensus check
Consensus in the Cudo platform describes the process of running a job N times on different suppliers and checking that the results are equivalent, which makes it very difficult for any supplier to fake work. The cost to the consumer increases by a factor of N relative to a job that does not use consensus.
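A minimal sketch of such a consensus check, assuming suppliers can be modelled as callables and results can be compared for equality (both are simplifying assumptions; real results would be larger artefacts compared by hash or similar):

```python
# Illustrative consensus check: run the same job on n suppliers and
# accept the result only if every supplier returned the same output.
def run_with_consensus(job, suppliers, n=3):
    # The consumer effectively pays for n executions instead of one.
    results = [supplier(job) for supplier in suppliers[:n]]
    if any(r != results[0] for r in results):
        raise RuntimeError("consensus failure: supplier results differ")
    return results[0]

honest = lambda job: job * 2   # supplier that does the work
cheater = lambda job: -1       # supplier that fakes the work
```

With all honest suppliers the job succeeds; a single cheating supplier makes the results disagree and the job is rejected.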
Job type-specific check
These are defined per job type. Possible examples include:
Time-based: If the job has an estimation function and the actual time to complete is significantly lower than the estimate, the job can be considered failed. If the job was split into equally sized chunks and the job type creator specifies that each chunk is expected to be equally computationally intensive, a sub-job can be considered failed if it finishes significantly faster than the others while running on similar hardware.
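Both time-based checks can be expressed as simple predicates. The thresholds and the median-based comparison below are illustrative assumptions; the source does not specify how "significantly faster" is quantified.

```python
# Illustrative time-based validation checks.

def time_based_check(estimated_s, actual_s, min_ratio=0.5):
    """Pass only if the actual runtime is not significantly lower than
    the estimate; the 0.5 ratio threshold is an assumption."""
    return actual_s >= estimated_s * min_ratio

def fast_chunk_outliers(chunk_times, tolerance=0.5):
    """For equally sized, equally intensive chunks on similar hardware,
    return the chunks that finished significantly faster than the
    median chunk time (candidate failed sub-jobs)."""
    median = sorted(chunk_times)[len(chunk_times) // 2]
    return [t for t in chunk_times if t < median * tolerance]
```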
A workload can be designed to include a hash representing the expected output. This hash can then be compared with the returned results to ensure that the full workload has been completed. This type of validation is relevant for test workloads, for example.
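A sketch of this hash comparison, assuming SHA-256 as the hash function (the source does not name one):

```python
import hashlib

# Illustrative hash check: the workload carries a hash of its expected
# output, and the returned result is hashed and compared against it.

def embed_hash(expected_output: bytes) -> str:
    """Hash the known expected output when the workload is designed."""
    return hashlib.sha256(expected_output).hexdigest()

def hash_check(result: bytes, embedded: str) -> bool:
    """Validate a returned result against the embedded hash."""
    return hashlib.sha256(result).hexdigest() == embedded
```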
Custom validator
A custom validator can be written by the developer to validate their own workloads. These can be set to validate either all or a sample of the workload results. Such validators are normally run before the workload, as a pre-test, to ensure the validations themselves are correct.
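One way a custom validator with all-or-sample coverage might be structured; the wrapper design and names are assumptions, not the actual developer API:

```python
import random

# Illustrative custom-validator wrapper: the developer supplies a check
# function, and sample_rate controls whether all results or only a
# random fraction of them are actually checked.

def make_validator(check, sample_rate=1.0, rng=random.random):
    def validator(result):
        if sample_rate < 1.0 and rng() > sample_rate:
            return True  # result not sampled; accepted without checking
        return check(result)
    return validator

# Validate every result.
validate_all = make_validator(lambda r: r > 0)
```

Injecting `rng` keeps the sampling behaviour testable; with the default `random.random`, a `sample_rate` of 0.5 would check roughly half the results.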
Workloads that cannot tolerate incorrect data or tampering can be raised to a higher security level and prevented from running on the lowest tier of devices, which are anonymous. The primary workload can instead run on devices certified to a high security level such as ISO 27001. For further validation, if the data is non-private, consensus validation can additionally be run at low cost on the lowest tier of devices.
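This security-level gating amounts to filtering the device pool by tier. The tier names and their ordering below are invented for illustration; only "anonymous" and "ISO 27001" are mentioned in the text.

```python
# Illustrative device tiers, ordered from least to most trusted.
SECURITY_LEVELS = {"anonymous": 0, "verified": 1, "iso_27001": 2}

def eligible_devices(devices, required_level):
    """Return only the devices that meet or exceed the workload's
    required security level."""
    threshold = SECURITY_LEVELS[required_level]
    return [d for d in devices if SECURITY_LEVELS[d["level"]] >= threshold]

devices = [
    {"id": "a", "level": "anonymous"},
    {"id": "b", "level": "iso_27001"},
]
```

A high-security primary workload would be scheduled only on the filtered pool, while its consensus re-runs on non-private data could still use the full pool.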
It is important to note that jobs that fail to meet the validation standards are not processed. If a job meets the standards but the nodes or resources fail to process it, the network applies punitive measures. The validation module nonetheless plays a key role in maintaining the standards and quality of the jobs submitted.