DCP draws on compute resources from arbitrary devices, regardless of OS or hardware specs. These devices form both public and private networks, which dynamically allocate their processing power to applications.
Job deployment without provisioning or orchestration
A single runtime, without the performance loss of VMs or containers
Compute work scheduled for optimal performance & cost
Launching compute jobs through DCP’s API shields users from the complexity of managing distributed environments and container deployments. The API is generalized to work with any combination of CPUs and GPUs, regardless of the underlying hardware platform. With DCP, developers can rapidly prototype new code and stay productive.
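To illustrate the programming model described above, here is a minimal sketch of a "map a function over input slices" job interface. The names `submit_job` and `work_function` are hypothetical stand-ins, not DCP's actual client API: a developer supplies only the inputs and the work function, and the platform handles distribution.

```python
# Hypothetical sketch of a slice-based job API; submit_job is an
# illustrative stand-in, not DCP's real client library.

def work_function(x):
    # The developer writes only this; it can run on any node,
    # CPU or GPU alike, without node-specific code.
    return x * x

def submit_job(inputs, fn):
    # A real scheduler would dispatch each input slice to a remote
    # node. Evaluating locally here shows the contract: the same
    # inputs go in, and one result per slice comes back, in order.
    return [fn(x) for x in inputs]

results = submit_job([1, 2, 3, 4], work_function)
print(results)  # [1, 4, 9, 16]
```

The key property is that the work function is independent of where it runs; the scheduler, not the developer, decides placement.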
DCP’s unique scheduling process allocates tasks to heterogeneous compute nodes, whether they are on-premises or in the cloud. It automates complex load balancing and orchestration work to ensure the most efficient computation possible.
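The load-balancing idea above can be sketched with a standard greedy heuristic: assign each task to whichever node would finish it soonest, given that nodes run at different speeds. This is a conceptual illustration of scheduling across heterogeneous nodes, not DCP's actual algorithm; the function names and the cost/speed model are assumptions made for the example.

```python
import heapq

def schedule(task_costs, node_speeds):
    """Greedy earliest-finish-time assignment over heterogeneous nodes.

    task_costs: units of work per task; node_speeds: units/sec per node.
    Returns (assignment, makespan), where assignment[i] is the node
    index chosen for task i. A sketch, not DCP's scheduler.
    """
    # Min-heap of (projected finish time, node index), all nodes idle.
    heap = [(0.0, i) for i in range(len(node_speeds))]
    heapq.heapify(heap)
    assignment = []
    for cost in task_costs:
        finish, i = heapq.heappop(heap)     # node that frees up first
        finish += cost / node_speeds[i]     # slower nodes take longer
        assignment.append(i)
        heapq.heappush(heap, (finish, i))
    makespan = max(t for t, _ in heap)      # time until all nodes drain
    return assignment, makespan

# A node twice as fast absorbs the heavier share of the work.
assignment, makespan = schedule([4, 4, 2, 2], [2, 1])
print(assignment, makespan)  # [0, 1, 0, 0] 4.0
```

The design point is that placement decisions account for node capability, so a mix of fast and slow machines still drains the queue with minimal idle time.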
These scheduling algorithms also allow underutilized compute and memory resources to be shared. This enables virtual GPUs and CPUs to be reallocated in real time within private enterprise networks.
DCP supports applications built for any type of user, whether embedded within a web application or run through a Jupyter notebook. These applications can use any type of compute backend, from private on-prem networks to data centers.
Developers also do not have to manage dependencies across compute nodes.
Accelerates toolkits like NumPy, SciPy, and Autograd
Private DCP networks expand an organization’s on-prem compute capacity, ensuring that sensitive data and algorithms never need to be transferred to the cloud.
Please request the DCP whitepaper to learn more about security and compliance with modern standards.
Learn how DCP is used in a variety of deployments including public, private, and hybrid networks.
See what tasks DCP is well-suited for in industry, from smart manufacturing to healthcare.