Juice: Giving Network Service Providers the Edge

Key points:

  • NSPs can turn existing POP real estate into a flexible, scalable GPU edge that extends well beyond today's footprint
  • GPU compute flows over bandwidth NSPs already own, letting them dramatically undercut hyperscaler egress pricing
  • Compute moves to where customers want their sensitive data to stay: local
  • Dynamic GPU resourcing makes it easy to demonstrate ROI at every stage of NSP ramp-up, from POC to production

The AI revolution is rapidly transforming industries, but AI infrastructure remains a bottleneck due to high GPU costs, inefficient utilization, and centralized cloud dependencies. Traditionally, AWS, Azure, and GCP have dominated AI compute through their hyperscale cloud platforms, offering GPU-as-a-Service (GPUaaS) to enterprises and AI developers. However, Network Service Providers (NSPs) are uniquely positioned to disrupt this market by leveraging their vast real estate, power, and edge networks to deliver cost-effective, low-latency AI compute.

Juice Technologies is the key enabler in this transformation. By utilizing GPU-over-IP technology, Juice allows NSPs to offer a scalable, flexible, and efficient GPUaaS model, competing directly against hyperscalers in AI and edge inference workloads.

The challenges of competing with hyperscalers in AI compute

While AWS, Azure, and GCP currently dominate AI compute, their models are not optimized for all AI workloads, especially at the edge. NSPs face key challenges when entering the GPUaaS space:

  1. Underutilized Infrastructure – Telecom operators have massive real estate, power, and cooling capacity, but these assets remain underutilized due to rigid network architectures.
  2. High GPU Costs & Inefficiencies – Traditional on-prem GPU setups suffer from low utilization rates, often below 50%.
  3. Latency Issues in AI Inference – Cloud-based GPU solutions introduce latency bottlenecks, making them unsuitable for real-time AI applications such as autonomous vehicles, industrial automation, and IoT.
  4. Scalability Constraints – Hyperscalers offer flexible AI infrastructure, but NSPs need a way to rapidly scale AI compute without heavy CapEx investments.

Juice’s GPU-over-IP technology directly addresses these challenges, allowing NSPs to offer a cost-effective, scalable, and high-performance GPUaaS solution.

Why Juice’s unique flavor of GPUaaS is a game-changer for NSPs

1. Unleashing underutilized infrastructure

  • NSPs have thousands of underutilized central offices and data center locations that can be converted into AI compute hubs.
  • Juice virtualizes GPU resources, allowing NSPs to monetize idle capacity by offering it to AI developers, enterprises, and startups on demand.
  • Unlike hyperscalers, NSPs do not need to build new infrastructure—they can simply repurpose existing assets with Juice.

2. Enabling AI at the Edge – a competitive advantage over hyperscalers

  • AI inference workloads demand real-time processing at the edge to reduce latency and improve user experiences.
  • AWS, Azure, and GCP operate centralized cloud models, increasing network latency.
  • Juice allows NSPs to deploy GPUaaS closer to users by leveraging 5G and fiber networks—enabling real-time AI for applications like autonomous vehicles, telemedicine, smart cities, and industrial automation.

3. Lowering costs & maximizing GPU utilization

  • AWS, Azure, and GCP charge premium prices for AI compute, including expensive GPU instances, data egress fees, and reserved capacity.
  • Juice maximizes GPU efficiency through dynamic sharing, pooling, and resource allocation, increasing utilization from 40-50% to near 100%.
  • This enables NSPs to offer GPUaaS at a significantly lower cost than hyperscalers, attracting cost-conscious AI enterprises.
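The economics of pooling come down to simple arithmetic: tenants with dedicated GPUs each provision for their own peak, while a shared pool only needs to cover the combined peak across time. The sketch below illustrates this with invented demand numbers; it is not Juice's scheduler, just the capacity math behind the utilization claim.

```python
# Illustrative capacity math for pooled vs. dedicated GPUs.
# All demand figures are hypothetical.

def dedicated_gpus(peak_demands):
    """Each tenant provisions GPUs for its own peak demand."""
    return sum(peak_demands)

def pooled_gpus(demand_timeline):
    """A shared pool provisions for the combined peak across time."""
    return max(sum(step) for step in demand_timeline)

# Hourly GPU demand for three tenants whose peaks do not coincide.
timeline = [
    (8, 1, 2),   # tenant A peaks in the morning
    (2, 7, 3),   # tenant B peaks midday
    (1, 2, 8),   # tenant C peaks in the evening
]
peaks = [max(step[i] for step in timeline) for i in range(3)]

print(dedicated_gpus(peaks))    # 23 GPUs provisioned separately
print(pooled_gpus(timeline))    # 12 GPUs cover the same demand when pooled
```

The fewer GPUs a pool needs to serve the same demand, the higher each GPU's utilization, and the lower the price an NSP can quote per unit of compute.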

4. Offering more flexible AI compute without vendor lock-in

  • Hyperscalers force AI developers to use proprietary cloud services and frameworks.
  • Juice enables GPUaaS that works with any AI framework or stack (TensorFlow, PyTorch, CUDA, etc.), allowing NSPs to provide more flexible, vendor-agnostic AI solutions.
  • This is crucial for enterprises looking to deploy AI workloads across hybrid and multi-cloud environments.
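Framework independence follows from where GPU-over-IP sits: it remotes the API calls the framework already makes, so application code is unchanged. The toy below sketches that API-remoting pattern with a simulated "kernel" over a local socket; the protocol, class names, and operation are invented for illustration and reflect nothing about Juice's actual wire format.

```python
import json
import socket
import threading

# Toy sketch of API remoting: the client calls a familiar local-looking
# API, and a stub forwards the call to wherever the (simulated) GPU lives.

def gpu_server(sock):
    """Accept one request, run the 'kernel', and return the result."""
    conn, _ = sock.accept()
    with conn:
        req = json.loads(conn.recv(4096).decode())
        if req["op"] == "vector_add":
            result = [a + b for a, b in zip(req["x"], req["y"])]
        conn.sendall(json.dumps(result).encode())

class RemoteGPU:
    """Client-side stub: same call signature as a local library."""
    def __init__(self, addr):
        self.addr = addr

    def vector_add(self, x, y):
        with socket.create_connection(self.addr) as c:
            c.sendall(json.dumps({"op": "vector_add", "x": x, "y": y}).encode())
            return json.loads(c.recv(4096).decode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
threading.Thread(target=gpu_server, args=(srv,), daemon=True).start()

gpu = RemoteGPU(srv.getsockname())
print(gpu.vector_add([1, 2, 3], [4, 5, 6]))   # [5, 7, 9]
```

Because the interception happens below the framework, the same pattern serves TensorFlow, PyTorch, or raw CUDA callers without any of them being rewritten.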

5. Scaling AI compute across distributed data centers

  • Juice virtualizes GPU resources across multiple locations, allowing NSPs to scale GPUaaS without needing additional hardware investments.
  • This creates a federated AI infrastructure, seamlessly connecting data centers, edge locations, and enterprise sites.
  • As AI demand grows, NSPs can expand their GPU capacity dynamically, avoiding hyperscaler lock-in.
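One way to picture a federated pool is as a placement problem: route each workload to the lowest-latency site that still has free GPUs, falling back to larger regional sites when nearby capacity is exhausted. The sketch below is a deliberately simplified, hypothetical allocator; the site names, latencies, and capacities are invented.

```python
# Hypothetical latency-aware placement across a federated pool of NSP
# sites. Site data is invented for illustration.

def place(workload_gpus, sites):
    """Pick the lowest-latency site with enough free GPUs, or None."""
    candidates = [s for s in sites if s["free_gpus"] >= workload_gpus]
    if not candidates:
        return None
    best = min(candidates, key=lambda s: s["latency_ms"])
    best["free_gpus"] -= workload_gpus
    return best["name"]

sites = [
    {"name": "central-office-A", "latency_ms": 3,  "free_gpus": 2},
    {"name": "metro-pop-B",      "latency_ms": 8,  "free_gpus": 16},
    {"name": "regional-dc-C",    "latency_ms": 25, "free_gpus": 64},
]

print(place(4, sites))   # metro-pop-B: A is closer but too small
print(place(2, sites))   # central-office-A fits the smaller job
```

A real scheduler would weigh bandwidth, data locality, and tenancy constraints as well, but the core idea is the same: capacity anywhere in the federation is usable, and the nearest viable site wins.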

The Future of AI Compute: NSPs vs. Hyperscalers

Hyperscalers bring centralized scale, mature tooling, and global reach, but at premium prices, with egress fees and latency that grows with distance from their regions. NSPs invert that equation: compute sits in POPs close to users, traffic rides owned bandwidth, and existing real estate and power are repurposed rather than built new. For latency-sensitive inference and enterprises that want sensitive data kept local, that is a structural advantage hyperscalers cannot easily replicate.

The Takeaway: NSPs can win AI compute business

By integrating Juice’s GPU-over-IP technology, NSPs can: 

  • Offer a more cost-effective alternative to AWS, Azure, and GCP.
  • Win business in AI inference & edge computing, where hyperscalers struggle.
  • Provide GPUaaS that is scalable, flexible, and optimized for real-time applications.
  • Monetize existing assets without heavy CapEx investments.
  • Deliver AI infrastructure solutions that cater to high-compliance industries.


Conclusion: Juice is the catalyst for a new AI compute market

The AI compute market is shifting, and NSPs are uniquely positioned to challenge hyperscalers by offering edge-based, low-cost, and scalable GPUaaS solutions.

By leveraging Juice’s GPU-over-IP technology, NSPs can create a next-generation AI infrastructure that:

  • Reduces costs
  • Eliminates inefficiencies
  • Enables edge AI inference
  • Delivers AI compute services that hyperscalers cannot match

With Juice, NSPs can disrupt the AI compute market, winning business from AWS, Azure, and GCP while unlocking billions in new AI revenue opportunities.

Nick Marcisz
Specializing in the development, marketing, and sales of software solutions leveraging composable computing and global networks.