The perfect cloud streaming platform would be perfectly elastic, infinitely scalable, and precisely schedulable. Those qualities are difficult to find in a single system, so rather than building one system that compromises between them, the PureWeb Platform uses "Capacity Providers."
Different Capacity Providers excel in different ways. For time-bound events with a large number of users, discuss reserving server capacity with your sales representative ahead of time. For ad-hoc, low-volume situations, our "On Demand" capacity may well fit the bill.
A hybrid approach might also be the best option: some dedicated Scheduled infrastructure is provisioned, and overflow requests are served by On Demand resources.
See below for more information on each scenario.
Scheduled

Scheduled capacity is a good fit when the usage patterns of an experience are known or predictable, when availability is needed within a known time window, or when a fixed or large amount of infrastructure (100+ sessions) is required.
Autoscaling features offer some flexibility in a scheduled environment. Note that cost is incurred for all provisioned resources, whether or not they are serving active streams.
If your project is configured to use scheduled capacity only, then requests will be routed to the nearest region that has scheduled capacity for models in that project.
On Demand

Our On Demand capacity leverages a shared pool of resources that can run any customer model. This reduces the burden of trying to predict usage patterns for your experience. Additionally, it allows you to run ad-hoc or long-lived solutions, because it removes the need to maintain a minimum amount of dedicated infrastructure to ensure your experience can be streamed.
If your project is configured to use On Demand capacity only, the system will route users to the closest region with an On Demand pool.
Limitations of On Demand
On Demand resources differ in implementation from Scheduled ones: each On Demand resource runs your model inside a container, an isolated, lightweight environment that can only run a single process at a time. This imposes some limitations:
- On Demand capacity only supports Unreal packages at this time.
- Any model that requires an active Windows desktop, or full Windows Server OS, will not work in On Demand.
- Any model that tries to launch an external process or executable will not be able to do so inside the container environment.
- DLSS and in-game video playback do not currently work in On Demand.
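As a rough illustration of those constraints, a pre-flight compatibility check might look like the sketch below. The field names and rules are assumptions made for the example; they are not a real platform API:

```python
# Illustrative pre-flight check for On Demand compatibility.
# The model-description fields used here are hypothetical.

def on_demand_issues(model: dict) -> list[str]:
    """Return the reasons a model cannot run in an On Demand container."""
    issues = []
    if model.get("engine") != "unreal":
        issues.append("On Demand only supports Unreal packages")
    if model.get("needs_windows_desktop"):
        issues.append("requires an active Windows desktop or full Server OS")
    if model.get("launches_external_processes"):
        issues.append("launches external processes or executables")
    if model.get("uses_dlss") or model.get("plays_video"):
        issues.append("relies on DLSS or in-game video playback")
    return issues

print(on_demand_issues({"engine": "unreal"}))
print(on_demand_issues({"engine": "unity", "uses_dlss": True}))
```

A model that passes every check (an empty issue list) is a candidate for On Demand; any flagged model should use Scheduled capacity instead.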
Hybrid

In a hybrid scenario, your project is configured to use Scheduled capacity that spills over into On Demand capacity when necessary.
The system will route users to the nearest region that has Scheduled capacity first. If there is no scheduled capacity available, they will be routed to the On Demand provider in that region.
Note that autoscaling of the Scheduled resources does not occur in a hybrid deployment.
Currently, requests stay within predefined regions to avoid high-latency connections. For example, in a hybrid scenario, a request from North America would not be connected to a server in Asia or Europe; it would always be fulfilled by resources within North America.
Scheduled / On Demand Comparison
| | Scheduled | On Demand |
| --- | --- | --- |
| Capacity Guarantees | Guaranteed (1000s of users/model, in each deployed region) | Best effort (<500 users/model, in each deployed region) |
| Launch Time | If < capacity: 10-30 sec; worst case: scale-up time + launch time | If < capacity: 15-90 sec; worst case: pool scale-up time + worst-case launch time |
| Scale Up Time | ~6-7 min | ~10 min |
| Sessions Per Resource | 1+ (depends on game performance) | 1 |
| Available Regions | Customer selected | North America, Europe, Asia-Pacific |
| Compatibility | Unreal / Unity | Unreal |
| Pay For | Infrastructure Time | Streaming Time |
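The "Pay For" row is the key cost difference: Scheduled bills for the time infrastructure is provisioned, while On Demand bills only for time actually streamed. A toy comparison makes the trade-off concrete; the hourly rates below are placeholders, not real pricing:

```python
# Toy cost comparison. The rates are placeholders, not real pricing.
SCHEDULED_RATE = 2.00   # $/hour per provisioned server (hypothetical)
ON_DEMAND_RATE = 3.00   # $/hour per active stream (hypothetical)

def scheduled_cost(servers: int, provisioned_hours: float) -> float:
    # Billed for provisioned infrastructure, whether streaming or idle.
    return servers * provisioned_hours * SCHEDULED_RATE

def on_demand_cost(stream_hours: float) -> float:
    # Billed only while streams are active.
    return stream_hours * ON_DEMAND_RATE

# A 4-hour event with 10 servers provisioned but only 12 stream-hours used:
print(scheduled_cost(10, 4))   # 80.0
print(on_demand_cost(12))      # 36.0
```

At low utilization, On Demand's pay-per-stream model wins; as utilization approaches capacity, a higher infrastructure rate spread over many streams can make Scheduled the cheaper option.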