2022-22 (Dec 16, 2022)

Summary

Release 2022-22 is all about support for Unreal 5.1.

Epic released version 5.1 in mid-November. Whenever Epic or Unity release a new version of their engines, there is always a question of whether the new release will break compatibility with our platform or SDK. Thankfully, we were able to test compatibility while 5.1 was in preview, so we knew what to expect.

As part of this release, there is a new PureWeb plugin for Unreal, and a new SDK (v3.22). We’ve also updated all of our streaming runtime environments to ensure they have the new dependencies necessary to run 5.1 models.

In Unreal 5.1, Epic re-introduced the ability to resize the game viewport based on the web client dimensions. In our testing, we found it fairly easy to crash the Unreal game by resizing aggressively. While this only happens about 20% of the time, that is far too frequent for us to feel confident supporting the capability. Digging in further, we found that the root cause lies within Unreal’s Pixel Streaming plugin, so we’ve escalated a support issue with Epic’s team in the hope of finding a fix in short order. In the New Year, we look forward to once again providing full support for resizing.

Deprecation Notice

With the newly added support for 5.1, we are dropping SDK support for UE 5.0. Our officially supported Unreal engine versions are now:

  • 5.1.x Full SDK and Platform support
  • 5.0.x Platform support only
  • 4.27.x Full SDK and Platform support

Click here for more information on our deprecation policy.

Bug Fixes

This release also fixes a defect in our WebRTC signaling system where clients would occasionally fail to connect to the Unreal game. The root cause lay in one of the AWS SDKs we employ in our own SDK. Upgrading this dependency has resolved the issue.

2022-21 (Dec 12, 2022)

Summary

Release 2022-21 delivers a major overhaul to our global routing system.  With this release we are aiming to improve the system in a number of ways:

  • Enable the system to seamlessly route load across multiple underlying cloud infrastructure providers.  With the rapidly approaching release of our CoreWeave cross-cloud on-demand providers, we wanted to ensure that the core routing system could intelligently make the right routing decision when managing load between multiple clouds for both our dedicated and on-demand provider types.
  • Provide the global routing system with more actionable data so that it can make better routing choices.  Previously, the only data the routing system had access to when making a routing selection was the end-user client latency and the project provider configuration (dedicated, on-demand or hybrid).  Now, in addition to that data, all of our providers send back real-time load metrics to the routing system on a per-model basis (a sketch of this data appears after this list).  This allows the routing system to know:
    1. How much free session capacity exists for a given model within a given provider.  This tells us whether a provider can service a given session immediately.
    2. How much total capacity exists for a given model within a given provider.  This helps the routing system make a judgement about how quickly free capacity might become available for a given model.  For example, a model that has 100% utilization within a given provider but only 20 units of total capacity will see a far lower turnover rate than a model that is at 100% utilization with 400 units of total capacity.
    3. Queue length for a given model within a given provider.  In all but the highest-utilization scenarios, this number will be zero.  But when it’s not zero, it’s tremendously helpful to know, so that we can again estimate how long a user might end up waiting in queue.
    With all this additional information, the routing system can now make far better choices when dispatching load.  What this means for end users is that, in general, they will be routed to a streaming provider that is geographically closer to them, resulting in lower network latency and higher frame rates.  They are also less likely to find themselves waiting in queue to get a streaming session.
  • Improve the overall performance of the routing system under heavy load.  Specifically, we were able to decrease routing time by 16% for our dedicated provider types, and 14% for our on-demand providers.
  • Eliminate the concept of geographical boundaries from routing decisions.  Previously, we artificially grouped streaming providers into geographical regions (North America, Asia-Pacific, etc.).  Before we had the provider load data, these boundaries provided a helpful mechanism for deciding how far afield a user should be routed when searching for on-demand capacity for a hybrid-configured model.  However, this approach had limitations for users located on the edges of those boundaries, who could end up sub-optimally routed to a much more distant provider, or waiting in queue.  With the new metrics mentioned above, the routing system no longer needs to constrain its route selection along arbitrary geographical boundaries (the shortest distance between two points on a map is not necessarily the shortest distance between those points on a network).
  • Fix a defect in the routing system wherein a model configured for a hybrid provider would not immediately switch to on-demand capacity once dedicated capacity had been exhausted.  This could result in a user being routed to a dedicated provider, only to wait in queue even though there was free on-demand capacity to run the session.
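
To illustrate the new inputs available to the routing system, here is a minimal sketch of the per-model load report each provider now sends back.  This is illustrative only; the field names and types are assumptions made for the example, not the actual wire format.

    // Illustrative only: the shape of the per-model load data each streaming
    // provider now reports to the global routing system (field names assumed).
    interface ModelLoadReport {
      providerId: string;
      providerType: 'dedicated' | 'on-demand';
      modelId: string;
      freeCapacity: number;   // sessions this provider could start for the model right now
      totalCapacity: number;  // total session slots for the model (hints at queue turnover)
      queueLength: number;    // launch requests currently waiting (zero in most scenarios)
    }

    // Example: fully utilized, but 400 total slots means the queue should turn
    // over far faster than a provider at the same utilization with only 20 slots.
    const report: ModelLoadReport = {
      providerId: 'aws-us-east-dedicated',
      providerType: 'dedicated',
      modelId: 'example-model',
      freeCapacity: 0,
      totalCapacity: 400,
      queueLength: 3,
    };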

Routing System Behavior

With the rollout of this new routing system, we wanted to provide a summary of the general heuristics at play.  Please note that the routing system is a highly parallelized, non-deterministic system, so while the changes we’ve made significantly improve the routing choices, we can’t guarantee that every user will get the globally optimal routing selection 100% of the time.  What we can say is that the selection they get will be, at worst, the same as under the previous system and, in the typical case, significantly better.

Dedicated Only OR On-demand Only

For users launching models that are configured with only dedicated provider types or only on-demand provider types, the following rules will be used:

  1. Streaming providers with the lowest latency and free capacity are given priority.
  2. If there is no free capacity, the routing system will factor in current queue lengths of all viable providers, finding a streaming provider that is as close as possible with the shortest possible queue length.

Hybrid

For users launching models that are configured to take advantage of both dedicated and on-demand provider types, the following rules will be used (a simplified sketch of this logic follows the list):

  1. Dedicated providers that have lower latency than on-demand providers and free capacity will always be given priority.
  2. In a scenario where there is free capacity for both dedicated and on-demand providers, and the end-user latencies are similar, preference is still given to the dedicated providers, even if the dedicated provider has slightly higher latency (this allows users to make the best use of their dedicated resources).
  3. If free capacity for dedicated providers has been exhausted, the best on-demand provider will be selected (computed from latency and provider load).
  4. In the extremely rare case where there is no free capacity in either the dedicated or on-demand providers, the routing system will factor in the current queue lengths of all viable providers, finding a streaming provider that is as close as possible with the shortest possible queue length.
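
To make these rules concrete, here is a simplified, illustrative sketch of the hybrid selection logic.  It is not the production implementation: the scoring, the “similar latency” threshold, and the tie-breaking are assumptions chosen purely to show how latency, free capacity, total capacity, and queue length interact.

    // Illustrative only: a simplified version of the hybrid routing rules above.
    interface Candidate {
      kind: 'dedicated' | 'on-demand';
      latencyMs: number;      // end-user client latency to the provider
      freeCapacity: number;   // free session slots for the requested model
      totalCapacity: number;  // total slots (proxy for how fast a queue drains)
      queueLength: number;    // launch requests already waiting
    }

    const SIMILAR_LATENCY_MS = 30; // assumed threshold for "similar" latency

    function pickHybridProvider(candidates: Candidate[]): Candidate | undefined {
      const free = candidates.filter(c => c.freeCapacity > 0);

      if (free.length > 0) {
        const byLatency = [...free].sort((a, b) => a.latencyMs - b.latencyMs);
        const closest = byLatency[0];
        // Rules 1 & 2: prefer a dedicated provider with free capacity, even if its
        // latency is slightly higher than the closest free provider.
        const dedicated = byLatency.find(c => c.kind === 'dedicated');
        if (dedicated && dedicated.latencyMs - closest.latencyMs <= SIMILAR_LATENCY_MS) {
          return dedicated;
        }
        // Rule 3: dedicated capacity exhausted (or much further away), so take the
        // best remaining (typically on-demand) provider.
        return closest;
      }

      // Rule 4: nothing free anywhere, so weigh queue length (scaled by how quickly
      // the queue should drain, using total capacity) against proximity.
      const score = (c: Candidate) =>
        c.latencyMs + (c.queueLength / Math.max(c.totalCapacity, 1)) * 1000;
      return [...candidates].sort((a, b) => score(a) - score(b))[0];
    }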

2022-20 (Nov 24, 2022)

Summary

This release was focused exclusively on delivering security improvements for items identified in our annual 3rd party penetration test.

2022-19 (Nov 9, 2022)

Summary

Today’s release is a collection of minor bug fixes and stability improvements focused on improving the accuracy of model availability status information. We had observed scenarios where ample server capacity for a given model was available in either our On-Demand or Dedicated environments, yet the model would periodically show up as ‘unavailable’ in the console before returning to the correct ‘available’ status. These fixes improve the accuracy of the availability flag in the developer console.

2022-18 (Nov 8, 2022)

Summary

Today we released an update to the core platform. The majority of the changes were remediations of items identified in our annual penetration test report. Of these changes, only a couple have a user-facing component:

  • We’ve added an improved UI for providing feedback on password complexity requirements when signing up for a new account or when resetting your password.
  • Improved the password reset workflow so it is no longer possible for new and unverified users to end up creating an orphaned account.

Beyond the security improvements, we’ve made a handful of minor UI changes to improve the consistency of terminology between console.pureweb.io and developer.pureweb.io. Finally, we made a few small performance improvements to the core data model powering our global routing system. These changes are in preparation for a larger suite of improvements which will be landing in the coming weeks.

2022-17 (Sept 29, 2022)

Summary

We released a small but meaningful change to our on-demand system.  After extensively profiling the performance of Unreal applications in our on-demand environment, we have been able to adjust how CPU and RAM are allocated between the secure guest container and the host VM.  Specifically, we’ve been able to increase the CPU and RAM allocations by about 50% without compromising the compute requirements of the underlying host.  As a result, we’ve seen a boost of more than 10 FPS in some Unreal applications running in this environment.

2022-16 (Sept 14, 2022)

Summary

Today’s release has two major themes: improvements to our CoreWeave provider (which is still in beta), and a variety of minor security improvements throughout the platform.

CoreWeave Improvements

The major improvement in this release is that our CoreWeave provider can now run multi-region. Now that multi-region support is possible for this on-demand streaming provider, we’ve rolled out a small cluster of servers in New Jersey to complement the existing cluster in Las Vegas. As our work on CoreWeave nears a “1.0” level of maturity, we’ll begin increasing the size and number of these provider types within our on-demand offering. Beyond multi-region support, two additional defects were addressed: one where models provisioned into this provider were not becoming available automatically, and another where certain models would hold onto compute resources despite the streaming session being terminated.

Security Improvements

  • New projects will be private by default. Previously, any models uploaded to a new project would have anonymous access turned on by default. Now, all new projects will have anonymous access turned off, and it must be turned on manually before links can be shared out. As always, if you want your users to have secure, authenticated access to your streaming experiences, we suggest you create an authentication endpoint for your project: GitHub - pureweb/platform-auth-example.
  • Console password policy now has higher complexity requirements. Users creating new accounts, or changing their password, will be required to adhere to higher complexity requirements (length, special characters, etc.). This new policy will not require existing users to update their passwords.
  • Adjusted the failed login error messaging so that it is the same whether the username/password is incorrect or the account in question doesn’t exist.  The previous error message could allow a malicious user to determine whether a given email address had an account on the platform (user enumeration).
  • Reduced the login token expiry for console users. These tokens previously had a 30-day expiry; they now expire after 24 hours.

Miscellaneous

  • We’ve improved server scale-up time for dedicated instances by approximately 90 seconds.

2022-15 (August 17, 2022)

Summary

This release involved an upgrade to the underlying virtual server type in our new CoreWeave Beta streaming infrastructure provider, as well as a fix for a key defect related to the CoreWeave beta. Additionally, we released a new version of our Unreal 4.27 plugin.

CoreWeave Infrastructure Update

Based on internal testing and customer feedback of the CoreWeave beta thus far, we’ve chosen to upgrade the class of servers we’re using under the hood. Previously we were using servers with an Nvidia RTX 4000 GPU; we’ve now upgraded to an RTX 5000 GPU. These instances also have 24 GB of RAM and 4 CPUs. We will continue to evaluate streaming performance and make further changes if necessary in order to ensure a superb streaming experience.

Unreal 4.27 Plugin

We have made an adjustment to the resizing behavior in our Unreal 4.27 plugin. Specifically, we’ve found that rapid resizing of the pixel streaming frame buffer can cause Unreal models to crash. Epic has validated this finding, which is part of why the resizing was temporarily removed from 5.0. As a temporary fix, we’ve modified our Unreal plugin to only process one resize event every second. This significantly improves the stability of models that use this functionality. Find the new plugin here.

Defect Fixes

  • Intermittent availability of models when initially deploying to the CoreWeave provider. This defect was due to an incompatibility in the unzipping library we use in our automated provisioning pipeline. We’ve switched to a new decompression system, and models in CoreWeave are now provisioning correctly.

2022-14 (August 3, 2022)

Summary

Today’s release marks a major delivery of a new and improved on-demand provider type into the PureWeb platform, and the introduction of an entirely new underlying cloud infrastructure provider. Supplementing our existing AWS providers, we’ve built out a new provider on a GPU-specific cloud called CoreWeave.

The data storage system and the GPU quantity and type in CoreWeave have allowed us to build an on-demand provider that will have significantly faster launch times than our current on-demand offering, and will be capable of providing on-demand streaming for Unity models (in addition to Unreal). Models deployed to this provider should also see better streaming performance, and significantly better scale and elasticity, all without any change in price.

The beta provider is running only in North America, but it can be accessed by users anywhere in the world. The provider currently offers access to several thousand RTX 4000 GPUs.

The provider is not currently capable of running in a hybrid configuration with other PureWeb providers, meaning that a project can run either on our existing AWS-based provider types or on our new CoreWeave on-demand provider, but not both. Adding hybrid support will be a priority in the coming months. This will allow users to use this new provider type in conjunction with both existing on-demand providers and dedicated providers.

This new provider type is currently in beta, and will continue to be the focus of improvements over the next 1-3 months (depending on beta feedback). Anyone looking to try out this new on-demand provider can contact customersuccess@pureweb.com about being enrolled in the beta.

2022-13 (July 21, 2022)

Summary

The release this week adds a new API in our SDK to configure the maximum stream resolution for Unreal Pixel Streaming applications. Specifically, you can now configure the upper limit of the stream resolution to be up to UHD or 4k (3840x2160).

To set the new resolution limit, you can pass a resolution either on the query string of your client (?resolution=uhd), or in your client.json file by adding "resolution":"uhd".  In addition to this, you’ll need to add the following parameter to your <VideoStream> tag:

Resolution={streamResolutionConfiguration(res)}.
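
For reference, here is a minimal sketch of how these pieces fit together in a React client.  The import path and the surrounding props are assumptions based on a typical client template built on our React SDK; streamResolutionConfiguration and the Resolution parameter are the pieces described above, so adapt the rest to your own client.

    // Minimal sketch: wiring the resolution limit into the VideoStream tag.
    // The import path below is an assumption; check your client template for
    // the exact package and exports used in your project.
    import React from 'react';
    import { VideoStream, streamResolutionConfiguration } from '@pureweb/platform-sdk-react';

    // Read the limit from the query string (?resolution=uhd); setting
    // "resolution":"uhd" in client.json is the equivalent configuration.
    const params = new URLSearchParams(window.location.search);
    const res = params.get('resolution') ?? 'fhd'; // fall back to 1080p

    type StreamViewProps = Omit<React.ComponentProps<typeof VideoStream>, 'Resolution'>;

    export function StreamView(props: StreamViewProps) {
      return (
        <VideoStream
          {...props} // the usual props from your client template (stream, input emitters, etc.)
          Resolution={streamResolutionConfiguration(res)}
        />
      );
    }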

You can set the upper resolution limit to be any of the following:

  • sxga -> [1280x1024]
  • hd -> [1366x768]
  • hdplus -> [1600x900]
  • fhd -> [1920x1080]
  • wuxga -> [1920x1200]
  • qhd -> [2560x1440]
  • wqhd -> [3440x1440]
  • uhd -> [3840x2160]

There are a few important notes about this change:

  1. If you set a higher upper limit on the resolution of your client, the bandwidth requirements for any of your users streaming your model will increase significantly.  Specifically, to stream 4k, your users will require an absolute minimum of a 25 Mbps connection.
  2. In our testing we’ve noticed significant instability in the pixel streaming plugin when streaming at higher resolutions. We found that much of this instability was addressed in the recent 5.0.3 release of Unreal Engine; so if you plan on using higher resolutions, we recommend updating to 5.0.3 or later.
  3. Due to the bandwidth requirements at higher resolutions, the Pixel Streaming plugin can frequently degrade the resolution of the stream to compensate, causing the instability described in #2. If you intend to use higher resolution settings, we strongly recommend passing -PixelStreamingWebRTCDegradationPreference="MAINTAIN_RESOLUTION" as a runtime parameter for your model.  This will improve the stability of streaming in engine versions earlier than 5.0.3, and it will improve the visual quality (albeit at the expense of framerate).

You can get this version of the SDK from NPM here.

2022-12 (July 7, 2022)

Summary

This release upgrades the underlying storage class for our dedicated / scheduled providers.  This new storage class is considerably faster, meaning that scaling up compute takes about one minute less than previously, and general launch performance on already-running instances should also see a slight improvement.

2022-11 (June 21, 2022)

Summary

This release primarily focuses on enabling Nanite capabilities for Unreal 5 models, along with a small bug fix and performance improvement.

Scheduled & On-demand Providers

The underlying virtualization infrastructure for both the scheduled and on-demand provider types is now based on the latest Windows Server 2022 OS image. Crucially, this image includes the new DirectX Agility SDK which is a requirement for Nanite support in Unreal 5 models.

Bug Fixes

A defect was fixed in how we keep track of which models are provisioned to which streaming provider types. In rare instances, a model would show as available when there was actually no provider capable of launching the model in question.

Performance Improvements

We’ve undertaken a platform-wide performance tuning exercise to better optimize the infrastructure allocated to platform services. This will not impact streaming performance, but general platform operations (logging in, listing models, requesting tokens, etc.), whether through the PureWeb console or the CLI, should see a small improvement in response times.

2022-10 (June 13, 2022)

Summary

This release includes a new Unity plugin for the latest LTS version of Unity, as well as configuration changes to our on-demand provider type which will enable models running on this system to access external HTTPS endpoints.

Unity Plugin

We have released a new version of our Unity plugin (v.1.11.0).  This plugin adds support for the new LTS version of Unity 2021.3.  This plugin has also been validated on the 2020 LTS version of Unity.  Internally, the plugin is dependent on version 1.3.0 of the Unity Input System, and 2.3.3-preview of the WebRTC framework.  For more information on how to update your Unity project to this latest version, head over to our developer hub article here: Prepare your Unity project.

On-Demand Provider

We have fixed a defect in our On-demand provider environment where the runtime environment was missing some root SSL certificates, which prevented models running in this environment from downloading anything from an HTTPS endpoint.  Most notably, this fixes issues with Cesium-based digital twins, which were unable to download external map data.

2022-9 (May 31, 2022)

Summary

This release delivers a new PureWeb plugin for Unreal and a corresponding Platform SDK and Streaming Agent (v3.13.5) that add support for Unreal 5.

Additionally, this release includes minor security improvements for the front end components of the platform as well as stability improvements to the messaging system between the core platform and our Scheduled providers.

Detailed Notes

The release of UE5 has introduced some challenges related to resizing and collaboration in Pixel Streaming. Unfortunately, due to these changes, the resizing capabilities provided by our Unreal 5.0 plugin are not equivalent to what is present in our Unreal 4.27 plugin.

Resizing: Epic’s Pixel Streaming plugin in Unreal 5.0 has dropped support for dynamic resizing of the pixel stream.  The underlying mechanism that our plugin called to resize the game (PixelStreamingWebRTCDisableResolutionChange) no longer works.  We've been in touch with the dev team at Epic and learned that they removed this functionality due to instability. They're working on an improved API for resizing which should ship with UE 5.1.  Until 5.1 is released, the game will stream to the browser at whatever resolution the game was initialized at.

  • While Epic does provide their own r.SetRes command, this does not work correctly in our on-demand environment.  To help with this, we’ve introduced two new sizing command line arguments in our Unreal plugin:
    -PixelStreamingResX=1920 -PixelStreamingResY=1080.
  • These arguments will allow you to set the initial size of your stream when your model launches.  However, because UE5.0 only propagates resolution changes into Pixel Streaming one time (during initialization), subsequent calls to resize the pixel stream (i.e. from the browser) do not currently work.
  • You can add these command line arguments under the Runtime Customizations section of your Model Details screen in the PureWeb console.

SDK & Collaboration: Changes in UE5.0 have necessitated a new platform SDK (v3.13.5) to ensure that streaming works for hosts and collaborators correctly.

  • If you have an Unreal 5 model, then upgrading your client SDK will be required.
  • If you have a working Unreal 4.27 model, you can continue to use the 4.27 version of the plugin, and not upgrade the SDK in your custom client.

Edge case: The resizing changes in the SDK for UE5.0 are incompatible with 4.27.  As of this new release, the UE5.0 code path is considered the default.  This is fine if you are running a UE5.0 model on the back end; however, if you are running a 4.27 model against the latest SDK, resizing and collaboration will not work correctly.  To work around this, our SDK needs to know if it’s talking to a UE4.27 model, but we do not yet have a mechanism for automatically detecting this in our platform.

  • In order to tell the PureWeb platform that you want to run a 4.27 model against the latest SDK, you can specify this in the environment for your model.
  • Under the Runtime Customizations section of your Model Details screen in the PureWeb console, add Key:UE_VERSION Value:4.27
  • The change to support this new environment variable lives in our Streaming Agent microservice, which you may use in the local development workflow.  As such, updating your copy of the Streaming Agent will be necessary if you want to use a 4.27 model with the latest SDK locally.

Finally, as part of prioritizing the UE5.0 default behavior, we’ve updated the SDK version in the preview client in the PureWeb console.  This means that if you want resizing and collaboration to work correctly from the preview client to your 4.27 model, you’ll also have to add the above environment variable.

Deprecation Notice:

The release of our plugin for Unreal 5.0 means we’re formally dropping support for Unreal 4.26.x.  As always, that doesn’t mean 4.26 models won’t continue to work on our platform.  It does mean that we no longer guarantee support, backwards compatibility or other fixes, and we encourage users to update to one of the two supported versions (4.27.x and 5.0.x at the time of writing).

Where Can I Get It?

If you are using UE5.0, you can get the new version of our UE5.0 plugin here.

If you are using UE4.27, you can continue to use the existing version of our 4.27 plugin here.

Finally, you can get the client template, SDK and Streaming Agent for your local development workflow on NPM here:

Upcoming Unreal Changes on PureWeb

We are planning to add Nanite support to our Scheduled offering in the next few weeks, and support for Nanite in our on-demand offering will follow but doesn’t have a firm ETA.

DLSS support on On-Demand is still missing.  We’ve been working closely with Nvidia, and they’ve reproduced the issue; this should hopefully result in a fix, though the ETA for it is also unknown at this point.

2022-8 (May 18, 2022)

Summary

Update to our Unity plugin.

New Unity Plugin

This release focuses on delivering an updated version of our Unity plugin.  Version 1.10.0 is intended to provide updated support for more recent versions of Unity and the underlying libraries needed to drive the render streaming experience.

Specifically, v1.10.0 of the PureWeb plugin supports Unity engine 2021.2 and 2021.1, and relies on v2.4 of the WebRTC framework.

For details on integrating our Unity plugin into your game project, click here.

2022-7 (May 3, 2022)

Summary

Platform updates to provide initial support for Unreal 5.

Platform

We have updated the NVidia drivers on our scheduled and on-demand providers to 511.65, which is necessary to support models built with Unreal 5. This is the first in a series of releases over the coming weeks to improve and expand our support for UE5.

However, there are some limitations to be aware of when using the PureWeb Reality platform with a UE5 game package:

  • View re-sizing: Epic removed the ability to resize pixel streaming models to match their client side view. Epic anticipates that this feature will be re-added in Unreal 5.1 but we do not have a release date from them at this time. If re-sizing is key to your streaming experience, we advise you stick with 4.27 for now.
  • Collaboration and view-sharing: does not work in Unreal 5 due to a change to the client input handler. We expect to have an update to our SDK that will resolve this in the coming weeks.
  • DLSS: NVidia support for DLSS in UE5 is still inconsistent and may not work. As DLSS support from Epic improves, we will make the necessary platform changes to support it fully, but at this time DLSS functionality in-engine is not guaranteed.
  • Nanite: The new Nanite mesh system in UE5 is not currently supported on our platform. We’ve received guidance from Epic on adding this support, which has been implemented by our development team and is currently undergoing testing. We anticipate Nanite support arriving in the next couple of weeks.

2022-6 (April 26, 2022)

Summary

This release contains a permanent fix for the Feb 17 outage, plus minor console fixes.

We anticipate another release in the coming days which will include initial support for UE5.

Platform

Overhauled the communication path between the platform and our regional streaming services. A performance bottleneck in this communication path was responsible for the Feb 17th outage. This update significantly improves upon the temporary fix that was put in place immediately after the outage.

Messaging between components has seen performance improvements, which has resulted in reducing launch request processing times by 3-5 seconds in the worst cases.

Fixes

Fixed a rendering issue on the console where the Availability and VP configuration tags on the model cards were rendering on top of the footer.

Repaired an issue with custom runtime arguments where the arguments would be reset if a user uploaded a new model with no custom runtime parameters, added parameters through the model details dialog, then immediately uploaded a new version. Parameters now persist in this edge case.

2022-5 (March 30, 2022)

Summary

This release consists of a couple of quality-of-life improvements in the console, as well as a minor fix in our global routing system.

Console

The model card will now display a tag that shows if the model in question is configured to run on our scheduled, on demand or hybrid streaming providers.

Custom command line and environment configuration on the models will persist when a new version is uploaded.  Previously all new model versions had these fields reset.

The Terms of Service link in the console footer has been updated to point to the current EULA.

Fixes

Fixed a defect for an edge case that could randomly result in a stream failing to route to a streaming provider properly if the selected streaming provider was in the middle of refreshing its connection to the platform.  There was a <15 second window every hour when a launch request would have a 50% chance of not being correctly dispatched to a streaming provider.

2022-4 (March 22, 2022)

Summary

This release was primarily focused on the on-demand deployment type.

Fixes

Improvement to the on-demand provider which eliminates a 2-minute delay that would be experienced by the first user who launched a stream on a newly scaled up server in the on-demand pool (regardless of the model they were launching).

2022-3 (March 10, 2022)

Summary

This release was primarily focused on three key areas:

  1. Launch Request speed for On Demand models
  2. Refactoring and improvements in the platform Launch Request pipeline
  3. Support for the latest version of DLSS on both Scheduled and On Demand providers.

Launch request performance, particularly in the On Demand provider, will continue to be a focus for the team in the coming weeks.

Features

Scheduled Providers

  • Changes to allow models running in the scheduled virtualization environment to use the latest DLSS support.  The current GPU driver version is 472.39 allowing support for DLSS 2.2.2+.

On Demand Providers

  • Switched decompression tools for models provisioned to On Demand instances.  This has decreased launch time of On Demand sessions by 30%-70%.

Console

  • The model upload workflow now allows users to manually override the operating system and game engine identified at upload time.  This was added because the heuristic used to classify model uploads is not 100% accurate.  This ensures those values can be corrected by the user if needed.
  • Models running through an On Demand provider type will no longer have the Schedule Change button under the details menu of the model card.  Models configured for Scheduled providers, or those using a hybrid configuration, will still have the button.  This was removed for models running exclusively on On Demand because these models do not have schedules.

CLI

  • Breaking Change: Operating system and game engine are now required fields when uploading a model via the CLI.  The parameters can be supplied non-interactively using --os windows and --engine unreal; if they are not supplied, the CLI will interactively prompt the user for these values.

Fixes

Multiple improvements to launch request handling within the platform:

  • Eliminated a bottleneck in the communication channel between the platform and all cloud virtualization providers.  This bottleneck was responsible for the 30-minute outage on February 17th and was causing a small number of launch requests to time out without ever getting a session.  This has improved the speed with which launch requests are serviced for both Scheduled and On Demand models.
  • Fixed an edge case where if multiple launch requests were in queue on an On Demand provider for the same model, and that provider was still in the process of provisioning the model, only the first launch request would be serviced.