2022-8 (May 18, 2022)
Update to our Unity plugin.
New Unity Plugin
This release focuses on delivering an updated version of our Unity plugin. Version 1.10.0 is intended to provide updated support for more recent versions of Unity and the underlying libraries needed to drive the render streaming experience.
Specifically, v1.10.0 of the PureWeb plugin supports Unity engine 2021.2 and 2021.1, and relies on v2.4 of the WebRTC framework.
For details on integrating our Unity plugin into your game project, click here.
2022-7 (May 3, 2022)
Platform updates to provide initial support for Unreal 5.
We have updated the NVidia drivers on our scheduled and on-demand providers to 511.65, which is necessary to support models built with Unreal 5. This is the first in a series of releases over the coming weeks to improve and expand our support for UE5.
However, there are some limitations to be aware of when using the PureWeb Reality platform with a UE5 game package:
- View re-sizing: Epic removed the ability to resize pixel streaming models to match their client-side view. Epic anticipates that this feature will be re-added in Unreal 5.1, but we do not have a release date from them at this time. If re-sizing is key to your streaming experience, we advise you to stick with 4.27 for now.
- Collaboration and view-sharing: these do not work in Unreal 5 due to a change to the client input handler. We expect to have an update to our SDK that will resolve this in the coming weeks.
- DLSS: NVidia support for DLSS in UE5 is still inconsistent and may not work. As DLSS support from Epic improves, we will make the necessary platform changes to support it fully, but at this time DLSS functionality in-engine is not guaranteed.
- Nanite: The new Nanite mesh system in UE5 is not currently supported on our platform. We’ve received guidance from Epic on adding this support, which has been implemented by our development team and is currently undergoing testing. We anticipate Nanite support arriving in the next couple of weeks.
2022-6 (April 26, 2022)
This release contains a permanent fix for the Feb 17 outage, plus minor console fixes.
We anticipate another release in the coming days which will include initial support for UE5.
Overhauled the communication path between the platform and our regional streaming services. A performance bottleneck in this communication path was responsible for the Feb 17th outage. This update significantly improves upon the temporary fix that was put in place immediately after the outage.
Messaging between components is now faster, reducing launch request processing times by 3-5 seconds in the worst cases.
Fixed a rendering issue on the console where the Availability and VP configuration tags on the model cards were rendering overtop of the footer.
Repaired an issue with custom runtime arguments where the arguments would be reset if a user uploaded a new model with no custom runtime parameters, added parameters through the model details dialog, then immediately uploaded a new version. Parameters now persist in this edge case.
2022-5 (March 30, 2022)
This release consists of a couple of quality-of-life improvements in the console, as well as a minor fix in our global routing system.
The model card now displays a tag showing whether the model in question is configured to run on our scheduled, on-demand, or hybrid streaming providers.
Custom command line and environment configuration on the models will persist when a new version is uploaded. Previously all new model versions had these fields reset.
The Terms of Service link in the console footer now points to the updated EULA.
Fixed an edge case where a stream could randomly fail to route to a streaming provider if the selected provider was in the middle of refreshing its connection to the platform. There was a window of under 15 seconds every hour during which a launch request had a 50% chance of not being correctly dispatched to a streaming provider.
2022-4 (March 22, 2022)
This release was primarily focused on the on-demand deployment type.
Improved the on-demand provider to eliminate a 2-minute delay experienced by the first user to launch a stream on a newly scaled-up server in the on-demand pool (regardless of which model they were launching).
2022-3 (March 10, 2022)
This release was primarily focused on three key areas:
- Launch Request speed for On Demand models
- Refactoring and improvements in the platform Launch Request pipeline
- Support for the latest version of DLSS on both Scheduled and On Demand providers.
Launch request performance, particularly in the On Demand provider, will continue to be a focus for the team in the coming weeks.
- Changes to allow models running in the scheduled virtualization environment to use the latest DLSS support. The current GPU driver version is 472.39 allowing support for DLSS 2.2.2+.
On Demand Providers
- Switched decompression tools for models provisioned to On Demand instances. This has decreased launch time of On Demand sessions by 30%-70%.
- The model upload workflow now allows users to manually override the operating system and game engine identified at upload time. This was added because the heuristic used to classify model uploads is not 100% accurate. This ensures those values can be corrected by the user if needed.
- Models running through an On Demand provider type will no longer have the Schedule Change button under the details menu of the model card. Models configured for Scheduled providers, or those using a hybrid configuration, will still have the button. The button was removed for models running exclusively On Demand because these models do not have schedules.
- Breaking Change: Operating system and game engine are now required fields when uploading a model via the CLI. The parameters can be supplied non-interactively (e.g. --engine unreal); if they are not supplied, the CLI will interactively prompt the user for these values.
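The pattern described above, required fields that accept a non-interactive flag and fall back to a prompt, can be sketched as follows. This is an illustrative sketch only, not the actual PureWeb CLI: the `--os` flag name, the choice lists, and the prompt wording are all assumptions (only `--engine unreal` appears in the release note).

```python
import argparse

def get_upload_options(argv=None):
    """Sketch of a CLI upload flow where engine and OS are required,
    but may be supplied either as flags or via interactive prompts.
    Flag names other than --engine are hypothetical."""
    parser = argparse.ArgumentParser(description="model upload (sketch)")
    parser.add_argument("--engine", choices=["unreal", "unity"])
    parser.add_argument("--os", choices=["windows", "linux"])  # hypothetical flag name
    args = parser.parse_args(argv)

    # Fall back to an interactive prompt for any value not supplied as a flag.
    if args.engine is None:
        args.engine = input("Game engine (unreal/unity): ").strip().lower()
    if args.os is None:
        args.os = input("Operating system (windows/linux): ").strip().lower()
    return args
```

Supplying both flags (e.g. in a CI pipeline) skips the prompts entirely, which is what makes the non-interactive path possible.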
Multiple improvements to launch request handling within the platform:
- Eliminated a bottleneck in the communication channel between the platform and all cloud virtualization providers. This bottleneck was responsible for the 30-minute outage on February 17th and was causing a small number of launch requests to time out without ever getting a session. This has improved the speed with which launch requests are serviced for both Scheduled and On Demand models.
- Fixed an edge case where if multiple launch requests were in queue on an On Demand provider for the same model, and that provider was still in the process of provisioning the model, only the first launch request would be serviced.
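The fixed behavior in that last item can be pictured with a minimal sketch. The data model here (a queue of request dicts keyed by model name) is an assumption for illustration, not the platform's actual code; the point is that once provisioning completes, every queued request for that model is serviced rather than only the first.

```python
from collections import deque

def service_provisioned_model(queue, provisioned_model):
    """Drain *all* queued launch requests for a model that has just
    finished provisioning; requests for other models stay queued.
    Illustrative sketch only -- assumed data model."""
    serviced, remaining = [], deque()
    while queue:
        request = queue.popleft()
        if request["model"] == provisioned_model:
            serviced.append(request)   # every matching request gets a session
        else:
            remaining.append(request)  # others wait for their own model
    queue.extend(remaining)
    return serviced
```

Under the old behavior, only the head of the queue would have been serviced, leaving later requests for the same model stranded even though the model was already provisioned.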