For Alpha 3.12, the Graphics Team mainly focused on improving stability and fixing bugs in the various graphics features utilized in the release. Many of the bug fixes related to the introduction of gas clouds, such as fixing a visible dither pattern when the sun is obscured by a cloud and preventing gas clouds and particles from clipping inside spaceships by improving both the volumetric culling and particle culling systems. Such issues were anticipated and largely unavoidable because, although the tech has been used extensively in the development of SQ42, the artwork and scenarios are quite different in the PU. Plus, the sandbox nature of the PU and the extensive testing it receives meant many previously unknown issues were discovered or raised in priority.
The team also managed to resolve dozens of other bugs ranging from popping shadows to over-bright camera exposure when a planet is streamed in. The proportion of time spent bug-fixing compared to new features was higher than we'd ideally have liked, but there's always an emphasis on stability and quality at the end of the year and feature work has already resumed, so this is not a concern. Despite the slowdown in feature work, we did manage to maintain good progress on the new Gen12 renderer, which will be our primary focus for early 2021.
The Physics Team worked on the volumetric soft body prototype as well as the related rendering of volumetric deformation. Moreover, various optimizations were made in physics. For example, we improved the threading of various subsystems, added faster spatial grid queries, removed contention when accessing the local command queue, and removed contention in the actor/living entity step functions (improving living entity step performance by a factor of 2-5x). We also implemented a better way to pre-physicalize the planet terrain patches used for collision checks. With regard to collision detection, we fixed a longstanding issue that could introduce additional ghost contacts far from where the actual contacts were being processed. Furthermore, improvements were made to event queueing. The first draft of propagating physicalized shockwaves was submitted, and box-shaped physics grids and bullet drag were added. SDF support was improved and research started on improvements to the setup of touch bending vegetation.
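To give a flavor of what a spatial grid query does, here is a minimal, illustrative Python sketch (the engine's actual implementation is C++ and far more sophisticated; `SpatialGrid` and its methods are hypothetical names). Objects hash into fixed-size cells, so a radius query only visits a handful of cells instead of testing every object in the world:

```python
from collections import defaultdict

class SpatialGrid:
    """Uniform grid: objects hash into cells so proximity queries
    only touch nearby cells instead of every object."""

    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = defaultdict(list)

    def _cell(self, x, y, z):
        s = self.cell_size
        return (int(x // s), int(y // s), int(z // s))

    def insert(self, obj_id, pos):
        self.cells[self._cell(*pos)].append((obj_id, pos))

    def query_radius(self, pos, radius):
        """Return ids of objects within `radius` of `pos`."""
        cx, cy, cz = self._cell(*pos)
        reach = int(radius // self.cell_size) + 1
        hits = []
        for dx in range(-reach, reach + 1):
            for dy in range(-reach, reach + 1):
                for dz in range(-reach, reach + 1):
                    for obj_id, p in self.cells.get((cx + dx, cy + dy, cz + dz), []):
                        if sum((a - b) ** 2 for a, b in zip(p, pos)) <= radius ** 2:
                            hits.append(obj_id)
        return hits
```

Making such queries faster (and lock-free where possible) matters because physics steps issue them constantly for every moving entity.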
On the renderer, we continued with our ongoing Gen12 transition and related refactoring. For instance, we added a parameterizable feature set for the deferred pipeline, implemented per-object resource set updates (including RTT updates for brushes) for Gen12 scene-rendering, and added legacy pipeline state caching (to save DX API calls while we're still transitioning fully to Gen12). In the shader system, we cleaned up all shader reloading code, which will improve shader editing during development and give a much-improved response when changing system spec settings (e.g. graphics settings that require the use of different shader combinations). We also started a larger refactor of the shader cache backend system, as it's quite outdated and a constant source of grief with regard to maintenance and Gen12 fitness. Several optimizations were made in the renderer code. For instance, the way material constants are uploaded to the GPU was simplified and optimized.
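The idea behind pipeline state caching can be sketched in a few lines. This is a loose, hypothetical Python illustration (not the engine's code; `PipelineStateCache` and `create_fn` are invented names): compiled state objects are memoized by their descriptor, so repeated draws with the same state skip the expensive API call entirely:

```python
class PipelineStateCache:
    """Caches compiled pipeline state objects by descriptor, so repeated
    draws with the same state skip the (expensive) creation call."""

    def __init__(self, create_fn):
        self.create_fn = create_fn  # stand-in for the driver/API call
        self.cache = {}
        self.misses = 0

    def get(self, descriptor):
        # Descriptor values must be hashable, e.g. shader/blend/depth names.
        key = tuple(sorted(descriptor.items()))
        if key not in self.cache:
            self.misses += 1
            self.cache[key] = self.create_fn(descriptor)
        return self.cache[key]
```

The payoff is that a scene drawing thousands of objects with a few dozen distinct states only pays the creation cost a few dozen times.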
On the graphics side, various fixes for depth-of-field were provided. The hair shader received several improvements, such as the ability to disable specular highlights on eyelashes, improved boundary occlusion at hairlines, support for ambient lights in forward shading, as well as improved hair quality during camera cuts. Dual lobe approximation for the skin shader was improved and the eye shader received a couple of improvements as well. As far as atmospheres, clouds, and the unified raymarcher go, the improvements mentioned in the previous postmortem are now available in Alpha 3.12. With that complete, most of the remaining time was spent on volumetric cloud rendering: the initial draft of the cloud renderer was implemented and work on volumetric cloud shadows made good progress. Work will commence on improvements to local cloud shaping. Note that there's still quite a lot of work to be done before release.
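At the heart of any volumetric raymarcher is the same basic loop: step through a density field, accumulate optical depth, and apply Beer-Lambert absorption. As a simplified 1D sketch (the real renderer works on the GPU in 3D with lighting, phase functions, and many optimizations; this function is purely illustrative):

```python
import math

def raymarch_transmittance(density_fn, start, end, steps=64):
    """March through a 1D density field, accumulating optical depth;
    returns how much light survives the traversal (Beer-Lambert)."""
    dt = (end - start) / steps
    optical_depth = 0.0
    t = start + dt * 0.5  # sample at midpoints of each step
    for _ in range(steps):
        optical_depth += density_fn(t) * dt
        t += dt
    return math.exp(-optical_depth)
```

Cloud shadows use the same accumulation, marched from a surface point towards the sun instead of from the camera.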
For the core engine system, we implemented a dynamic zone culling path in the zone system. We also fixed a few view distance-related culling bugs to do with pixel-sized objects that went into Alpha 3.11; people have already noticed that they can now see players over 1000 meters away instead of just a few hundred. A lot of bug fixes and improvements were provided for vis-areas, such as a fix for streaming meshes for animated vis-areas and the ability to add vis-areas to CGA joints. Radix tree culling was also improved, with its threading adjusted to reduce contention and SIMD culling implemented to check up to four bubbles against objects per node.

The entity system received several improvements and optimizations to avoid unnecessary updates and searches. Similarly, the entity aggregate manager received low-level optimizations to improve work balancing and reduce memory use and contention when working with entity bubbles. There were also a few smaller improvements made to the entity component update scheduler.

With regards to performance, progress continues on the engine profiler, which saw a lot of enhancements, and work on a real-time sampling profiler based on event traces will commence soon. Various optimizations were implemented in the entity system, component updates, and zone system. Based on telemetry from the PU and PTU, we continued our ongoing investigations into memory usage; as part of this, the engine-wide memory allocator and memory tracking system, including its toolchain, were substantially refactored and improved. To provide an additional performance boost to our servers, the Linux DGS was switched to a monolithic executable to allow the compiler to generate better code (thread-local storage access in particular). As part of our initiative to build a performance regression system, we also added a stress test for object container streaming.
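The pixel-sized-object culling fix comes down to a screen-space size test: an object is only worth drawing (or replicating) if its projected size exceeds some pixel threshold. A hedged sketch of the math, with made-up function names and default values chosen for illustration only:

```python
import math

def projected_pixels(radius, distance, fov_y_deg, screen_height):
    """Approximate on-screen diameter (in pixels) of a sphere of `radius`
    at `distance` from a perspective camera."""
    if distance <= 0:
        return float("inf")
    half_fov = math.radians(fov_y_deg) / 2
    return (2 * radius / distance) * (screen_height / (2 * math.tan(half_fov)))

def should_cull(radius, distance, fov_y_deg=60, screen_height=1080, min_pixels=1.0):
    """Cull objects whose projected size falls below the pixel threshold."""
    return projected_pixels(radius, distance, fov_y_deg, screen_height) < min_pixels
```

With these example numbers, a roughly player-sized object is still just under two pixels tall at 1000m, which matches the observation that players remain visible at that range.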
Regarding crash handling, we now capture a hex dump of the render thread's stack and embed it in the mini-dumps that you (optionally) send to us if the game crashes. This allows us to recover the full render thread call stack during postmortem debugging without needing the third-party binaries (such as the video driver) that might appear in the call stack in order to fully unwind it. This saves quite a bit of time, as we don't have to download the various drivers that players use.
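The report doesn't describe the dump format, but the round trip is conceptually simple: snapshot the raw stack memory, hex-encode it into the crash report, and decode it offline for unwinding against the matching symbol files. A purely illustrative Python sketch (function names are hypothetical; the real capture happens in native code inside the crash handler):

```python
import binascii

def capture_stack_hex(stack_bytes, limit=4096):
    """Hex-encode a raw copy of the crashing thread's stack memory so it
    can be embedded as text in a crash report."""
    return binascii.hexlify(stack_bytes[:limit]).decode("ascii")

def recover_stack(hex_blob):
    """Offline: turn the embedded hex back into raw bytes, ready to be
    unwound against the matching symbol files."""
    return binascii.unhexlify(hex_blob)
```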
On the animation side, we fixed code so that character blend shapes and the dynamic lighting rig don’t switch too late at every camera cut when rendering cutscenes.
Lastly, C++14 support was enabled across the client, server, editor, and relevant tool projects.
Load Balancing Test Framework
The persistence warmer for Alpha 3.12 received major updates. First and foremost, the warmer now uses the new JWT identification system, which allows it to fetch many tokens for impersonation purposes very rapidly. This gives a 10x increase in the number of warmer instances we can run at the same time.
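For readers unfamiliar with JWTs: a token is just a signed, self-contained claim ("this request acts as account X"), so a batch of them can be minted or fetched up front with no per-login round-trip. A minimal HS256 sketch using only the Python standard library (`mint_token`/`mint_batch` are invented names, not CIG's API):

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def mint_token(account_id: str, secret: bytes) -> str:
    """Create a compact HS256-signed token for one impersonated account."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps({"sub": account_id}).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def mint_batch(account_ids, secret):
    """Mint many tokens up front so each warmer instance can impersonate
    a different account without a round-trip per login."""
    return {a: mint_token(a, secret) for a in account_ids}
```

Because verification only needs the shared secret (or public key), backend services can validate these stateless tokens locally, which is what makes the approach scale.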
A major subsystem was added that enables the warmer to connect as a service to the diffusion gateway, allowing it to execute load scenarios that coordinate both as a client connected through the hub and as a service on the diffusion network.
Backend Concurrency Improvements
We were able to increase the performance of the variable service, loadouts, and the main persistence cache service. The stability of the backend increased greatly, surpassing the performance and reliability that we had in previous releases. Our low-level networking code was updated to improve performance, scalability, and robustness. We also made several fixes and optimizations to the transaction service, rentals, and our entitlement processor.
Unified and Centralized Logging
With our new unified centralized logging system, all services send formatted JSON messages to a centralized Elasticsearch database. Each log event is tagged and dynamic data such as account IDs, player IDs, etc. are extracted into named fields, which makes searching for events or specific fields – such as an “AccountID” – very efficient. This allows the devs to easily access logs from a centralized place and track complex messages and events happening between multiple services.
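The key property of such a pipeline is that each event is one JSON object with the dynamic values lifted into named fields rather than buried in a message string. A hedged sketch of what a formatter like this might look like using Python's `logging` module (CIG's services are not Python; the field names here are examples from the text):

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object with named fields so an
    indexer such as Elasticsearch can search on AccountID, PlayerID, etc."""

    def format(self, record):
        event = {
            "service": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
        }
        # Dynamic fields arrive via `extra=` and land on the record object.
        for field in ("AccountID", "PlayerID", "Tag"):
            if hasattr(record, field):
                event[field] = getattr(record, field)
        return json.dumps(event)
```

A query for `AccountID: "acct-42"` then hits an indexed field directly instead of grepping free-form text across every service's log files.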
Entity Creation & Spawn Decoupling
In preparation for persistent streaming, the entity creation process was decoupled from entity spawning. This allows us to “seed” the initial state of the universe into the persistent database by creating all dynamic entities prior to simulation. DGS processes will then stream persistent entities (spawn/despawn) from that database during simulation. This is an important steppingstone for a fully persistent universe.
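The decoupling can be pictured as two separate operations against the persistent database: creation writes a record without simulating anything, while spawning later instantiates that record on whichever server streams it in. A simplified, hypothetical sketch (class and method names are invented for illustration):

```python
class PersistentStore:
    """Stand-in for the persistent entity database."""

    def __init__(self):
        self.records = {}

    def create(self, entity_id, state):
        # Creation only writes the record; nothing is simulated yet.
        # Seeding the universe means running this for every dynamic entity.
        self.records[entity_id] = state

class GameServer:
    """Streams (spawns/despawns) persistent entities during simulation."""

    def __init__(self, store):
        self.store = store
        self.live = {}

    def spawn(self, entity_id):
        # Instantiate a live copy from the persistent record.
        self.live[entity_id] = dict(self.store.records[entity_id])

    def despawn(self, entity_id):
        # Write the latest state back before dropping the live instance.
        self.store.records[entity_id] = self.live.pop(entity_id)
```

Because the record exists independently of any server, an entity's state survives the server that spawned it, which is exactly what full persistence requires.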
Parallel Spawn Queues
As an optimization, we introduced multiple parallel spawn queues on each game server. This allows us to spawn multiple distinct locations (such as Lorville and New Babbage) in parallel on separate threads on the same server. Previous releases had a single queue, so (in this example) we wouldn't start on New Babbage until Lorville was fully spawned. On busy servers, this can significantly reduce wait times, for example, when spawning waves of AI ships or respawning in a hab.
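The structure is one work queue per location, each drained by its own worker thread. As a minimal illustration (the engine's job system is far more involved; `run_spawn_queues` and `spawn_fn` are hypothetical names):

```python
import queue
import threading

def run_spawn_queues(jobs_by_location, spawn_fn):
    """One worker thread per location queue, so e.g. Lorville and New
    Babbage spawn concurrently instead of back-to-back on one queue."""
    threads = []
    for location, jobs in jobs_by_location.items():
        q = queue.Queue()
        for job in jobs:
            q.put(job)

        def worker(location=location, q=q):  # bind per-queue values
            while True:
                try:
                    job = q.get_nowait()
                except queue.Empty:
                    return
                spawn_fn(location, job)

        t = threading.Thread(target=worker)
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
```

Keeping jobs for one location on one queue preserves ordering within that location while letting unrelated locations proceed independently.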
Time & De-Syncs
How the engine measures the passage of time underwent a complete overhaul. Accuracy was improved both in the measurement of time and in its synchronization between server and clients. How the engine uses time to update its logic and physics simulation was changed to eliminate errors that could result in simulation time passing differently on the server and clients. Many smaller bugs that had caused timing errors to grow on long-running servers were also fixed. The network synchronization of vehicles and physics objects was updated to take full advantage of the improvements. The accumulated result of all these changes was a significant reduction in latency and desynchronization issues in many areas, even under harsh test conditions for network and server performance. Besides improving the overall player experience, this work was a necessary step towards server meshing, where simulating the game across multiple game servers would have made desynchronization issues due to timing errors worse.
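One standard way to make simulation time advance identically on every machine (we can't say this is the engine's exact approach, but it illustrates the principle) is a fixed-timestep accumulator: variable frame times are banked, and the simulation is only ever stepped in identical fixed increments, so any two machines that process the same number of steps land on exactly the same simulation time:

```python
def advance_simulation(sim_time, accumulator, frame_dt, fixed_dt, step_fn):
    """Advance the simulation in fixed-size steps. Leftover frame time is
    carried in `accumulator`, so no time is lost or double-counted."""
    accumulator += frame_dt
    while accumulator >= fixed_dt:
        step_fn(fixed_dt)           # every machine steps by the same dt
        sim_time += fixed_dt
        accumulator -= fixed_dt
    return sim_time, accumulator
```

Drift between server and clients then reduces to agreeing on the step count, rather than reconciling machines that each integrated time slightly differently.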
In preparation for server meshing, the team performed a sweep of the remaining tasks to convert code to the Authority API. Over the last 12 months, there has been a coordinated effort by all teams to update the game and engine code to this new system. Thanks to their work, the large majority of these tasks have been completed. With a concerted push, we've reduced the number of remaining tasks to single digits.
In a server mesh, a client may connect to many different servers during a game session. Part of the work towards server meshing requires separating the process of connecting a client to a server into distinct stages. These stages can then be executed independently without requiring a client to completely abandon its existing game session. Significant progress has been made towards this although there is more work to be done.
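Separating the connection flow into stages can be modeled as a small state machine: crucially, leaving a game server drops back to an authenticated session rather than all the way to disconnected, so the client can join the next server without abandoning its session. The stage names below are invented for illustration, not CIG's actual terminology:

```python
from enum import Enum, auto

class ConnectionStage(Enum):
    DISCONNECTED = auto()
    AUTHENTICATED = auto()   # session established with the backend
    SERVER_JOINED = auto()   # bound to one game server
    STREAMED_IN = auto()     # entities streamed, player spawned

# Legal transitions; leaving a server keeps the session alive.
ALLOWED = {
    ConnectionStage.DISCONNECTED: {ConnectionStage.AUTHENTICATED},
    ConnectionStage.AUTHENTICATED: {ConnectionStage.SERVER_JOINED},
    ConnectionStage.SERVER_JOINED: {ConnectionStage.STREAMED_IN,
                                    ConnectionStage.AUTHENTICATED},
    ConnectionStage.STREAMED_IN: {ConnectionStage.SERVER_JOINED},
}

def transition(current, target):
    """Advance the connection state, rejecting illegal jumps."""
    if target not in ALLOWED[current]:
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```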
VP of Technology
See you in the ‘Verse!