AI
Character Combat kicks off February’s report, having spent the month making progress on combat behaviors, specifically iterating on the assault and hold-position tactics. They improved the Reservation Manager System, which is typically used to reserve objects for use in the world, by adding the ability for NPCs to reserve volumes of space to correctly position themselves and retain the knowledge of where the AI wants them to go. They began adjusting the AI system to accommodate server meshing. The main requirement for this is to split the responsibility of the different systems by server so that the Networking Team can correctly test and identify problems as soon as possible. They’re also tweaking the navigation system to better support the object container streaming level design workflow. This involves constructing a pipeline where navigation data is embedded in each container to improve control over which portions of data are loaded. Time was also dedicated to the routine code clean-up that’s crucial to keeping the ever-growing code base efficient and readable. This clean-up focused mostly on flight-related Subsumption tasks with heavily repeated (or ‘boilerplate’) code that could be abstracted away to reduce the likelihood of errors when implementing new tasks.
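To picture how such a volume reservation might work in practice, here’s a minimal sketch of a manager that tracks claims on regions of space so two NPCs never pick the same spot. Everything here (the names, the AABB overlap test, the API shape) is an illustrative assumption, not actual engine code:

```cpp
#include <unordered_map>
#include <cstdint>

// Hypothetical sketch in the spirit of the Reservation Manager described
// above; none of these names come from the actual engine.
struct AABB { float min[3]; float max[3]; };

using NpcId = std::uint32_t;
using ReservationId = std::uint64_t;

class VolumeReservationManager {
public:
    // Claim a region of space for an NPC; fails if it overlaps an
    // existing reservation held by another NPC.
    bool TryReserve(NpcId npc, const AABB& volume, ReservationId& outId) {
        for (const auto& kv : m_reservations)
            if (kv.second.owner != npc && Overlaps(kv.second.volume, volume))
                return false;
        outId = m_nextId++;
        m_reservations[outId] = { npc, volume };
        return true;
    }

    // Release a claim, e.g. once the NPC reaches its position or the
    // tactic (assault/hold position) changes.
    void Release(ReservationId id) { m_reservations.erase(id); }

private:
    struct Entry { NpcId owner; AABB volume; };

    static bool Overlaps(const AABB& a, const AABB& b) {
        for (int i = 0; i < 3; ++i)
            if (a.max[i] < b.min[i] || b.max[i] < a.min[i])
                return false;
        return true;
    }

    std::unordered_map<ReservationId, Entry> m_reservations;
    ReservationId m_nextId = 1;
};
```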
Ship AI completed the first iteration of the full 3D navigation system for NPC ships. The system is currently with QA for testing before it’s integrated into Alpha 3.9. This is an important step as it offers new opportunities to implement more diverse missions not limited to open-space environments. Thanks to dynamic collision avoidance, they were able to remove the current speed limits on all AI ships. Instead of a capped standard combat mode (SCM) speed, NPCs can now set their own according to the desired relative goal velocity.
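As a rough illustration of that change, the sketch below derives a ship’s speed from its goal’s velocity and lets only the collision-avoidance query limit it, rather than a fixed SCM cap. The function, its inputs, and the maths are all assumptions made for the example:

```cpp
#include <algorithm>
#include <cmath>

// Illustration of an AI ship deriving speed from its goal rather than a
// hard-coded SCM cap, with dynamic collision avoidance as the only limiter.
struct Vec3 { float x, y, z; };

static float Length(const Vec3& v) {
    return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
}

// goalVelocity:   velocity of the target the NPC wants to match/intercept.
// closingBias:    extra speed used to close distance to the goal.
// avoidanceLimit: max safe speed reported by the collision-avoidance query
//                 for the current heading (e.g. based on time-to-impact).
float ChooseShipSpeed(const Vec3& goalVelocity,
                      float closingBias,
                      float avoidanceLimit) {
    const float desired = Length(goalVelocity) + closingBias;
    // No fixed SCM cap: only collision avoidance constrains the result.
    return std::min(desired, avoidanceLimit);
}
```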
Social AI continued polishing vendor behavior, readying it for use in the bar setups and shops. Last month’s focus was preparing and delivering drinks and cocktails – a multi-disciplinary effort that will define the pipeline for all future social environment work in the PU. When finished, the team will be able to generate content and create templates for multiple layouts much quicker. Currently, the vendor can pick up and deliver anything that already exists in the world, such as a bottle of beer, and prepare many things that don’t, such as a cocktail. They’re also working on other social behaviors, such as fitness and janitor activities. Both leverage the in-development ‘path scripting’ functionality. Its purpose is to use the environment to define the AI’s possible options, such as cleaning and patrolling, and it can also be used to simplify the scripting of story scenes. It works by allowing a designer to lay down paths in the environment, create branches for each vertex, and define which Subsumption logic can be executed and which conditions should be met.
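Given that description, a plausible (purely illustrative) data layout could pair each path vertex with its branches, each carrying a Subsumption task name and the condition gating it:

```cpp
#include <string>
#include <vector>
#include <functional>

// Hypothetical layout for 'path scripting': designers lay down paths,
// each vertex can branch, and each branch names the Subsumption logic to
// run plus the condition gating it. Identifiers are illustrative only.
struct PathBranch {
    int targetVertex;                 // index of the next vertex
    std::string subsumptionTask;      // e.g. "Janitor.CleanSpot"
    std::function<bool()> condition;  // must hold to take this branch
};

struct PathVertex {
    float position[3];
    std::vector<PathBranch> branches; // options available at this point
};

struct ScriptedPath {
    std::vector<PathVertex> vertices;

    // Pick the first branch whose condition holds; -1 means stay/idle.
    int NextVertex(int current) const {
        for (const auto& b : vertices[current].branches)
            if (b.condition && b.condition())
                return b.targetVertex;
        return -1;
    }
};
```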
Art (Characters)
The Character team’s February focus was on hair, uniforms, and quality. They completed two new hairstyles, including that of Captain Noah White. They thoroughly tested the Maya tools for hair authoring and are extremely pleased with the results and how the tool’s holding up to full production. They’re currently working on blonde hair, investigating how it behaves with the current shader; the aim is to make it as realistic a shade as possible. They also continued making passes on character heads and have started work on the rest of the Bridge Officer Uniform assets. Next on the list are battledress uniforms, which are expected to take two months to complete in their entirety.
Cinematics
“‘Real-time, all the time’ isn’t just a mantra that veteran real-time engine users appreciate, it’s often the reason they fell in love with the workflows and engine in the first place. It enables artists and designers to quickly iterate without having to resort to offline build processes or tools incapable of giving real-time results, so creative decisions are instant rather than reliant on delayed feedback loops. During tools production, we have to write this mantra on a battle banner and wave it every time new tech is built. With a technologically challenging and beyond cutting-edge game universe, the push to keep things in real-time can become complex. For example, in cinematics, it’s orders of magnitude easier to resort to a traditional offline rendering workflow. Offline, you can crank up the detail, lighting, shadows, motion blur, LODs, and depth of field to the max before rendering out an image sequence and cutting the final sequence together in video-editing software. However, the end result is a video with fixed resolution and bitrate that then has to be added to the build using a video codec, which can make it obvious that elements or full sequences aren’t real-time. To maintain authenticity and immersion in SQ42, the goal has always been to push for real-time cinematics during even the biggest set-piece sequences.”
- The Cinematics Team
February saw Cinematics push one of the most challenging sequences through its second pass, locking down the timing before starting VFX and destruction simulation. The upgraded TrackView sequencer tool was completed, finally enabling the team to edit these long, epic sequences in real-time. They added several sub-sequence abilities and camera priority logic based on which sequence sits on top. These sub-sequences behave like video clips, making it possible to edit complex setups in real-time, shifting each one around on the master timeline. For example, the team can have individual capital ships and exterior fleets on separate sequences and combine them all in a master, with the ability to quickly decide where they want the camera cuts to happen. Another challenge is getting the streaming system working when cutting back and forth between several distant locations. Most game-streaming tech is built with the assumption that shots aren’t constantly cutting back and forth between different locales, so getting this polished is the upcoming focus for the team’s workflow.
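The sub-sequence idea can be pictured with a small model: sub-sequences sit on the master timeline like clips, and the topmost active one provides the camera. The sketch below is a hypothetical reconstruction, not actual TrackView code:

```cpp
#include <string>
#include <vector>

// Illustrative model of sub-sequences placed on a master timeline like
// video clips, with the camera taken from the topmost active sequence.
struct SubSequence {
    std::string name;        // e.g. "CapitalShip_A", "ExteriorFleet"
    float start, end;        // placement on the master timeline
    int trackLayer;          // higher layer sits 'on top'
    std::string activeCamera;
};

struct MasterSequence {
    std::vector<SubSequence> subs;

    // Camera priority: of all sub-sequences active at 'time', the one on
    // the highest layer provides the camera for the cut.
    const SubSequence* ResolveCamera(float time) const {
        const SubSequence* best = nullptr;
        for (const auto& s : subs) {
            if (time < s.start || time > s.end) continue;
            if (!best || s.trackLayer > best->trackLayer) best = &s;
        }
        return best;
    }
};
```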
Engineering
In Frankfurt, Engineering worked on physics threading and performance, including investigating the full parallelization of the integration parts of the physics time-step code, multithreading the polygonize function for interior volumes, continuing concurrent/immediate queuing for physics, implementing local command queues, and adding an option to create queues on demand. They began finalizing and integrating the new signed distance field (SDF) system to accelerate collision checks on complex geometry and increase the precision of results. Unified fields can now be animated like any other part (they’re still special in some regards, but can be treated as a regular part from the outside), and distance fields on objects can be taken into account when sampling distances.
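In broad strokes, an SDF accelerates collision checks by turning them into cheap lookups: signed distances to the surface are precomputed on a grid, and a sampled distance smaller than an object’s radius means contact. The toy below assumes a nearest-cell lookup; the engine’s real system is certainly more sophisticated:

```cpp
#include <vector>
#include <algorithm>

// Minimal sketch of the SDF idea: precompute signed distances to the
// surface on a 3D grid, then a collision check becomes a cheap sample
// (negative = inside the surface). All details are assumptions.
struct SignedDistanceField {
    int nx, ny, nz;           // grid resolution
    float cellSize;           // world units per cell
    float origin[3];          // world-space corner of the grid
    std::vector<float> dist;  // nx*ny*nz signed distances

    float Sample(float x, float y, float z) const {
        // Nearest-cell lookup; a production version would interpolate.
        int i = std::clamp(int((x - origin[0]) / cellSize), 0, nx - 1);
        int j = std::clamp(int((y - origin[1]) / cellSize), 0, ny - 1);
        int k = std::clamp(int((z - origin[2]) / cellSize), 0, nz - 1);
        return dist[(k * ny + j) * nx + i];
    }

    // A sphere collides when the field distance at its centre is smaller
    // than its radius.
    bool SphereOverlaps(float x, float y, float z, float radius) const {
        return Sample(x, y, z) < radius;
    }
};
```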
Optimization-wise, they disabled physics on skin attachments and made OC and SDF size improvements and allocation optimizations for interior volumes. They enabled foliage to support 64-bit precision status updates and zones, made the render proxy for ropes work with the zone system, continued work on death reaction animations, and provided support for actor/ragdoll body dragging.
Work was also done on the zone system, and a culling update thread was introduced to parallelize updates of spatial culling structures. The team optimized streaming by skipping many unnecessary atomic operations on smart pointer copies and reduced the parallelization of occlusion culling. They also improved the concurrent access check (adding support for checking RW-lock semantics) and made WAF improvements.
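The smart-pointer optimization is a familiar C++ pattern worth spelling out: every shared_ptr copy performs an atomic reference-count increment and a matching decrement, so hot loops can borrow by const reference instead of copying. A generic example (not engine code):

```cpp
#include <memory>
#include <vector>

// Illustration of the kind of optimisation described: each shared_ptr
// copy does an atomic refcount increment/decrement, which adds up in hot
// streaming loops, so hot paths borrow instead of copying.
struct Asset { int id = 0; };

// Pays two atomic ops per call (copy in, destroy on return).
void ProcessByValue(std::shared_ptr<Asset> a) { (void)a; }

// Borrows the pointer: no refcount traffic at all.
void ProcessByRef(const std::shared_ptr<Asset>& a) { (void)a; }

void StreamTick(const std::vector<std::shared_ptr<Asset>>& pending) {
    for (const auto& asset : pending)   // const auto&: no per-item copy
        ProcessByRef(asset);            // safe: 'pending' outlives the call
}
```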
In support of the Gen12 renderer and Vulkan, they generalized texture views, refactored the texture creation process and remaining D3D texture view code, added MSAA support, removed old direct VB/IB memory access code, added ViewInfo for access to common view/camera-related parameters, and looked into moving shader parsing into the resource container. The first improvement pass for hair AO was made, which is now coupled with color, and dye-shift softness was introduced. Ocean shader improvements were also made by porting ocean rendering to deferred and further working on cube map reflections.
The raymarcher was simplified by removing the explicit evaluations of segments to significantly compact generated code and reduce register usage. Non-linear stepping for raymarching was also implemented to improve quality at a reduced number of samples. Planet terrain was iterated on to support the rendering of large ground-based objects on the height and shadow maps. A persona live mo-cap plugin for the editor was created too. Finally, animation code was amended to ensure ragdolls work with ground alignment and the CHR chunk loader was simplified to initialize physics (which is still ongoing).
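Non-linear stepping typically means growing the step size with distance so samples concentrate near the camera, where detail matters most. The toy transmittance march below uses geometric step growth; the density function and constants are placeholders, not the engine’s raymarcher:

```cpp
#include <cmath>

// Toy illustration of non-linear raymarch stepping: the step size grows
// geometrically with distance, so fewer total samples are needed for the
// same perceived quality.
float SampleDensity(float t);  // density along the ray at distance t

float MarchTransmittance(float tStart, float tEnd,
                         float firstStep, float growth /* > 1 */) {
    float transmittance = 1.0f;
    float t = tStart;
    float step = firstStep;
    while (t < tEnd) {
        const float dt = std::fmin(step, tEnd - t);
        transmittance *= std::exp(-SampleDensity(t + 0.5f * dt) * dt);
        t += dt;
        step *= growth;   // non-linear: later samples cover more distance
    }
    return transmittance;
}

// Placeholder density: a thin homogeneous layer.
float SampleDensity(float t) {
    return (t > 10.0f && t < 60.0f) ? 0.05f : 0.0f;
}
```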
Specifically for SQ42, the team completed their work on the interrupt, rejoin, and abandon functionality for story scenes. They’re currently fixing various edge cases and visual problems, such as animation glitches, and adding a new system to give the designers better control over when and how events are triggered. Engineering also supported the cinematic designers with their TrackView requirements, making sure everything works in the intended manner.
Gameplay Story
February was a busy but productive month for Gameplay Story, who worked on a wide variety of scenes, including supporting Design with animations for the abandon mechanic. They also reviewed NPC conversations with Design and Narrative, adding extra dialogue to make them more natural. All med-bay scenes in chapter six were updated to add proper start and end idles wherever possible. This was an extensive overhaul, resulting in conversations that can be triggered with much greater control. This will ensure the scenes look good and play out with the desired pacing. A scene with two characters carrying a large box up a ramp was completed after considerable mo-cap stitching and is currently looking great. Finally for Gameplay Story, new actor records and loadouts were created so several scene setups can be finalized.
Graphics
The Graphics Team continued crucial work on the Gen12/Vulkan renderer, converting the tiled-lighting and dynamic geometry systems to the new rendering paradigm. They also made various improvements to the automated testing system necessary to ensure stability for the designers and artists.
Level Design
The Social Team, in collaboration with Animation and Engineering, continued work on narrative sequences and the abandon functionality. They’re currently focusing on early full-chapter playthroughs that introduce players to the life of a UEEN pilot. They continue to work closely with the FPS AI Feature Team to place and refine AI behaviors. One of the main aims is to have NPCs behave realistically, regardless of the environment they’re placed in. The game’s levels are designed to be architecturally believable spaces, so the AI needs to adapt to them in believable ways. The Dogfighting Team are working with Flight AI to refine scenarios that require custom engineering support. They’re also supporting the ongoing ambient ship-life work and take-off/landing updates.
Narrative
Some of the US-based Narrative Team visited the UK studios to review the latest level progress and oversee a female-character mo-cap session. They also reviewed the smaller NPC conversations with the Gameplay Story Team to ensure they’re as believable as possible.
QA
Combat and Ship AI test cases were implemented, which the team will start testing regularly. They’re still tasked with creating recorded client scene captures of individual chapters and are currently investigating issues hindering the cinematics workflow.
Tech Animation
Tech Animation improved visuals for character loadouts alongside Character Art and Cinematic Design. They added engine loadouts to Maya to enable the animators to complete their polish pass on the same loadout the characters will wear in-game. The animation workflow was improved by fixing several bugs and adding tools to the pipeline to speed up the bake process for bigger scenes.
Several new weapons reached Tech Animation, while the Social AI Team was supported with new usables for sitting, standing, exercise spots, and the bartender. They also worked with Props to solve problems with their rigger tool and fixed bugs for Design, Animation, and Art.
Tech Art
Last month, Tech Art implemented a new rig-logic editing plugin command for Maya. For performance and efficiency, the backend was implemented in C++, while the Qt-based frontend/UI layer is currently implemented in Python. While the Vanduul face rigs already use this new runtime rig logic, all 120+ human face rigs will soon be ported over to the new system.
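For readers unfamiliar with Maya’s C++ API, such a command generally follows the MPxCommand pattern sketched below. The class and command names are invented here, and the actual rig-logic editing code is of course not public:

```cpp
// Minimal skeleton of a C++ Maya plugin command (the general pattern a
// rig-logic editing command would follow). Names are made up.
#include <maya/MPxCommand.h>
#include <maya/MFnPlugin.h>
#include <maya/MArgList.h>
#include <maya/MGlobal.h>

class RigLogicEditCmd : public MPxCommand {
public:
    MStatus doIt(const MArgList& /*args*/) override {
        // A real implementation would parse the arguments and edit the
        // rig-logic data; the heavy lifting stays in C++ for performance,
        // while a Python/Qt UI invokes this command.
        MGlobal::displayInfo("rigLogicEdit: invoked");
        return MS::kSuccess;
    }
    static void* creator() { return new RigLogicEditCmd(); }
};

MStatus initializePlugin(MObject obj) {
    MFnPlugin plugin(obj, "Example", "1.0", "Any");
    return plugin.registerCommand("rigLogicEdit", RigLogicEditCmd::creator);
}

MStatus uninitializePlugin(MObject obj) {
    MFnPlugin plugin(obj);
    return plugin.deregisterCommand("rigLogicEdit");
}
```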
User Interface (UI)
February saw UI helping the Actor Team finalize visuals for the Inner Thought radial menu. After a lot of concepting and UI design, the new SQ42-specific visor and visual target were approved. Prototyping is ongoing before code work begins next month.
VFX
The UK’s VFX Team closed out the month tweaking several existing background effects and taking advantage of recent improvements to the particle system’s parent/child hierarchy. Investigation into further methods of triggering particle effects through gas tech was started too.
In Frankfurt, the team worked with the programmers to lock down the final list of features required for SQ42. This included fixing long-standing bugs in the GPU system, including inconsistencies with parent/child spawning, to give a much deeper hierarchy and more complex effects. This was tested with firework VFX that pushed the system to its limits and helped uncover previously unknown bugs.
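The parent/child spawning concept itself is simple to model: each particle can name a child effect and spawn its children at its own position, so hierarchies nest arbitrarily deep, which is exactly what a firework (shell, burst, trails, sparks) exercises. The CPU-side toy below is only a model of the idea; the real system runs on the GPU:

```cpp
#include <vector>

// CPU-side toy of the parent/child spawning idea: each particle can name
// a child effect and emits its children at its own position on expiry,
// allowing arbitrarily deep hierarchies. All fields are illustrative.
struct Particle {
    float pos[3] = { 0.0f, 0.0f, 0.0f };
    float age = 0.0f, lifetime = 1.0f;
    int childEffect = -1;   // index into an effect table; -1 = no children
    int childCount = 0;     // children to emit when this particle expires
    int depth = 0;          // hierarchy depth (shell -> burst -> spark...)
};

// 'parent' is taken by value because push_back may reallocate the vector
// and would otherwise invalidate a reference into it.
void SpawnChildren(std::vector<Particle>& particles, Particle parent) {
    if (parent.childEffect < 0) return;
    for (int i = 0; i < parent.childCount; ++i) {
        Particle c;
        c.pos[0] = parent.pos[0];   // children inherit the parent position
        c.pos[1] = parent.pos[1];
        c.pos[2] = parent.pos[2];
        c.depth = parent.depth + 1;
        // c.childEffect / c.childCount would come from the effect table,
        // letting children spawn grandchildren and so on.
        particles.push_back(c);
    }
}
```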