May 13th 2020

Squadron 42 Monthly Report: April 2020


This is a cross-post of the report that was recently sent out via the monthly Squadron 42 newsletter. We’re publishing it a second time as a Comm-Link to make it easier for the community to refer back to.

Attention Recruits,

What you are about to read is the latest information on the continuing development of Squadron 42 (SCI des: SQ42).

Our top agents went into deep cover and managed to extract information on work on the Javelin’s weapons and engineering sections, as well as key info on the next step of cinematics.
The information contained in this communication is extremely sensitive and it is of paramount importance that it does not fall into the wrong hands. Purge all records after reading.

UEE Naval High Command

AI (Combat)


The Combat AI Team started the month improving stability and fixing bugs for Alpha 3.9. Afterwards, they began work on realistic firing, initially tackling reload actions for NPCs. They extended the actor state machine to report the results of reloading actions so that NPC behaviors are notified and branch correctly in the decision-making process.

They’re currently working on tracking each NPC’s remaining ammunition so that reloading is triggered at appropriate times. This includes extending the current combat behavior to let NPCs evaluate the best fire mode for the selected tactic: for example, choosing a mode that consumes less ammo, switching to a different weapon, or searching the environment for ammo if they run out. They also created the base for the Vanduul character, starting with the entity setup, evaluation of the available locomotion animations, and priorities for the lance AimIK.
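
As a rough sketch of how this kind of ammo-driven decision might be expressed in code (hypothetical types and values, not taken from the actual SQ42 codebase):

```cpp
// Hypothetical fire-mode descriptor, purely for illustration.
enum class FireMode { Single, Burst, FullAuto };

struct WeaponState {
    int      roundsInMagazine = 0;
    int      roundsInReserve  = 0;
    FireMode currentMode      = FireMode::FullAuto;
};

// Rough ammo cost per trigger pull for each mode (illustrative values).
static int RoundsPerTrigger(FireMode mode) {
    switch (mode) {
    case FireMode::Single: return 1;
    case FireMode::Burst:  return 3;
    default:               return 10; // approximate a sustained full-auto volley
    }
}

enum class CombatAction { Fire, Reload, SwitchWeapon, SearchForAmmo };

// Decide what the NPC should do next based on remaining ammunition.
CombatAction EvaluateAmmoAction(const WeaponState& weapon, bool hasSecondaryWeapon) {
    if (weapon.roundsInMagazine == 0) {
        if (weapon.roundsInReserve > 0)  return CombatAction::Reload;
        if (hasSecondaryWeapon)          return CombatAction::SwitchWeapon;
        return CombatAction::SearchForAmmo;
    }
    return CombatAction::Fire;
}

// Prefer the cheapest fire mode the current magazine can still sustain.
FireMode ChooseFireMode(const WeaponState& weapon) {
    if (weapon.roundsInMagazine >= RoundsPerTrigger(FireMode::FullAuto)) return FireMode::FullAuto;
    if (weapon.roundsInMagazine >= RoundsPerTrigger(FireMode::Burst))    return FireMode::Burst;
    return FireMode::Single;
}
```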


AI (Ships)


April saw Ship AI complete their rework of the Defend Target AI, which involves a leader and one or more escort ships:

When out of combat, the in-formation escorts fly alongside the leader and only enter combat when threatened. When flying out-of-formation, the escorts will join the formation if allowed to by the leader. If no formation is supported or all positions are reserved, the escort will utilize the ‘FollowAtDistance’ behavior, where the AI selects a direction and distance relative to the leader and maintains a loose position. Periodically, all escorts will change position or leave the formation depending on the situation.

When in combat, escorts engage enemies governed by an assignment range that can be different for each ship. However, ‘engage’ doesn’t initiate a dogfight immediately and leave the leader undefended; engagement happens gradually. First, escorts gravitate around the leader to ensure they’re between the leader and target. If the enemy is closer (also defined by the assignment), the escorts engage in a dogfight. The overall flow ensures an escort is always in proximity of the leader to protect it and react if other attackers intervene. It’s possible to differentiate the defend dynamic by assigning different engage-ranges to escorts or adjusting how tightly they shadow the leader.
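
The flow described above can be summarized in a short sketch; the type and field names below are illustrative assumptions, not the real Ship AI implementation:

```cpp
#include <cmath>

struct Vec3 { float x = 0.f, y = 0.f, z = 0.f; };

static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Add(const Vec3& a, const Vec3& b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }
static float Length(const Vec3& v)             { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static Vec3  Normalize(const Vec3& v)          { const float l = Length(v); return l > 0.f ? Scale(v, 1.f / l) : Vec3{}; }

// Hypothetical per-escort assignment data.
struct EscortAssignment {
    bool  formationSlotAvailable = false;       // granted by the leader
    Vec3  formationOffset{};                    // slot offset when in formation
    Vec3  looseOffset{ 250.f, 50.f, 150.f };    // 'FollowAtDistance' offset, re-rolled periodically
    float engageRange    = 2000.f;              // commit to a dogfight inside this range
    float screenDistance = 300.f;               // how far from the leader to screen
};

enum class EscortIntent { FlyFormation, FollowAtDistance, ScreenLeader, Dogfight };

struct EscortDecision {
    EscortIntent intent;
    Vec3         desiredPosition;
};

EscortDecision UpdateEscort(const Vec3& leaderPos, const Vec3* threatPos,
                            const EscortAssignment& a) {
    // Out of combat: take a formation slot if the leader allows it,
    // otherwise hold a loose position relative to the leader.
    if (!threatPos) {
        if (a.formationSlotAvailable)
            return { EscortIntent::FlyFormation, Add(leaderPos, a.formationOffset) };
        return { EscortIntent::FollowAtDistance, Add(leaderPos, a.looseOffset) };
    }

    // In combat: only break into a dogfight once the threat is inside the
    // escort's engage range; otherwise position between the leader and the threat.
    const Vec3 toThreat = Sub(*threatPos, leaderPos);
    if (Length(toThreat) <= a.engageRange)
        return { EscortIntent::Dogfight, *threatPos };

    return { EscortIntent::ScreenLeader,
             Add(leaderPos, Scale(Normalize(toThreat), a.screenDistance)) };
}
```

Giving each escort a different engageRange and screenDistance is what differentiates the defend dynamic described above.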

The Defend Target mechanic is also useful for implementing simple convoy behaviors for shipments and peaceful operations.

Ship AI also worked on the ongoing refactor of the Tactical Point system and the creation of the Tactical Targeting system. They’re currently in the last development cycle and are investigating ways to express variations of TTS queries to cope with the mission or scene at hand. They introduced the concept of optional overrides on query parameters, conditions, and weights that are only relevant and applied based on the fulfillment of a specific requirement (such as a tag set on the actor). So, they can define a base query that will be used normally but also introduce variations of specific parts that only affect characters specific to a single scene. For example, encouraging an NPC to target ships with low shields or with a criminal rating instead of the standard selection defined in the base query.
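
A minimal sketch of the base-query-plus-overrides idea, assuming hypothetical structure and tag names rather than the real Tactical Targeting system API:

```cpp
#include <optional>
#include <set>
#include <string>
#include <vector>

// Illustrative targeting weights; not taken from the actual system.
struct TargetingWeights {
    float preferLowShields     = 0.f;
    float preferCriminalRating = 0.f;
    float preferNearest        = 1.f;
};

struct QueryOverride {
    std::string                     requiredActorTag; // e.g. a tag set only on one scene's characters
    std::optional<float>            maxRange;         // only applied if set
    std::optional<TargetingWeights> weights;          // replaces the base weights if set
};

struct TargetingQuery {
    float                      maxRange = 5000.f;
    TargetingWeights           weights;
    std::vector<QueryOverride> overrides;
};

// Resolve the query for one actor: start from the base query and apply every
// override whose required tag the actor actually carries.
TargetingQuery ResolveForActor(const TargetingQuery& base, const std::set<std::string>& actorTags) {
    TargetingQuery resolved = base;
    for (const QueryOverride& o : base.overrides) {
        if (actorTags.count(o.requiredActorTag) == 0)
            continue;
        if (o.maxRange) resolved.maxRange = *o.maxRange;
        if (o.weights)  resolved.weights  = *o.weights;
    }
    return resolved;
}
```

Characters without the tag simply get the unmodified base query, which matches the idea of variations that only affect a single scene.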


AI (Social)


The Social AI Team spent time improving the performance of the Usable Cache Manager. They added the ability to time-slice the main update to make it easier to control the update usage of different jobs, and began using queued physical ray-casting to validate which usable enter/exit locations are available and not blocked by other objects.
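
For illustration, a time-sliced update with queued (non-blocking) ray-cast requests might look roughly like this; all names here are assumptions, not the actual Usable Cache Manager code:

```cpp
#include <chrono>
#include <cstddef>
#include <vector>

// Hypothetical usable-cache entry; only the parts needed for the sketch.
struct UsableEntry {
    bool validationPending = true;
    void RequestQueuedRayCast() { /* queue an async physics ray-cast; the result arrives later */ }
};

// Time-sliced update: process entries until the per-frame budget is spent,
// then resume from the same index next frame.
class UsableCacheUpdater {
public:
    explicit UsableCacheUpdater(std::vector<UsableEntry>& entries) : m_entries(entries) {}

    void Update(std::chrono::microseconds budget) {
        const auto start = std::chrono::steady_clock::now();
        while (m_cursor < m_entries.size()) {
            UsableEntry& entry = m_entries[m_cursor];
            if (entry.validationPending) {
                entry.RequestQueuedRayCast(); // never block the main update on physics
                entry.validationPending = false;
            }
            ++m_cursor;
            if (std::chrono::steady_clock::now() - start >= budget)
                return; // out of budget this frame; continue next frame
        }
        m_cursor = 0; // full pass finished, start over
    }

private:
    std::vector<UsableEntry>& m_entries;
    std::size_t               m_cursor = 0;
};
```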

They started work on compound usables, which can inherit functionalities from other usables and override data based on specific setups. For example, maintenance or inspection operations on a Gladius might use generic animations in most locations, but a more complex section of the ship may require specific Mannequin tags.
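
The inheritance-with-overrides idea can be sketched roughly as follows, using hypothetical field names purely for illustration:

```cpp
#include <optional>
#include <string>

// Hypothetical usable definition; unset fields fall back to the parent usable.
struct UsableDefinition {
    const UsableDefinition*    parent = nullptr;  // usable this one inherits from
    std::optional<std::string> enterAnimation;    // e.g. a generic inspection clip
    std::optional<std::string> mannequinTag;      // e.g. a ship-section-specific tag
};

// Walk up the inheritance chain until a value is found.
template <typename Field>
const std::optional<Field>& Resolve(const UsableDefinition& usable,
                                    std::optional<Field> UsableDefinition::*field) {
    const UsableDefinition* current = &usable;
    while (current) {
        if ((current->*field).has_value())
            return current->*field;
        current = current->parent;
    }
    static const std::optional<Field> empty;
    return empty;
}

// Usage sketch: a compound usable that only overrides the Mannequin tag.
// UsableDefinition genericInspection{ nullptr, "inspect_generic", std::nullopt };
// UsableDefinition gladiusEngineBay{ &genericInspection, std::nullopt, "gladius_engine_bay" };
// Resolve(gladiusEngineBay, &UsableDefinition::enterAnimation); // -> "inspect_generic"
```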

Functionality and visual quality were further developed for vendors, such as the Bartender. Work also explored allowing designers to set up proc clips that stow and unstow objects from a specific inventory pocket. They also extended the Communication system to allow designers to tag different options relating to the same contextual voice line. This will allow a generic behavior to automatically select the right option for a specific context: for example, a shop patron can use a generic ‘order’ communication line and pass the tag of the object they want to order, which selects the relevant voice line and animation. If no tagged option is present, a generic one is selected instead. This, combined with a pass on wildlines and secondary sub-activities for the vendor behavior, allowed the team to significantly improve the visual fidelity of the behaviors.
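
The tag-based selection with a generic fallback might be sketched like this (illustrative names only, not the real Communication system API):

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical communication line entry: one voice/animation pair per context tag.
struct CommLine {
    std::string voiceAsset;
    std::string animationFragment;
};

using CommOptions = std::map<std::string, CommLine>; // key: context tag, "" = generic option

// Pick the option matching the context tag (e.g. the ordered item),
// falling back to the generic option when no tagged line exists.
std::optional<CommLine> SelectCommLine(const CommOptions& options, const std::string& contextTag) {
    if (auto it = options.find(contextTag); it != options.end())
        return it->second;
    if (auto it = options.find(""); it != options.end())
        return it->second;
    return std::nullopt;
}

// Usage sketch: a patron ordering a drink passes the tag of the object they want.
// CommOptions orderLines = {
//     { "drink_a", { "vo_order_drink_a", "anim_order_point"   } },
//     { "",        { "vo_order_generic", "anim_order_generic" } },
// };
// SelectCommLine(orderLines, "drink_b"); // no tagged line -> generic option
```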

They’re currently integrating more functionalities into consumable objects to create an additional layer of customization to the flow, such as enabling AI to differentiate between open and closed bottles and act accordingly.

Focus was also given to improving several aspects of the patrol path tech. This involved moving some of the parallel functionality’s trigger logic to the movement system, which is a big change for several reasons: the team can now send simple requests relating to a designer-authored path to the movement system, making the subsumption task much simpler; they can use the same approach started for complex usables, such as closed doors, elevators, and the transit system; and the movement planner is now responsible for analyzing the data and creating a plan that can be as complex as needed.


Animation


Last month, Animation supported Social AI with their ship hangar work, general inspection places, bar patron idles and actions, bridge control seats, and lockers. They also worked on reloading and malfunction animations for all weapon classes and progressed with the Vanduul enemy combat style. Development also continued on stumbles, knockdowns, body dragging, and what happens when a player is injured or incapacitated.

The Mo-Cap Team improved the MotionBuilder skeleton, which will help the wider team create assets faster, and the ‘take selection’ tool, which will improve post-shoot turnaround. Tools were also created to enable quick data renaming and video conversion. Several team members are temporarily supporting other related areas of development due to the suspension of all mo-cap shoots, including making headway on the large number of comms calls used throughout the campaign.


Art (Characters)


Art continued their ongoing hair work, which included remodeling some existing hairstyles and working on new shorter styles. They also further improved the fidelity of blonde hair using the latest iteration of the shader.

They’re currently finalizing officer uniforms and will move on to others in the coming weeks. Thanks to the multitude of ranks in the UEE Navy, around fifteen outfits will be created.

Our focus is ensuring that details hold up during cinematic close-ups, as we do not use separate cinematic meshes. What you see in the game is what you get in the cutscene. By the end of this, they will be looking better than ever.
- Character Art Team


Art (Environment)


Last month, the Environment Art Team expanded the Javelin’s engineering section, a change that came about after playtesting one of the chapters that takes place on the ship. Lighting work also continued across the ship for this chapter, utilizing the new real-time cubemap feature used for light-state transitions.

They also continued to build up the Aciedo comms array set. The player visits several arrays during the first section of the campaign, so the set needs to serve as the base for multiple stations before the unique art and assets can be added to each. The truck stops are progressing well, with impact damage being introduced to add to the feel of long-term abandonment. Bear in mind these aren’t the truck stops players are familiar with from the PU; all of SQ42’s environments are unique to the campaign.

The team also fleshed out the hangar interiors on one of the mining bases. This is the largest hangar in the game and needs to carry out multiple functions for the station and passersby. Important story moments take place here, which gave the team the opportunity to dress such a large space with interesting narrative-relevant props. Work also began on two new unique locations: one Brutalist in design, the other more like a shantytown.


Art (Props)


April saw the Props Team looking into how to approach dressing some of SQ42’s more unique locations. This involved focusing on the dressing asset materials to bed them into the environment and completing some of the storytelling assets from the Narrative Team.

They created new maintenance and electrical spares and revisited the hangar assets to improve their visual quality and prep them for AI interactions. Finally, several props were prepared and rigged in Maya for the animators to use in gameplay story scenes.


Art (Weapons)


The Ship Weapons Team focused on the Javelin’s built-in arsenal, including a new Behring Gatling cannon and a considerably sized Behring torpedo.


Cinematics


In April, the Cinematics Team shifted from scene implementation to the production pass. There are five stages in the pipeline: Pre-vis, Kick-Off, Implementation, Production, and Finalization. After the implementation pass, scenes will work in-game with the correct root for all NPCs, timing and editing are synced for all scene members, the number of animation fragments is decided so the player can interact correctly, poses are roughly matched, and the correct markup for abandon/interrupt is created. While the designers are marking up the abandon/interrupt points, the animators can make their production pass. This stage is all about final pose matching, fixing the remaining small gaps in animation, starting proper polish, and making sure the scene state machine is delivered. Once done, mo-cap polish is tackled by the animators. At the same time, the designers work alongside them to ensure cameras are locked and lighting passes progress to show the scene in the best way possible. The LookIK markup is also done at this stage:

LookIK refers to how an NPC, like Captain White, can look at the player and follow them. This can be at 100% strength, meaning he leans his upper body while his neck, head, and eyes follow the player within a 120° field-of-view. However, each of these elements can be tweaked to a smaller percentage. For performance-captured animation, the team don’t want Captain White to stare down the player at all times, as it would override most of what the actor did on set.

For Implementation, the first pass of LookIK is done, meaning the team turn it on and off at appropriate parts of a scene. At Production, they revisit the markup and spend time evaluating the original on-set reference camera data, analyzing the actor’s performance down to the small details. For example: Why did he look away? Was it because he thought about his answer or to give more emphasis? How much of the look utilized his neck and shoulders? The animators then dial in the markup percentages for body, head, and eyes, and carefully animate each of the LookIK parameters via a curve editor in Maya. Then, they review the scene to evaluate how successfully the markup retains the actor’s intent.
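
For illustration only, the per-channel strengths and curve-driven markup described above could be modeled roughly as below; the structures and the simple linear curve are assumptions, not the engine’s LookIK implementation:

```cpp
#include <algorithm>
#include <iterator>
#include <map>

// Hypothetical per-channel LookIK strengths (0..1), mirroring the idea of
// dialing body, head, and eyes in separately. Names are illustrative only.
struct LookIKWeights {
    float body = 0.f;          // upper-body lean toward the target
    float head = 0.f;          // neck/head aim
    float eyes = 0.f;          // eye aim, usually kept the liveliest
    float fovDegrees = 120.f;  // outside this cone the look-at fades out
};

// A tiny stand-in for an animation curve: keyframes at scene times, linearly
// interpolated (in production these values would come from the Maya curves).
class ScalarCurve {
public:
    void AddKey(float time, float value) { m_keys[time] = value; }

    float Evaluate(float time) const {
        if (m_keys.empty()) return 0.f;
        auto hi = m_keys.lower_bound(time);
        if (hi == m_keys.begin()) return hi->second;
        if (hi == m_keys.end())   return std::prev(hi)->second;
        auto lo = std::prev(hi);
        const float t = (time - lo->first) / (hi->first - lo->first);
        return lo->second + t * (hi->second - lo->second);
    }

private:
    std::map<float, float> m_keys;
};

// Sample the authored curves at the current scene time to drive the IK solver.
LookIKWeights SampleLookIK(const ScalarCurve& body, const ScalarCurve& head,
                           const ScalarCurve& eyes, float sceneTime) {
    LookIKWeights w;
    w.body = std::clamp(body.Evaluate(sceneTime), 0.f, 1.f);
    w.head = std::clamp(head.Evaluate(sceneTime), 0.f, 1.f);
    w.eyes = std::clamp(eyes.Evaluate(sceneTime), 0.f, 1.f);
    return w;
}
```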

For scenes where the player is ‘locked in’ watching a briefing, a certain percent is still enabled to accommodate the slightly smaller female player model.

What we’ve found during these reviews is that we need to be careful with the strength of the eyes in LookIK. In real life, people talking to each other are not completely locked together, so keeping the eyes alive is a small but rather important detail to get right. It helps to keep as much of the performance alive as we can while pushing what was once a linear mo-cap performance into our interactive realm.
- Cinematics Team


Engineering


Engineering’s April involved working on AFT, adding rotation resistance to the driven articulated entity tracking joint (exposing tuning parameters), making body-dragging-driven ragdoll behavior less framerate-dependent, and adding rotation damping for the dragged character (with exposed linear and angular damping factors). They also fixed animation pops, added force reactions, and made the head and neck joints of drag-driven ragdolls ignore actor entity collisions.
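
A common way to make damping framerate-independent is to decay velocities exponentially over real time rather than by a fixed factor per frame; the following is a generic illustration of that idea (illustrative values, not the engine’s code):

```cpp
#include <cmath>

// Framerate-independent damping: instead of v *= 0.98f per frame (which damps
// faster at higher frame rates), decay exponentially over elapsed real time.
// linearDamping / angularDamping stand in for the exposed tuning factors.
struct DragDamping {
    float linearDamping  = 2.0f; // 1/s, illustrative value
    float angularDamping = 4.0f; // 1/s, illustrative value

    void Apply(float& linearSpeed, float& angularSpeed, float dt) const {
        linearSpeed  *= std::exp(-linearDamping  * dt);
        angularSpeed *= std::exp(-angularDamping * dt);
    }
};

// At 30 fps (dt = 1/30) and 120 fps (dt = 1/120) the same second of dragging
// removes the same fraction of velocity, so the ragdoll behaves identically.
```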

Physics-wise, they made improvements to the load and spawn time for entities, reworked the processing of dead-marked entities, and optimized the CheckAreas function (which is now three-to-five times faster in new grid-OC-split regions). Basic ricochet behavior was also added and made deterministic. They also looked into character physicalization issues for the Cinematics Team and integrated physics into the main development stream too.
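
Deterministic ricochets boil down to computing the bounce purely from the impact inputs; a generic sketch of that reflection math (an assumption about the approach, not the actual implementation) is:

```cpp
#include <cmath>
#include <optional>

struct Vec3 { float x, y, z; };

static float Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3  Scale(const Vec3& v, float s)     { return { v.x * s, v.y * s, v.z * s }; }

// Basic ricochet: reflect the incoming direction about the surface normal,
// but only for sufficiently grazing impacts. Everything is computed from the
// inputs alone, so the result is reproducible across machines and clients.
std::optional<Vec3> ComputeRicochet(const Vec3& incomingDir, const Vec3& surfaceNormal,
                                    float maxIncidenceCosine /* e.g. 0.35f */) {
    const float facing = -Dot(incomingDir, surfaceNormal); // 1 = head-on, 0 = grazing
    if (facing > maxIncidenceCosine)
        return std::nullopt; // too head-on: penetrate or stop instead of ricocheting

    // r = d - 2 (d . n) n
    return Sub(incomingDir, Scale(surfaceNormal, 2.f * Dot(incomingDir, surfaceNormal)));
}
```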

Engineering wrapped up the mesh refactor for the Gen12 renderer, moved shader item creation away from the render thread, relocated shader objects to resource containers (to improve material streaming during OCS), and made headway into permanently enabling certain Gen12 effects. They added asynchronous shader compilation support to the Gen12 code and refactored the buffer management for render meshes. This means they can now create buffers on any thread instead of deferring creation to the render thread and use a VB stream wrapper object to maintain ref counts for writes and reads on render mesh streams; numerous API cleanups were also made. They further cleaned up the loading and refreshing of the shader system, ported texture sampler state code, and started porting brush rendering to the new pipeline.

Regarding the shader system, they fixed an issue in cache generation that caused certain combinations of hull, domain, geometry, and compute shaders to not pre-cache. For hair, they added a new slider for creating a ‘salt and pepper’ look and improved scattering for blonde and bright hair.

Engineering bundled the processing of adaptive resolution samples so that lowering adaptive resolution will show GPU perf scaling as expected. They changed the way active sample indices are built to maintain the desired serialization order per sample block. Each sample block can now be processed as a bundle via the thread group to better utilize cache effects. They also looked into deep learning-based de-noising and up-sampling, and worked to implement a guided filter to de-noise and up-sample lower resolution raymarched inscatter and transmittance results. For the ongoing ocean work, the team upgraded buoyancy physics to make it functional with the new changes, fixed motion vectors on deferred oceans, and added sub-surface scattering to improve the appearance of crest waves.

General systems work involved enabling advanced CPU features for the Linux DGS (in addition to the already-enabled Haswell-based code gen in Clang). Batch worker wakeup was improved, and optimizations were made to the aggregate broad phase, such as replacing the sphere tree with an optimized radix tree and optimizing the state update.

Finally for Engineering, the entity component update scheduler was developed further. To this end, they added a new component update policy based on entity rendering bounds with an optional view ratio and added support for view ratio in component updates. They also fixed a crash in a video driver due to incorrect use of the texture upload path taken for UI/Flash assets. Various frame times were added to important threads (MT, RT, network, physics), with one of the goals being to derive a VSync agnostic frame time. The exception handler and crash dump processing tools now support new cig-trace events and have unique folders to dump debug information into, which streamlines runtime error processing and analysis. The developers can now also mute FPEs until a given date.
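
As an illustration of a rendering-bounds/view-ratio update policy (hypothetical names and thresholds, not the actual scheduler code):

```cpp
#include <algorithm>

// Hypothetical view-ratio policy: skip component updates for entities whose
// rendering bounds appear tiny from the camera's point of view.
struct RenderBounds {
    float radius; // bounding-sphere radius in metres
};

struct ViewRatioPolicy {
    float minViewRatio = 0.01f; // below this apparent size, skip the update
};

// Apparent size of the bounds relative to distance from the camera.
float ComputeViewRatio(const RenderBounds& bounds, float distanceToCamera) {
    return bounds.radius / std::max(distanceToCamera, 1e-3f);
}

bool ShouldUpdateComponent(const RenderBounds& bounds, float distanceToCamera,
                           const ViewRatioPolicy& policy) {
    return ComputeViewRatio(bounds, distanceToCamera) >= policy.minViewRatio;
}

// A 2 m-radius prop 400 m away has a view ratio of 0.005 and would be skipped,
// while the same prop at 50 m (ratio 0.04) would still be updated every frame.
```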


Gameplay Story


The Gameplay Story Team continued to support Design as they work through various scenes. They also made significant improvements to their library of generic interrupt animations and now have 14 male and female animations that work well from standing poses. The gameplay story scenes on and around the bridge seats of the Idris underwent iteration, and the team is now beginning to support the Social AI Team’s work in this location. Lastly, they moved a few scenes for chapter eight through the pipeline. These scenes started life as individual lines of dialogue delivered by a single standing character, though new mo-cap (recorded before the lockdown) meant they could be turned into two-person scenes related to the environment.


Graphics


The Graphics Team continued their crucial work on the Gen12/Vulkan renderer, with focus on getting the first version of a complete graphics pipeline up and running. This is the first of several major milestones in the renderer’s development and will allow the team to deprecate much of the old post-effect code. The organic shader used for vegetation and rocks is currently receiving a long-planned upgrade, which adds many new features, integrates it into Planet Tech v4, and allows for much better integration of assets into planetary terrain.


Level Design


Social Design pushed on with their complex scene work, working closely with Animation to make sure all eventualities involving the player breaking, re-joining, or abandoning scenes are covered.

Level Design worked closely with Art, fleshing out some of the alternate routes that players can take through the levels that allow for more exploration or stealth-type gameplay. They further tuned the pre-combat and patrol work delivered by AI Features.

The Space/Dogfight Team continued ‘space scaping’ various locations in the game, crafting them into believable places with function and identity. They also implemented work from the Vehicle Feature Team on the updated scanning mechanics, making sure they function correctly within SQ42.


QA


Throughout April, QA was tasked with recording client scene captures of individual chapters and investigating any issues hindering the cinematics workflow. To further help the Cinematics Team, they expanded the tools used, such as the subsumption visualizer, to better debug any issues found.


Tech Animation


Tech Animation continued building rigs that will give the animators the ability to sync body and ship animations. They worked with the wider Animation Team to rig up seats for SQ42’s capital ships and built a list of loadouts for the animators to use in Maya, making it easier to minimize clipping. They worked with Social AI on new usables for the hangar behavior and developed a tool to allow the animators to export their state machines straight from Visio to Mannequin, which will significantly speed up the implementation process. They also started work on their own tool to make implementing state machines into Mannequin even faster. They helped rig props for the animators to use in Maya and supported the Animation Team with smaller bug fixes, additions to existing tools, and some smaller new tools.


Tech Art


In April, Tech Art continued work on the RuntimeRig system, further enhancing it by adding functionality to allow the efficient sharing of RigComponents between several instances (typically characters). Certain RigComponent types (such as the BlendShapeComponents or the WrinkleMaskComponents) contain static data that never changes at runtime and is usually identical for all rigs of the same type and tier, such as all human T0-T3 head rigs. In early versions, this data was duplicated for simplicity. However, this isn’t ideal, both maintenance and memory-wise. In the old RigLogic system, it had been fully hard-coded via static arrays in shared C++ header files, which led to a more optimal memory footprint but completely removed flexibility on the tech-art side.

The latest iteration permits resource sharing wherever possible and gives the artists full technical freedom to edit the data and quickly iterate on the deformation and shading-related rig outputs. As with all core systems designed to replace legacy versions, rigorous testing, tech-hardening, and optimization are needed before it can be rolled out across the board. As a first step, they’ll test it on the Maya-side only, using the new Vanduul face rigs. Eventually, it’ll be applied to all the 130+ human head rigs in the DCC and, later, the engine too.
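
The sharing described above can be illustrated with a small registry that hands out references to a single immutable data block per rig type and tier; the names here are assumptions, not the RuntimeRig API:

```cpp
#include <map>
#include <memory>
#include <string>
#include <vector>

// Hypothetical static data for a rig component: identical for every rig of
// the same type and tier, so it only needs to exist once in memory.
struct BlendShapeStaticData {
    std::vector<float> deltaTable; // large, never modified at runtime
};

// Registry keyed by something like "human_head_T0"; each rig instance holds a
// shared_ptr to the same immutable block instead of its own duplicated copy.
class RigStaticDataRegistry {
public:
    std::shared_ptr<const BlendShapeStaticData> Acquire(const std::string& rigTypeAndTier) {
        auto& weak = m_cache[rigTypeAndTier];
        if (auto existing = weak.lock())
            return existing; // reuse the already-loaded block
        auto loaded = std::make_shared<const BlendShapeStaticData>(Load(rigTypeAndTier));
        weak = loaded;
        return loaded;
    }

private:
    static BlendShapeStaticData Load(const std::string& /*rigTypeAndTier*/) {
        return {}; // placeholder: in practice this would come from exported rig data
    }

    std::map<std::string, std::weak_ptr<const BlendShapeStaticData>> m_cache;
};

struct RuntimeRigInstance {
    std::shared_ptr<const BlendShapeStaticData> blendShapes;    // shared, not duplicated
    std::vector<float>                          perInstanceState; // mutable per-character data
};
```

Because the shared block is loaded from data rather than hard-coded arrays, artists keep the flexibility the old RigLogic approach lacked while still avoiding per-instance duplication.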


User Interface (UI)


Throughout the first quarter of 2020, the core UI Team progressed with adding 3D support to the Building Blocks UI tool. April’s work will make it easier for designers to add 3D vehicle or character icons to a screen without relying on a programmer. They also finished the first iteration of a new UI kit that makes it easier to set up pages by using ready-made components, which can be easily restyled for different manufacturers.

Currently, many of the UI Team are working closely with other teams: Alongside the Vehicle Team, they worked on improvements to the targeting, missile lock, and landing UI. For the Actor Team, UI supported the creation of the new inventory system. The UI artists also worked on screens and branding for some upcoming environments.


VFX


Last month, VFX made great progress towards finalizing the GPU particle system feature list. This involved developing two new features: alpha erosion and emitter rotations. Alpha erosion allows them to eat away at the edges of a texture to give the appearance of it dissolving instead of simply fading out. Emitter rotations allow the team to rotate the emitter over time using curves, which is useful for making spirals and sweeping side-to-side motions.
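
Alpha erosion is typically achieved by raising a threshold over the particle’s lifetime and cutting away texels whose erosion-map value falls below it; a generic CPU-side reference of that math (an assumption about the approach, not the actual shader) is:

```cpp
#include <algorithm>

// CPU-side reference of a typical alpha-erosion formula (the real effect runs
// per-texel on the GPU). 'erosionMapValue' is a per-texel noise/erosion value
// in [0,1]; 'erosion' ramps from 0 to 1 over the particle's lifetime.
float ErodedAlpha(float baseAlpha, float erosionMapValue, float erosion, float softness) {
    // Texels with low erosion-map values disappear first; 'softness' controls
    // how hard the eroding edge is (0 = hard cut-out, larger = feathered edge).
    const float edge = (erosionMapValue - erosion) / std::max(softness, 1e-4f);
    return baseAlpha * std::clamp(edge, 0.0f, 1.0f);
}

// At erosion = 0 the texture is intact; as erosion approaches 1 only the
// highest-valued texels remain, so the sprite appears to dissolve away
// rather than uniformly fade out.
```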

Conclusion

WE’LL SEE YOU NEXT MONTH



End Transmission
