So, looking at the UE5 demo, some thoughts...
The new GI system doesn't use lightmaps and is fully realtime... if you wait long enough. Looking again at the demo, it's obvious that the GI takes a few frames to adapt.
It's not eye adaptation, because that effect would go the opposite way: less light = the eyes open wider to gather more light, so the image would go from dark to bright.
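To illustrate, here is a toy sketch of a typical auto-exposure loop (names and rates are made up, this is not the demo's actual code): when the scene darkens, the exposure rises over a few frames, so the image brightens, the opposite of the dark-to-bright GI settle seen in the demo.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Auto-exposure: when the scene gets darker, exposure *rises* over a few
// frames, so the image goes from dark to bright.
float AdaptExposure(float currentExposure, float avgSceneLuminance,
                    float deltaTime, float adaptationSpeed = 2.0f)
{
    const float targetLuminance = 0.18f; // middle grey
    float targetExposure = targetLuminance / std::max(avgSceneLuminance, 1e-4f);
    // Exponential blend toward the target: takes a few frames to converge.
    float t = 1.0f - std::exp(-adaptationSpeed * deltaTime);
    return currentExposure + (targetExposure - currentExposure) * t;
}

int main()
{
    float exposure = 1.0f;
    // The lights go out: average luminance drops, exposure climbs frame by frame.
    for (int frame = 0; frame < 5; ++frame) {
        exposure = AdaptExposure(exposure, 0.02f, 1.0f / 60.0f);
        std::printf("frame %d: exposure = %.3f\n", frame, exposure);
    }
}
```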
The water simulation they presented is odd. Wave dispersion seems okay, but the wave height seems very exaggerated, or the reflections are weird. Also, no particles for splashes. It feels a bit cheap and old.
The high polygon density they showcase likely relies on the recent technology Nvidia showed named "mesh shaders", which allows streaming many more LODs of geometry. The PS5 was announced with a similar technology.
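Conceptually, these systems pick a geometry cluster LOD per frame based on its projected screen-space error. A minimal sketch of that idea (the error metric and all names are hypothetical, not Epic's or Nvidia's implementation):

```cpp
#include <cmath>
#include <cstdio>

// Pick the coarsest LOD whose geometric error, projected to the screen,
// stays under one pixel. Finer clusters are only streamed in when needed.
struct ClusterLod {
    float geometricError; // world-space error of this simplification level
};

int SelectLod(const ClusterLod* lods, int lodCount,
              float distanceToCamera, float screenHeightPx, float fovY)
{
    // World-space size covered by one pixel at this distance.
    float pixelWorldSize =
        2.0f * distanceToCamera * std::tan(fovY * 0.5f) / screenHeightPx;
    for (int i = lodCount - 1; i > 0; --i) { // coarsest first
        if (lods[i].geometricError <= pixelWorldSize)
            return i;
    }
    return 0; // full-detail cluster
}

int main()
{
    ClusterLod lods[] = {{0.0f}, {0.01f}, {0.05f}, {0.25f}};
    std::printf("LOD near: %d, LOD far: %d\n",
                SelectLod(lods, 4, 2.0f, 1080.0f, 1.0f),
                SelectLod(lods, 4, 200.0f, 1080.0f, 1.0f));
}
```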
"You will be able to use 8K textures thanks to Virtual Texturing !"
Great, but I seriously doubt game content will be seen at that level. 8K is crazy, and mip-maps will stream the details away very quickly.
Even 4K textures today are very rarely seen at 100% of their texel ratio.
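A quick back-of-the-envelope to see why (illustrative numbers only): the hardware picks a mip level from the screen-space texel density, so an 8K texture only shows its top mip when the surface covers roughly 8192 pixels across.

```cpp
#include <cmath>
#include <cstdio>

// Approximate mip level selection: log2 of texels per pixel.
// An 8K (8192px) texture on a surface covering ~1000px of the frame
// already gets pushed ~3 mips down, i.e. it's sampled like a 1K texture.
float MipLevel(float textureSize, float screenCoveragePx)
{
    return std::max(0.0f, std::log2(textureSize / screenCoveragePx));
}

int main()
{
    std::printf("8K texture over 1000 px -> mip %.1f\n", MipLevel(8192.0f, 1000.0f));
    std::printf("4K texture over 1000 px -> mip %.1f\n", MipLevel(4096.0f, 1000.0f));
}
```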
This will be more useful for animation/VFX productions than for games, since those render the final image at very high resolutions. Don't forget the time it takes to produce this kind of content as well (unless you have tools generating details procedurally ;] ).
Happy to see new animation features for contextual situations, but also the foot warping and predictive foot placement. This will make interaction with the environment much easier and will help ground the characters.
Of course, this could already be done in UE4, but having access to such tools natively will make them easier to integrate. Animation blending today is still a difficult topic imho. I wonder if Epic is also looking into systems such as motion matching.
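For reference, motion matching boils down to a nearest-neighbor search over a database of animation pose/trajectory features. A toy sketch (the feature layout is made up, this is not any engine's API):

```cpp
#include <array>
#include <cfloat>
#include <cstdio>
#include <vector>

// Each database entry describes a frame of mocap: a few pose features
// (e.g. foot positions, hip velocity) plus where it sits in its clip.
struct PoseFeatures {
    std::array<float, 6> values; // toy feature vector
    int clipIndex;
    float clipTime;
};

// Motion matching: find the mocap frame whose features best match the
// current character state + desired trajectory, then blend toward it.
const PoseFeatures& FindBestMatch(const std::vector<PoseFeatures>& db,
                                  const std::array<float, 6>& query)
{
    float bestCost = FLT_MAX;
    size_t best = 0;
    for (size_t i = 0; i < db.size(); ++i) {
        float cost = 0.0f;
        for (size_t f = 0; f < query.size(); ++f) {
            float d = db[i].values[f] - query[f];
            cost += d * d; // squared feature distance
        }
        if (cost < bestCost) { bestCost = cost; best = i; }
    }
    return db[best];
}

int main()
{
    std::vector<PoseFeatures> db = {
        {{0, 0, 0, 0, 0, 0}, 0, 0.0f},   // idle
        {{1, 0, 1, 0, 2, 0}, 1, 0.25f},  // walk
        {{2, 0, 2, 0, 4, 0}, 2, 0.10f},  // run
    };
    const PoseFeatures& match = FindBestMatch(db, {0.9f, 0, 1.1f, 0, 2.2f, 0});
    std::printf("best match: clip %d at t=%.2f\n", match.clipIndex, match.clipTime);
}
```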
I really like the fact that the bugs are handled via Niagara. The new particle system proves to be really versatile. Seeing examples like this makes me wonder how many things we will be able to simulate. :)
This statue comes "directly" from ZBrush, with 33 million triangles. I guess it still needs UVs, because you can't store all the material properties in vertex colors only.
Maybe they also use material layering for this asset? It could make the texturing process easier. With such high-poly fidelity the texture work can be less intensive, so you could just blend generic materials via masks.
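A minimal sketch of that idea: a few generic tiling materials blended with low-resolution masks, instead of a unique hand-painted texture set for the whole asset (the struct layout and values are hypothetical):

```cpp
#include <cstdio>

// "Material layering": generic materials (stone, moss, dust) combined
// with masks, rather than unique textures for a 33M-triangle asset.
struct MaterialSample {
    float baseColor[3];
    float roughness;
};

MaterialSample Lerp(const MaterialSample& a, const MaterialSample& b, float t)
{
    MaterialSample r;
    for (int i = 0; i < 3; ++i)
        r.baseColor[i] = a.baseColor[i] + (b.baseColor[i] - a.baseColor[i]) * t;
    r.roughness = a.roughness + (b.roughness - a.roughness) * t;
    return r;
}

int main()
{
    MaterialSample stone = {{0.4f, 0.4f, 0.4f}, 0.8f};
    MaterialSample moss  = {{0.1f, 0.3f, 0.1f}, 0.9f};
    MaterialSample dust  = {{0.6f, 0.5f, 0.4f}, 1.0f};

    float mossMask = 0.3f, dustMask = 0.5f; // sampled from mask textures in practice
    MaterialSample result = Lerp(Lerp(stone, moss, mossMask), dust, dustMask);
    std::printf("base color: %.2f %.2f %.2f, roughness: %.2f\n",
                result.baseColor[0], result.baseColor[1],
                result.baseColor[2], result.roughness);
}
```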
Tools will need to evolve to make texturing such big assets easier: better handling of big textures, faster/better UV unwrapping tools. Maybe automatic UVs will be enough, because we will be able to compensate with bigger textures. That's a big waste, however.
With such high polycounts, the main bottleneck will now be the production time allowed for an asset rather than the performance of the asset itself. At least this is what seems to be promised.
It won't be possible to scale this level of detail (similar to VFX productions) to a full game without some kind of automation at some point. In VFX you usually focus on a single scene, with specific points of view. Games have free cameras.
This part of the demo is very impressive: it showcases how fast the renderer can stream highly detailed geometry, while running a physics simulation (baked via Alembic, or realtime?) at the same time.
No raytracing in the demo, however. The GI system doesn't use it, shadow borders seem harsh and don't widen based on distance, and there are no shiny/mirror reflections.
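The shadow clue is about penumbras: with an area light (which raytraced shadows model well), the penumbra should widen with the blocker-to-receiver distance. A sketch of the classic estimate behind PCSS-style soft shadows (variable names are mine):

```cpp
#include <cstdio>

// Classic penumbra estimate for an area light: the further the receiver
// is from its blocker, the wider the penumbra. Hard, constant-width
// shadow borders suggest no such model is in use.
float PenumbraWidth(float lightSize, float blockerDistance, float receiverDistance)
{
    return lightSize * (receiverDistance - blockerDistance) / blockerDistance;
}

int main()
{
    // Same blocker, receivers at increasing distance: the border widens.
    for (float receiver = 2.0f; receiver <= 8.0f; receiver += 2.0f)
        std::printf("receiver at %.0fm -> penumbra %.2fm\n",
                    receiver, PenumbraWidth(0.5f, 1.0f, receiver));
}
```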
Outside of the technical aspects (which promise great stuff that I'm really looking forward to), the demo also shows a really nice art direction.
They make great use of indirect lighting to light the different areas.
That's it! :)
Overall a great demo; I love seeing this kind of stuff because it is always inspiring. I recommend watching the demo on Vimeo to get the best quality: https://vimeo.com/417882964
Indeed, looking back at it, you can see only the GI changing during the first light change if you keep your eyes on the end of the tunnel. This is great: it means the dynamic GI is not limited to the sunlight. :) https://twitter.com/EyeOfScar/status/1260619509552967680
Not RTX, since there is no raytracing, but it could be some kind of voxel-based information (SVOGI maybe?). The multi-frame interpolation nature means they update the info in one place and then propagate it. A dense light probe grid is another way. https://twitter.com/EyeOfScar/status/1260615799921770496
Maybe something in-between. :)
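Either way, the few-frames-to-settle behaviour fits a scheme where lighting is injected into a sparse structure (voxels or probes) and relaxed over time. A toy sketch of that temporal propagation (entirely speculative, not Lumen's actual algorithm):

```cpp
#include <cstdio>

// Toy 1D "probe line": light is injected at one end and diffused to the
// neighbours a little each frame, so a lighting change takes a few frames
// to reach the far end -- like the settle visible in the demo.
int main()
{
    const int N = 5;
    float probes[N] = {0};
    for (int frame = 0; frame < 6; ++frame) {
        probes[0] = 1.0f; // light injected at the tunnel entrance
        float next[N];
        for (int i = 0; i < N; ++i) {
            float left  = probes[i > 0 ? i - 1 : i];
            float right = probes[i < N - 1 ? i + 1 : i];
            next[i] = 0.5f * probes[i] + 0.25f * (left + right); // diffusion step
        }
        std::printf("frame %d:", frame);
        for (int i = 0; i < N; ++i) {
            probes[i] = next[i];
            std::printf(" %.2f", probes[i]);
        }
        std::printf("\n");
    }
}
```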
Some games will continue to use high-to-low-poly workflows, like stylized games, the main reason being the scaling from one platform to another. Automated tools will never beat handcrafted assets on that point. https://twitter.com/Ostap_Blender/status/1260621760749359105
If this mesh shader system runs on the PS5, it means the AMD GPU handles it; therefore the Xbox and AMD GPUs on PC will support it too at some point. https://twitter.com/SanbornVR/status/1260623736966955013
Indeed!
It also means the difference in look between a cinematic and the actual gameplay will be even smaller, thanks to the shared assets and quality level. https://twitter.com/newincpp/status/1260624144930144257
Oh, good point! https://twitter.com/lobachevscki/status/1260634720108449793
I wonder if tri-planar UVs could be enough in this case, given the nature of the materials/surfaces (noises/dust). This way you could paint on the asset directly in-engine. https://twitter.com/lobachevscki/status/1260634919946108929
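For context, tri-planar mapping samples a tiling texture along the three world axes and blends the results by the surface normal, so no UV unwrap is needed. A minimal sketch (the sample callback stands in for a real texture fetch):

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Tri-planar mapping: project a tiling texture along the X, Y and Z axes
// and blend the three samples by the surface normal. No UV unwrap needed.
float TriplanarSample(float (*sample)(float u, float v),
                      Vec3 worldPos, Vec3 normal)
{
    // Blend weights from the absolute normal, normalized to sum to 1.
    float wx = std::fabs(normal.x), wy = std::fabs(normal.y), wz = std::fabs(normal.z);
    float sum = wx + wy + wz;
    wx /= sum; wy /= sum; wz /= sum;
    return wx * sample(worldPos.y, worldPos.z)   // X-axis projection
         + wy * sample(worldPos.x, worldPos.z)   // Y-axis projection
         + wz * sample(worldPos.x, worldPos.y);  // Z-axis projection
}

// Stand-in for a texture fetch: a simple procedural checker pattern.
float Checker(float u, float v)
{
    return (int(std::floor(u) + std::floor(v)) % 2 + 2) % 2 ? 1.0f : 0.0f;
}

int main()
{
    Vec3 pos = {1.5f, 0.25f, 3.75f};
    Vec3 normal = {0.0f, 1.0f, 0.0f}; // mostly upward-facing surface
    std::printf("sampled value: %.2f\n", TriplanarSample(Checker, pos, normal));
}
```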