Implementing Nanite in Unity



[USparkle Column] If you have deep technical skills, love "doing a bit of research", and enjoy both sharing and learning from others, we look forward to having you join us — let sparks of insight collide and knowledge keep flowing!

This is UWA's (侑虎科技) article No. 1939. Thanks to the author 傻頭傻腦亞古獸 for the contribution. Feel free to share; please do not repost without the author's permission. If you have any unique insights or findings, you are welcome to contact us and discuss (QQ group: 793972859).

Author's page:

      https://www.zhihu.com/people/tian-cai-ya-gu-shou

I. Preface

1. Introduction

Nanite is UE5's virtualized geometry system, whose main purpose is rendering very high-polygon models efficiently. Nanite automatically generates a LOD structure for each model; unlike traditional LOD, Nanite's LOD is no longer per-model but fine-grained down to local regions of a mesh, so artists no longer need to author or manage LODs. On top of that, it enjoys the benefits of GPU-driven rendering: efficient culling and a single draw call.


2. Key Techniques

Nanite combines several techniques to achieve efficient rendering:

1. Cluster Rendering: triangles are organized into clusters, which enables more efficient culling.

2. Auto LOD: graph partitioning is used to split and simplify the mesh into LOD levels, and the data is organized into a BVH so LOD selection can run efficiently in parallel at runtime; LOD transitions built this way are extremely smooth.

3. GPU Driven Pipeline: drawing is driven by the GPU, cutting CPU overhead.

4. Occlusion Culling: finer-grained occlusion culling that rejects invisible triangles.

5. Hardware/Software Rasterization: tiny triangles are very unfriendly to hardware rasterizers, so such triangles are rasterized in software with a compute shader for better efficiency.

6. Visibility Buffer: a visibility buffer reduces overdraw, further improving GPU efficiency.

7. Streaming: only the data currently in view is loaded, reducing the memory pressure of geometry.

3. Results in This Article

The Nanite system is very large and has a great many engineering details to handle, so this article simplifies and skips over some things and implements only the core parts; it therefore differs somewhat from the UE5 version.

The images below show the results of this implementation. Each colored block is a triangle, and you can see that both LOD switching and camera culling are very smooth.


Colored blocks represent triangles


Colored blocks represent clusters

II. Implementation

1. Clusterize

The first step is done offline: split a complex, ultra-high-resolution mesh efficiently and sensibly into smaller, more manageable clusters, each holding at most 128 triangles. This split is not a naive cut; it aims to minimize the number of edges connecting different clusters (the cut size) while keeping cluster sizes roughly balanced.
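To make the idea concrete, here is a toy, language-neutral sketch of clustering: grow clusters over the shared-edge adjacency graph with a capped BFS. This is only a stand-in for the real graph partitioning done by METIS/meshoptimizer, which also balances cluster sizes and minimizes the edge cut globally; the function and its parameters here are hypothetical.

```python
from collections import defaultdict, deque

def clusterize(triangles, max_tris=128):
    """Greedy capped BFS over the shared-edge adjacency graph (a toy
    stand-in for METIS/meshoptimizer graph partitioning)."""
    # map each undirected edge to the triangles that use it
    edge_tris = defaultdict(list)
    for t, (a, b, c) in enumerate(triangles):
        for e in ((a, b), (b, c), (c, a)):
            edge_tris[tuple(sorted(e))].append(t)
    # triangle adjacency: triangles sharing an edge are neighbors
    adj = defaultdict(set)
    for tris in edge_tris.values():
        for t in tris:
            adj[t] |= set(tris) - {t}

    remaining = set(range(len(triangles)))
    clusters = []
    while remaining:
        seed = min(remaining)               # deterministic seed choice
        cluster = [seed]
        remaining.remove(seed)
        frontier = deque([seed])
        while frontier and len(cluster) < max_tris:
            t = frontier.popleft()
            for u in sorted(adj[t]):        # grow across shared edges
                if u in remaining and len(cluster) < max_tris:
                    remaining.remove(u)
                    cluster.append(u)
                    frontier.append(u)
        clusters.append(cluster)
    return clusters
```

Growing along shared edges keeps each cluster spatially compact, which is what makes per-cluster bounding spheres tight enough for culling.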



The partitioner UE uses is the METIS library:


      https://github.com/KarypisLab/METIS

The implementation can be found in the UE5 source:

      UnrealEngine-release\Engine\Source\Developer\NaniteBuilder\Private\NaniteBuilder.cpp

This article uses meshoptimizer to split the mesh into clusters and to partition them; the library also offers other features, such as overdraw and shadow-index optimization:


      https://github.com/zeux/meshoptimizer

We create a C++ project that exports a DLL and wrap a few key functions for Unity to call. The amount of code involved is actually small, so porting it to C# and using it directly would work too.

They are:

• meshopt_buildMeshlets (build clusters)

• meshopt_partitionClusters (partition clusters into groups)

• meshopt_buildMeshletsBound (worst-case cluster count)

• meshopt_computeSphereBounds (merge bounding spheres)


Referencing these functions from C#:



unsafe static List<Cluster> clusterize(Vector3[] vertices, int[] indices)
{
    const int max_vertices = 192; // TODO: depends on kClusterSize, also may want to dial down for mesh shaders
    const int max_triangles = kClusterSize; // 128
    const int min_triangles = (kClusterSize / 3) & ~3;
    const float split_factor = 2.0f;
    const float fill_weight = 0.75f;
    int max_meshlets = BuildMeshletsBound(indices.Length, max_vertices, max_triangles); // meshopt_buildMeshletsBound
    var meshlets = new Meshlet[max_meshlets * 2];
    var meshlet_vertices = new int[max_meshlets * max_vertices];
    var meshlet_triangles = new byte[max_meshlets * max_triangles * 3];
    var meshlet_count = BuildMeshletFlex(meshlets, meshlet_vertices, meshlet_triangles, indices, indices.Length,
        vertices, vertices.Length, sizeof(float) * 3, max_vertices, min_triangles, max_triangles, 0.0f,
        split_factor); // meshopt_buildMeshlets
    List<Cluster> clusters = new List<Cluster>(meshlet_count);
    for (int i = 0; i < meshlet_count; i++)
    {
        ref Meshlet meshlet = ref meshlets[i];
        fixed (int* ptr = &meshlet_vertices[meshlet.vertex_offset])
        fixed (byte* ptr2 = &meshlet_triangles[meshlet.triangle_offset])
        {
            OptimizeMeshlet(ptr, ptr2, (int)meshlet.triangle_count, (int)meshlet.vertex_count);
        }

        Cluster cluster = new Cluster();
        cluster.indices = new int[meshlet.triangle_count * 3];
        for (int j = 0; j < meshlet.triangle_count * 3; ++j)
            cluster.indices[j] =
                meshlet_vertices[meshlet.vertex_offset + meshlet_triangles[meshlet.triangle_offset + j]];

        cluster.parent.error = float.MaxValue;
        clusters.Add(cluster);
    }

    return clusters;
}

Calling meshopt_buildMeshlets then directly yields each cluster's indices.

2. Build DAG

With these clusters we can build the "LOD" levels; we just loop the following operation: group -> merge -> simplify -> clusterize. As shown below:

This process works much like mipmaps: merging and simplifying level by level upward, while recording an error value and bounds per level for runtime LOD selection. The merged nodes are called Cluster Groups. The final result is a DAG (Directed Acyclic Graph) structure.
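The group -> merge -> simplify -> clusterize loop can be sketched in a few lines. This is a toy, language-neutral version: the merge and simplify steps are stubs, and the point is only the error bookkeeping — every parent's error is at least as large as its children's, and all clusters in a group share one parent error.

```python
def build_dag(leaf_clusters, group_size=4, simplify_error=1.0):
    """Toy mip-style DAG build; real builds use METIS grouping and
    QEM/meshopt simplification instead of the stubs below."""
    clusters = [{'tris': list(t), 'self_err': 0.0,
                 'parent_err': float('inf'), 'mip': 0} for t in leaf_clusters]
    pending = list(range(len(clusters)))
    mip = 1
    while len(pending) > 1:
        next_pending = []
        for i in range(0, len(pending), group_size):
            group = pending[i:i + group_size]                        # group
            merged = [t for c in group for t in clusters[c]['tris']]  # merge
            simplified = merged[::2]         # "simplify": stub that halves the triangles
            # parent error must never be smaller than any child error (monotonicity)
            err = max(clusters[c]['self_err'] for c in group) + simplify_error
            for c in group:
                clusters[c]['parent_err'] = err  # all children share one parent error
            out_count = max(1, len(group) // 2)  # re-cluster the simplified mesh
            for k in range(out_count):
                clusters.append({'tris': simplified[k::out_count], 'self_err': err,
                                 'parent_err': float('inf'), 'mip': mip})
                next_pending.append(len(clusters) - 1)
        pending = next_pending
        mip += 1
    return clusters
```

Because the output cluster count roughly halves per level, the loop terminates with a single root whose parent error stays infinite — exactly the shape the runtime LOD test relies on.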

public struct ClusterGroup
{
    public List<int> Children;
    public Vector3 Bounds;
    public float radius;
    public Vector3 LODBounds;
    public float MinLODError;
    public float MaxParentLODError;
    public int MipLevel;
}

public class NaniteSubMesh
{
    public List<ClusterGroup> clusterGroupList;
    public List<Cluster> clusterList;
    public int maxMipLevel;
}

static NaniteSubMesh Nanite(Vector3[] vertices, Vector3[] normals, int[] indices)
{
    NaniteSubMesh res = new NaniteSubMesh();
    List<ClusterGroup> clusterGroupList = new List<ClusterGroup>();
    var clusters = clusterize(vertices, indices);
    res.clusterList = clusters;
    res.clusterGroupList = clusterGroupList;
    res.maxMipLevel = 0;
    for (int i = 0; i < clusters.Count; ++i)
    {
        var c = clusters[i];
        c.self = Bounds(vertices, clusters[i].indices, 0f);
        c.mip = 0;
        clusters[i] = c;
    }

    List<int> pending = new List<int>(clusters.Count);
    int[] remap = new int[vertices.Length];
    for (int i = 0; i < remap.Length; ++i)
        remap[i] = i;
    for (int i = 0; i < clusters.Count; ++i)
        pending.Add(i);

    int curMip = 1;
    byte[] locks = new byte[vertices.Length];
    while (pending.Count > 1)
    {
        List<List<int>> groups = partition(clusters, pending, remap, vertices);
        if (kUseLocks)
            lockBoundary(locks, groups, clusters, remap);
        pending.Clear();
        List<int> retry = new List<int>();
        int triangles = 0;
        int stuck_triangles = 0;
        for (int i = 0; i < groups.Count; ++i)
        {
            var curGroupClusters = groups[i];
            if (curGroupClusters.Count == 0)
            {
                continue; // metis shortcut
            }

            List<int> merged = new List<int>(vertices.Length);
            for (int j = 0; j < curGroupClusters.Count; ++j)
            {
                merged.AddRange(clusters[curGroupClusters[j]].indices);
            }
            LODBounds groupb = boundsMerge(clusters, curGroupClusters);
            ClusterGroup clusterGroup = new ClusterGroup();
            clusterGroup.Bounds = groupb.center;
            clusterGroup.MaxParentLODError = groupb.error;
            clusterGroup.radius = groupb.radius;
            clusterGroup.Children = new List<int>(merged.Count);
            clusterGroup.MipLevel = curMip - 1;
            for (int j = 0; j < curGroupClusters.Count; ++j)
            {
                clusterGroup.Children.Add(curGroupClusters[j]);
            }
            clusterGroupList.Add(clusterGroup);

            // aim to reduce group size in half
            int target_size = (merged.Count / 3) / 2 * 3;
            float error = 0f;
            var simplified = simplify(vertices, normals, merged.ToArray(), kUseLocks ? locks : null, target_size,
                ref error);
            if (simplified.Count > merged.Count * kSimplifyThreshold)
            {
                stuck_triangles += merged.Count / 3;
                for (int j = 0; j < curGroupClusters.Count; ++j)
                {
                    retry.Add(curGroupClusters[j]);
                }

                continue; // simplification is stuck; abandon the merge
            }

            // enforce bounds and error monotonicity
            // note: it is incorrect to use the precise bounds of the merged or simplified mesh, because this may violate monotonicity

            var split = clusterize(vertices, simplified.ToArray());
            groupb.error += error; // this may overestimate the error, but we are starting from the simplified mesh so this is a little more correct
            // update parent bounds and error for all clusters in the group
            // note that all clusters in the group need to switch simultaneously so they have the same bounds
            for (int j = 0; j < curGroupClusters.Count; ++j)
            {
                int clusterIndex = curGroupClusters[j];
                var t = clusters[clusterIndex];
                t.parent = groupb;
                clusters[clusterIndex] = t;
            }

            for (int j = 0; j < split.Count; ++j)
            {
                var sj = split[j];
                sj.self = groupb;
                sj.mip = curMip;
                split[j] = sj;
                clusters.Add(sj); // std::move
                pending.Add(clusters.Count - 1);
                triangles += sj.indices.Length / 3;
            }
        }

        curMip++;
    }

    if (pending.Count == 1)
    {
        var c = clusters[pending[0]];
        ClusterGroup clusterGroup = new ClusterGroup();
        clusterGroup.Bounds = c.self.center;
        clusterGroup.MaxParentLODError = c.self.error;
        clusterGroup.radius = c.self.radius;
        clusterGroup.Children = new List<int>(1);
        clusterGroup.MipLevel = curMip - 1;
        clusterGroup.Children.Add(pending[0]);
        clusterGroupList.Add(clusterGroup);
    }

    res.maxMipLevel = curMip - 1;
    return res;
}

static void lockBoundary(byte[] locks, List<List<int>> groups, List<Cluster> clusters, int[] remap)
{
    // for each remapped vertex, keep track of index of the group it's in (or -2 if it's in multiple groups)
    int[] groupmap = new int[locks.Length];
    for (int i = 0; i < groupmap.Length; ++i)
        groupmap[i] = -1;

    for (int i = 0; i < groups.Count; ++i)
    {
        var c = groups[i];
        for (int j = 0; j < c.Count; ++j)
        {
            var indices = clusters[c[j]].indices;
            for (int k = 0; k < indices.Length; ++k)
            {
                var v = indices[k];
                var r = remap[v];

                if (groupmap[r] == -1 || groupmap[r] == i)
                    groupmap[r] = i;
                else
                    groupmap[r] = -2;
            }
        }
    }

    // note: we need to consistently lock all vertices with the same position to avoid holes
    for (int i = 0; i < locks.Length; ++i)
    {
        var r = remap[i];
        locks[i] = (byte)((groupmap[r] == -2) ? 1 : 0);
    }
}

With that, we have a series of clusters for every mip level.


3. Acceleration Structure

Even divided into clusters, the count is still huge, and brute-force parallel evaluation in a compute shader is not efficient. Nanite therefore uses a BVH as the acceleration structure over the ClusterGroups, combined with persistent threads to traverse and filter it.

For the persistent-threads BVH traversal, see the UE5 source if interested: Shaders\Private\Nanite\NaniteClusterCulling.usf

UE5 also has a path that does not use persistent threads, and in fact that is generally the default.

UE5 source excerpt

Personally I find persistent threads a rather brute-force, heavyweight way to traverse such a BVH on the GPU, so I simplified it: multiple clusters are merged into one culling unit (a Part); Parts are culled in parallel first, then the clusters inside the surviving Parts are culled in parallel. This two-level structure serves as a simple substitute for persistent threads.

Multiple Parts are then organized into Pages for chunked loading. Material handling also differs: UE5 records a MaterialRange per cluster, while for simplicity this implementation builds independent clusters per SubMesh.
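A minimal sketch of the two-level idea (with a made-up data layout, not this project's actual structures): cull Part bounding spheres against the frustum planes first, and only test the clusters of Parts that survive.

```python
def sphere_outside_plane(sphere, plane):
    """sphere = (cx, cy, cz, r); plane = (nx, ny, nz, d) with the normal
    pointing into the visible half-space."""
    cx, cy, cz, r = sphere
    nx, ny, nz, d = plane
    return nx * cx + ny * cy + nz * cz + d < -r

def cull_two_level(parts, clusters, planes):
    """Two-level culling: a rejected Part skips all of its clusters,
    so most clusters are never touched (a simple stand-in for
    BVH traversal with persistent threads)."""
    visible = []
    for part in parts:
        if any(sphere_outside_plane(part['sphere'], p) for p in planes):
            continue  # whole Part culled
        for ci in range(part['start'], part['start'] + part['count']):
            if not any(sphere_outside_plane(clusters[ci]['sphere'], p) for p in planes):
                visible.append(ci)
    return visible
```

On the GPU the two loops become two dispatches: one thread per Part, then one thread per cluster of the surviving Parts.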

The code is as follows:

[Serializable]
public struct NaniteCluster
{
    public int indiceIndex;
    public int indiceCount;
    public float selfErrer;
    public float parentErrer;
    public Vector4 selfSphere;
    public Vector4 parentSphere;
    public int subMeshID;
    public int vertexOffset;
};

[Serializable]
public struct NaniteClusterGroup
{
    public int ClusterStart;
    public int ClusterCount;
    public Vector3 Bounds;
    public float radius;
    public Vector3 LODBounds;
    public float MinLODError;
    public float MaxParentLODError;
    public int MipLevel;
}

[Serializable]
public struct NaniteMeshPart
{
    public int ClusterStart;
    public int ClusterCount;
    public Vector4 selfSphere;
    public float MaxParentLODError;
}

public class NaniteSubMesh
{
    public List<ClusterGroup> clusterGroupList;
    public List<Cluster> clusterList;
    public int maxMipLevel;
}

public class BuildPart
{
    public List<int> clusterList;
    public int mip;
    public int subMesh;
}
public static void BuildNaniteMesh(Mesh mesh)
{
    var vertices = mesh.vertices;
    var normals = mesh.normals;
    var uvs = mesh.uv;

    int subMeshCount = mesh.subMeshCount;
    int totalClusterCount = 0;
    int totalIndexCount = 0;
    List<NaniteSubMesh> subMeshList = new List<NaniteSubMesh>();
    for (int i = 0; i < subMeshCount; i++)
    {
        var triangles = mesh.GetTriangles(i);
        var subMesh = Nanite(vertices, normals, triangles);
        subMeshList.Add(subMesh);
        totalClusterCount += subMesh.clusterList.Count;
    }

    List<BuildPart> buildPartsList = new List<BuildPart>(totalClusterCount);
    int MAX_PART_PERPAGE = 128;
    int MAX_CLUSTER_PERPART = 8;

    for (int subMeshIndex = 0; subMeshIndex < subMeshList.Count; subMeshIndex++)
    {
        var subMesh = subMeshList[subMeshIndex];
        List<Cluster> clusters = subMesh.clusterList;
        var groupsList = subMesh.clusterGroupList;
        BuildPart buildPart = null;
        for (int i = 0; i < groupsList.Count; i++)
        {
            var gIndex = i; // sortGroups[i].OldIndex;
            var g = groupsList[gIndex];
            var childs = g.Children;
            for (int c = 0; c < childs.Count; c++)
            {
                int cIndex = childs[c];
                int cMip = clusters[cIndex].mip;
                totalIndexCount += clusters[cIndex].indices.Length;
                // start a new Part when full or when the mip level changes
                if (buildPart == null || buildPart.clusterList.Count >= MAX_CLUSTER_PERPART ||
                    buildPart.mip != cMip)
                {
                    buildPart = new BuildPart();
                    buildPart.clusterList = new List<int>(MAX_CLUSTER_PERPART);
                    buildPart.mip = cMip;
                    buildPart.subMesh = subMeshIndex;
                    buildPartsList.Add(buildPart);
                }

                buildPart.clusterList.Add(cIndex);
            }
        }
    }

    int buildPartCount = buildPartsList.Count;
    NaniteMeshPage[] pageArray = new NaniteMeshPage[(buildPartCount + (MAX_PART_PERPAGE - 1)) / MAX_PART_PERPAGE]; // ceil
    List<int> tempIndiceList = new List<int>(totalIndexCount);
    List<int> mipLists = new List<int>(totalClusterCount);
    int partIndex = 0;
    for (int i = 0; i < pageArray.Length; i++)
    {
        // create new page
        var p = ScriptableObject.CreateInstance<NaniteMeshPage>();
        pageArray[i] = p;
        tempIndiceList.Clear();
        // parts remaining for this page, capped at the page size
        int partCount = Mathf.Min(MAX_PART_PERPAGE, buildPartCount - i * MAX_PART_PERPAGE);
        p.parts = new NaniteScene.NaniteMeshPart[partCount];
        List<NaniteScene.NaniteCluster> pageClusters = new List<NaniteScene.NaniteCluster>(partCount * MAX_CLUSTER_PERPART);
        for (int j = 0; j < partCount; j++)
        {
            var buildPart = buildPartsList[partIndex];
            var buildPartCluster = buildPart.clusterList;
            // create part
            var part = new NaniteScene.NaniteMeshPart();
            part.ClusterStart = pageClusters.Count; // local index
            part.ClusterCount = buildPartCluster.Count;
            int subMeshID = buildPart.subMesh;
            float maxParentErr = 0f;
            var clusters = subMeshList[subMeshID].clusterList;
            for (int c = 0; c < buildPartCluster.Count; c++)
            {
                var cluster = clusters[buildPartCluster[c]];
                mipLists.Add(cluster.mip);
                // create cluster
                NaniteScene.NaniteCluster naniteCluster = new NaniteScene.NaniteCluster();
                naniteCluster.indiceIndex = tempIndiceList.Count;
                naniteCluster.indiceCount = cluster.indices.Length;
                naniteCluster.parentErrer = cluster.parent.error;
                naniteCluster.parentSphere = new Vector4(cluster.parent.center.x, cluster.parent.center.y, cluster.parent.center.z, cluster.parent.radius);
                naniteCluster.selfErrer = cluster.self.error;
                naniteCluster.selfSphere = new Vector4(cluster.self.center.x, cluster.self.center.y, cluster.self.center.z, cluster.self.radius);
                naniteCluster.subMeshID = subMeshID;
                tempIndiceList.AddRange(cluster.indices);
                maxParentErr = Mathf.Max(naniteCluster.parentErrer, maxParentErr);
                pageClusters.Add(naniteCluster);
            }

            LODBounds partBounds = boundsMerge(clusters, buildPartCluster, true);
            part.selfSphere = new Vector4(partBounds.center.x, partBounds.center.y, partBounds.center.z, partBounds.radius);
            part.MaxParentLODError = maxParentErr;
            p.parts[j] = part;
            partIndex++;
        }
        p.clusterArray = pageClusters.ToArray();
        p.indiceArray = tempIndiceList.ToArray();
        p.clusterMip = mipLists.ToArray();
    }

    string fileName = AssetDatabase.GetAssetPath(mesh);
    string extension = Path.GetExtension(fileName);
    fileName = fileName.Replace(extension, "");
    // build pages
    int totalVerts = 0;
    for (int i = 0; i < pageArray.Length; i++)
    {
        var page = pageArray[i];
        var clusterArray = page.clusterArray;
        var indiceArray = page.indiceArray;
        Dictionary<int, int> indicesMap = new Dictionary<int, int>();
        List<Vector3> tempVerts = new List<Vector3>(vertices.Length);
        List<Vector3> tempNormals = new List<Vector3>(vertices.Length);
        List<Vector2> tempUVs = new List<Vector2>(vertices.Length);
        List<int> newIndices = new List<int>(totalIndexCount);
        for (int c = 0; c < clusterArray.Length; c++)
        {
            ref var cluster = ref clusterArray[c];
            var indexStart = cluster.indiceIndex;
            var indexEnd = indexStart + cluster.indiceCount;
            for (int index = indexStart; index < indexEnd; index++)
            {
                int vertIndex = indiceArray[index];
                int newIndex;
                if (!indicesMap.TryGetValue(vertIndex, out newIndex))
                {
                    newIndex = newIndices.Count;
                    indicesMap.Add(vertIndex, newIndex);
                    tempVerts.Add(vertices[vertIndex]);
                    tempNormals.Add(normals[vertIndex]);
                    if (uvs.Length == 0)
                    {
                        tempUVs.Add(Vector2.zero);
                    }
                    else
                    {
                        tempUVs.Add(uvs[vertIndex]);
                    }

                    newIndices.Add(newIndex);
                }

                indiceArray[index] = newIndex;
            }
        }

        page.vertexStride = 5; // pos3 + uv2
        page.vertexData = new float[tempVerts.Count * page.vertexStride];
        page.vertexCount = tempVerts.Count;
        for (int v = 0; v < tempVerts.Count; v++)
        {
            int vertexIndex = v * page.vertexStride;
            page.vertexData[vertexIndex + 0] = tempVerts[v].x;
            page.vertexData[vertexIndex + 1] = tempVerts[v].y;
            page.vertexData[vertexIndex + 2] = tempVerts[v].z;
            page.vertexData[vertexIndex + 3] = tempUVs[v].x;
            page.vertexData[vertexIndex + 4] = tempUVs[v].y;
        }
        totalVerts += tempVerts.Count;
        string newPath = fileName + "_p" + i + ".asset";
        AssetDatabase.CreateAsset(page, newPath);
    }
    AssetDatabase.Refresh();

    Debug.Log("mesh Vertx:" + vertices.Length + " mesh Nanite:" + totalVerts + " cluster:" + totalClusterCount + " part:" + buildPartCount + " page:" + pageArray.Length);
    NaniteMesh naniteMesh = ScriptableObject.CreateInstance<NaniteMesh>();
    {
        naniteMesh.subMeshCount = subMeshCount;
        naniteMesh.pageArray = new NaniteMeshPage[pageArray.Length];
        for (int i = 0; i < pageArray.Length; i++)
        {
            string newPath = fileName + "_p" + i + ".asset";
            naniteMesh.pageArray[i] = AssetDatabase.LoadAssetAtPath<NaniteMeshPage>(newPath);
        }
    }

    var meshBound = mesh.bounds;
    naniteMesh.boundingSphere = meshBound.center;
    naniteMesh.boundingSphere.w = meshBound.extents.magnitude;
    string meshExt = "_mesh.asset";
    AssetDatabase.CreateAsset(naniteMesh, fileName + meshExt);
    AssetDatabase.Refresh();
}

That basically wraps up the offline part; we now have a Nanite asset. Of course, UE5 does a lot more here, such as BVH construction, encoding and compression, page partitioning, and vertex attribute optimization; personally I consider these engineering details.


4. Runtime Resources

On to the runtime part: we need to load this Nanite mesh. For convenience, the assets are referenced directly on a script here, skipping the loading code.

The asset, object, and material information is gathered and uploaded into GPU buffers. The approach here is quite informal, again taking the lazy route; a compute shader could also be used to update Page data into the GPU buffers.

// NaniteRenderer: assumed component type holding naniteMesh/materials (the original generic argument was lost)
public static List<NaniteRenderer> renderers = new List<NaniteRenderer>();
private static SceneObject[] gpuObjects = new SceneObject[2048];
// cluster -> part -> page
public struct SceneObject
{
    public int naniteMeshID;
    public Matrix4x4 localToWorldMatrix;
    public int materialIDOffset;
}

public struct NaniteRes
{
    public Vector4 boundingSphere;
    public int partIndex;
    public int partCount;
}

unsafe static void UpdateRenderList()
{
    if (renderers.Count == 0)
        return;
    // object update
    if (renderers.Count > gpuObjects.Length)
    {
        gpuObjects = new SceneObject[Mathf.NextPowerOfTwo(renderers.Count)];
    }

    objectCount = 0;
    maxPartCount = 0;
    naniteMeshes.Clear();
    materialList.Clear();
    List<int> materialIndices = new List<int>();
    for (int i = 0; i < renderers.Count; i++)
    {
        var renderer = renderers[i];
        var nMesh = renderer.naniteMesh;
        foreach (var p in nMesh.pageArray)
        {
            maxPartCount += p.parts.Length;
            maxClusterCount += p.clusterArray.Length;
        }

        SceneObject obj = new SceneObject();
        obj.localToWorldMatrix = renderer.transform.localToWorldMatrix;
        // mesh index
        int index = naniteMeshes.IndexOf(nMesh);
        if (index < 0)
        {
            index = naniteMeshes.Count;
            naniteMeshes.Add(nMesh);
        }
        obj.naniteMeshID = index;
        // material indices
        obj.materialIDOffset = materialIndices.Count;
        for (int m = 0; m < renderer.materials.Length; m++)
        {
            var mat = renderer.materials[m];
            int matIndex = materialList.IndexOf(mat);
            if (matIndex < 0)
            {
                matIndex = materialList.Count;
                materialList.Add(mat);
            }
            materialIndices.Add(matIndex);
        }
        gpuObjects[i] = obj;
        renderer.transformChanged = false;
        objectCount++;
    }

    if (candidateClusterBuffer != null)
        candidateClusterBuffer.Dispose();
    candidateClusterBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, maxClusterCount * 2, sizeof(int));

    if (visibleClusterBuffer != null)
        visibleClusterBuffer.Dispose();
    visibleClusterBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, maxClusterCount * 2, sizeof(int));

    if (objectsBuffer != null)
        objectsBuffer.Dispose();
    objectsBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, objectCount, sizeof(SceneObject));
    objectsBuffer.SetData(gpuObjects, 0, 0, objectCount);

    if (visObjectsBuffer != null)
        visObjectsBuffer.Dispose();
    visObjectsBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, objectCount, sizeof(int));

    int vertCount = 0;
    List<NaniteCluster> tempClusters = new List<NaniteCluster>(2048);
    List<NaniteMeshPart> tempParts = new List<NaniteMeshPart>(2048);
    List<NaniteRes> naniteRes = new List<NaniteRes>(2048);
    List<int> tempIndices = new List<int>(2048 * 100);
    List<float> vertexDataList = new List<float>();
    // load pages
    for (int nID = 0; nID < naniteMeshes.Count; nID++)
    {
        NaniteRes res = new NaniteRes();
        var nMesh = naniteMeshes[nID];
        // fill the page data into the GPU-side arrays
        var pages = nMesh.pageArray;
        res.partIndex = tempParts.Count;
        res.partCount = 0;
        res.boundingSphere = nMesh.boundingSphere;
        for (int p = 0; p < pages.Length; p++)
        {
            var page = pages[p];
            var parts = page.parts;
            int vertOffset = vertCount;
            int indicesOffset = tempIndices.Count;
            int clusterOffset = tempClusters.Count;

            // add all clusters
            var clusters = page.clusterArray;
            for (int c = 0; c < clusters.Length; c++)
            {
                var cluster = clusters[c];
                cluster.indiceIndex += indicesOffset;
                cluster.vertexOffset = vertOffset;
                tempClusters.Add(cluster);
            }

            // add all parts
            for (int partIndex = 0; partIndex < parts.Length; partIndex++)
            {
                var part = parts[partIndex];
                part.ClusterStart += clusterOffset;
                tempParts.Add(part);
                res.partCount++;
            }

            // add page data
            tempIndices.AddRange(page.indiceArray);
            vertexDataList.AddRange(page.vertexData);
            vertCount += page.vertexCount;
        }
        naniteRes.Add(res);
    }

    // TODO: update the buffers on the GPU instead
    if (naniteResBuffer != null)
        naniteResBuffer.Dispose();
    naniteResBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, naniteRes.Count, sizeof(NaniteRes));
    naniteResBuffer.SetData(naniteRes);

    if (partsBuffer != null)
        partsBuffer.Dispose();
    partsBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, tempParts.Count, sizeof(NaniteMeshPart));
    partsBuffer.SetData(tempParts);

    if (clusterBuffer != null)
        clusterBuffer.Dispose();
    clusterBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, tempClusters.Count, sizeof(NaniteCluster));
    clusterBuffer.SetData(tempClusters);

    if (indiceseBuffer != null)
        indiceseBuffer.Dispose();
    indiceseBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Raw, tempIndices.Count, sizeof(int));
    indiceseBuffer.SetData(tempIndices);

    if (materialIndexBuffer != null)
        materialIndexBuffer.Dispose();
    materialIndexBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Structured, materialIndices.Count, sizeof(int));
    materialIndexBuffer.SetData(materialIndices);

    if (vertexDataBuffer != null)
        vertexDataBuffer.Dispose();
    vertexDataBuffer = new GraphicsBuffer(GraphicsBuffer.Target.Raw, vertexDataList.Count, sizeof(float));
    vertexDataBuffer.SetData(vertexDataList);
}

// input object ID =>
public unsafe static void UpdateNaniteScene()
{
    if (renderListDirty)
    {
        UpdateRenderList();
        // UpdateRenderListGPU();
        renderListDirty = false;
    }

    for (int i = 0; i < renderers.Count; i++)
    {
        var renderer = renderers[i];
        if (renderer.transformChanged)
        {
            gpuObjects[i].localToWorldMatrix = renderer.transform.localToWorldMatrix;
            renderer.transformChanged = false;
            transformDirty = true;
        }
    }

    if (objectsBuffer != null && transformDirty)
        objectsBuffer.SetData(gpuObjects, 0, 0, objectCount);
}

5. Culling

The offline step has already flattened the clusters into arrays, and these clusters can be culled in parallel. The clever part is that each cluster records both its parent's error and its own: given an error threshold, every cluster can decide on its own whether it is culled, independent of its parents and children.
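A sketch of the selection test: a cluster is drawn exactly when its own error is acceptable at the current threshold but its parent's (coarser) error is not. Because errors are monotone along every leaf-to-root path, the condition holds for exactly one cluster on each path, so each cluster can be tested independently and in parallel.

```python
def select_lod(clusters, tau):
    """Draw a cluster iff its own error passes the threshold but its
    parent's (coarser) error does not; 'parent_err' of a root is
    infinite, 'self_err' of a leaf is zero."""
    return [i for i, c in enumerate(clusters)
            if c['self_err'] <= tau < c['parent_err']]
```

In the compute shader the same comparison runs against a screen-space-projected threshold, but the per-cluster independence is identical.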




Culling starts on the CPU by dispatching the culling compute shaders. Because the maximum Part/Cluster counts over all objects are known when the data is organized, those counts are used directly as the dispatch size.


Object culling:


Culling the Parts found through each object's NaniteMesh:


Cluster culling:



6. Software Rasterization

Omitted here.

7. Visibility Buffer

The VBuffer is mainly used to reduce overdraw: the shader directly outputs InstanceID, ClusterID, and material ID, and shading later reconstructs the vertex data from this VBuffer.



Thanks to GPU-driven rendering, a single DrawProceduralIndirect can draw all objects:

One DrawProceduralIndirect drawing multiple objects


Which attributes the VBuffer stores, and with how many bits, are engineering details I won't dwell on here.
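As an illustration only, here is one possible packing; the layout and bit widths below are hypothetical (UE5's actual VisBuffer packing differs), and the trade-off is simply how many instances, clusters, and triangles you need to address.

```python
def pack_visibility(instance_id, cluster_id, tri_id):
    """Hypothetical 64-bit layout: 24-bit instance | 25-bit cluster | 15-bit triangle."""
    assert instance_id < (1 << 24) and cluster_id < (1 << 25) and tri_id < (1 << 15)
    return (instance_id << 40) | (cluster_id << 15) | tri_id

def unpack_visibility(v):
    """Inverse of pack_visibility: shift and mask each field back out."""
    return v >> 40, (v >> 15) & 0x1FFFFFF, v & 0x7FFF
```

With 128-triangle clusters, 7 bits would already suffice for the triangle ID; spare bits can go to depth or material ID depending on the design.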


8. Shading

With the VBuffer in hand, drawing proceeds per material; the original approach bins material IDs into tiles and draws quads via indirect draws.

Note that the UVs reconstructed from the VBuffer by barycentric interpolation cannot be used to sample textures directly, because the DDX/DDY would be wrong. The derivatives must be recomputed (code below) and the texture sampled with SampleGrad(samplerName, coord2, dpdx, dpdy).

uint MurmurMix(uint Hash)
{
    Hash ^= Hash >> 16;
    Hash *= 0x85ebca6b;
    Hash ^= Hash >> 13;
    Hash *= 0xc2b2ae35;
    Hash ^= Hash >> 16;
    return Hash;
}

float3 IntToColor(uint Index)
{
    uint Hash = MurmurMix(Index);

    float3 Color = float3
    (
        (Hash >> 0) & 255,
        (Hash >> 8) & 255,
        (Hash >> 16) & 255
    );

    return Color * (1.0f / 255.0f);
}

struct FBarycentrics
{
    float3 Value;
    float3 Value_dx;
    float3 Value_dy;
};

float2 Lerp(float2 Value0, float2 Value1, float2 Value2, FBarycentrics Barycentrics, out float2 dx, out float2 dy)
{
    float2 Value = Value0 * Barycentrics.Value.x + Value1 * Barycentrics.Value.y + Value2 * Barycentrics.Value.z;
    dx = Value0 * Barycentrics.Value_dx.x + Value1 * Barycentrics.Value_dx.y + Value2 * Barycentrics.Value_dx.z;
    dy = Value0 * Barycentrics.Value_dy.x + Value1 * Barycentrics.Value_dy.y + Value2 * Barycentrics.Value_dy.z;

    return Value;
}

/** Calculates perspective correct barycentric coordinates and partial derivatives using screen derivatives. */
FBarycentrics CalculateTriangleBarycentrics(float2 PixelClip, float4 PointClip0, float4 PointClip1,
    float4 PointClip2, float2 ViewInvSize)
{
    FBarycentrics Barycentrics;
    PixelClip.y = 1 - PixelClip.y;
    PixelClip.xy = PixelClip.xy * 2 - 1;
    const float3 RcpW = rcp(float3(PointClip0.w, PointClip1.w, PointClip2.w));
    const float3 Pos0 = PointClip0.xyz * RcpW.x;
    const float3 Pos1 = PointClip1.xyz * RcpW.y;
    const float3 Pos2 = PointClip2.xyz * RcpW.z;

    const float3 Pos120X = float3(Pos1.x, Pos2.x, Pos0.x);
    co...
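Since the listing above is cut off, here is a language-neutral sketch of the same core idea: perspective-correct barycentrics are obtained by computing screen-space barycentrics of the pixel and re-weighting them by 1/w (this is the standard construction; the function name and argument layout here are not from the original code). The screen derivatives needed for SampleGrad can then be taken analytically, as the HLSL does, or by re-evaluating at one-pixel offsets.

```python
def perspective_barycentrics(pixel_ndc, clip0, clip1, clip2):
    """Perspective-correct barycentrics of a pixel against a clip-space triangle."""
    def edge(a, b, p):  # signed parallelogram area of (a, b, p) in 2D
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

    # project each clip-space vertex to NDC (x/w, y/w)
    s = [(c[0] / c[3], c[1] / c[3]) for c in (clip0, clip1, clip2)]
    area = edge(s[0], s[1], s[2])
    # screen-space barycentrics (not yet perspective-correct)
    l = [edge(s[1], s[2], pixel_ndc) / area,
         edge(s[2], s[0], pixel_ndc) / area,
         edge(s[0], s[1], pixel_ndc) / area]
    # re-weight by 1/w and renormalize: attribute/w varies linearly in screen space
    rw = [l[i] / c[3] for i, c in enumerate((clip0, clip1, clip2))]
    total = sum(rw)
    return [r / total for r in rw]
```

Interpolating vertex attributes with these corrected weights reproduces the attribute at the true 3D surface point, which is exactly what rasterization hardware does implicitly.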
