4. WebGPU Storage Buffers


Storage buffers are similar to uniform buffers in many ways. If all we did was change UNIFORM to STORAGE in the JavaScript and var<uniform> to var<storage, read> in the WGSL, the example from the previous section would run unchanged and produce the same result.

In fact, without even renaming the variables to something more appropriate, here are the changes:

    const staticUniformBuffer = device.createBuffer({
      label: `static uniforms for obj: ${i}`,
      size: staticUniformBufferSize,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
    });
 
 
...
 
    const uniformBuffer = device.createBuffer({
      label: `changing uniforms for obj: ${i}`,
      size: uniformBufferSize,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
    });

And the change in the WGSL:

@group(0) @binding(0) var<storage, read> ourStruct: OurStruct;
@group(0) @binding(1) var<storage, read> otherStruct: OtherStruct;

With just those small changes, we get the same result as before.

The full code and the result:

HTML:

<!--
 * @Description: 
 * @Author: tianyw
 * @Date: 2022-11-11 12:50:23
 * @LastEditTime: 2023-09-18 21:28:13
 * @LastEditors: tianyw
-->
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>001hello-triangle</title>
    <style>
        html,
        body {
            margin: 0;
            width: 100%;
            height: 100%;
            background: #000;
            color: #fff;
            display: flex;
            text-align: center;
            flex-direction: column;
            justify-content: center;
        }

        div,
        canvas {
            height: 100%;
            width: 100%;
        }
    </style>
</head>

<body>
    <div id="006storage-random-triangle">
        <canvas id="gpucanvas"></canvas>
    </div>
    <script type="module" src="./006storage-random-triangle.ts"></script>

</body>

</html>

TS:

/*
 * @Description:
 * @Author: tianyw
 * @Date: 2023-04-08 20:03:35
 * @LastEditTime: 2023-09-18 21:30:17
 * @LastEditors: tianyw
 */
export type SampleInit = (params: {
  canvas: HTMLCanvasElement;
}) => void | Promise<void>;

import shaderWGSL from "./shaders/shader.wgsl?raw";

const rand = (
  min: undefined | number = undefined,
  max: undefined | number = undefined
) => {
  if (min === undefined) {
    min = 0;
    max = 1;
  } else if (max === undefined) {
    max = min;
    min = 0;
  }
  return min + Math.random() * (max - min);
};

const init: SampleInit = async ({ canvas }) => {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) return;
  const device = await adapter?.requestDevice();
  if (!device) {
    console.error("need a browser that supports WebGPU");
    return;
  }
  const context = canvas.getContext("webgpu");
  if (!context) return;
  const devicePixelRatio = window.devicePixelRatio || 1;
  canvas.width = canvas.clientWidth * devicePixelRatio;
  canvas.height = canvas.clientHeight * devicePixelRatio;
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

  context.configure({
    device,
    format: presentationFormat,
    alphaMode: "premultiplied"
  });

  const shaderModule = device.createShaderModule({
    label: "our hardcoded rgb triangle shaders",
    code: shaderWGSL
  });
  const renderPipeline = device.createRenderPipeline({
    label: "hardcoded rgb triangle pipeline",
    layout: "auto",
    vertex: {
      module: shaderModule,
      entryPoint: "vs"
    },
    fragment: {
      module: shaderModule,
      entryPoint: "fs",
      targets: [
        {
          format: presentationFormat
        }
      ]
    },
    primitive: {
      // topology: "line-list"
      // topology: "line-strip"
      //  topology: "point-list"
      topology: "triangle-list"
      // topology: "triangle-strip"
    }
  });

  const staticUniformBufferSize =
    4 * 4 + // color is 4 32bit floats (4bytes each)
    2 * 4 + // offset is 2 32bit floats (4bytes each)
    2 * 4; // padding
  const uniformBufferSize = 2 * 4; // scale is 2 32 bit floats

  const kColorOffset = 0;
  const kOffsetOffset = 4;

  const kScaleOffset = 0;

  const kNumObjects = 100;
  const objectInfos: {
    scale: number;
    uniformBuffer: GPUBuffer;
    uniformValues: Float32Array;
    bindGroup: GPUBindGroup;
  }[] = [];
  for (let i = 0; i < kNumObjects; ++i) {
    const staticUniformBuffer = device.createBuffer({
      label: `static uniforms for obj: ${i}`,
      size: staticUniformBufferSize,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
    });
    {
      const uniformValues = new Float32Array(staticUniformBufferSize / 4);

      uniformValues.set([rand(), rand(), rand(), 1], kColorOffset); // set the color
      uniformValues.set([rand(-0.9, 0.9), rand(-0.9, 0.9)], kOffsetOffset); // set the offset
      device.queue.writeBuffer(staticUniformBuffer, 0, uniformValues);
    }

    const uniformValues = new Float32Array(uniformBufferSize / 4);
    const uniformBuffer = device.createBuffer({
      label: `changing uniforms for obj: ${i}`,
      size: uniformBufferSize,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
    });

    const bindGroup = device.createBindGroup({
      label: `bind group for obj: ${i}`,
      layout: renderPipeline.getBindGroupLayout(0),
      entries: [
        { binding: 0, resource: { buffer: staticUniformBuffer } },
        { binding: 1, resource: { buffer: uniformBuffer } }
      ]
    });
    objectInfos.push({
      scale: rand(0.2, 0.5),
      uniformBuffer,
      uniformValues,
      bindGroup
    });
  }
  function frame() {
    const aspect = canvas.width / canvas.height;

    const renderCommandEncoder = device.createCommandEncoder({
      label: "render vert frag"
    });
    if (!context) return;

    const textureView = context.getCurrentTexture().createView();
    const renderPassDescriptor: GPURenderPassDescriptor = {
      label: "our basic canvas renderPass",
      colorAttachments: [
        {
          view: textureView,
          clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
          loadOp: "clear",
          storeOp: "store"
        }
      ]
    };
    const renderPass =
      renderCommandEncoder.beginRenderPass(renderPassDescriptor);
    renderPass.setPipeline(renderPipeline);
    for (const {
      scale,
      bindGroup,
      uniformBuffer,
      uniformValues
    } of objectInfos) {
      uniformValues.set([scale / aspect, scale], kScaleOffset); // set the scale
      device.queue.writeBuffer(uniformBuffer, 0, uniformValues);
      renderPass.setBindGroup(0, bindGroup);
      renderPass.draw(3);
    }

    renderPass.end();
    const renderBuffer = renderCommandEncoder.finish();
    device.queue.submit([renderBuffer]);

    requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
};

const canvas = document.getElementById("gpucanvas") as HTMLCanvasElement;
init({ canvas: canvas });

Shader (shaders/shader.wgsl):

struct OurStruct {
    color: vec4f,
    offset: vec2f
};

struct OtherStruct {
    scale: vec2f
};

@group(0) @binding(0) var<storage,read> ourStruct: OurStruct;
@group(0) @binding(1) var<storage,read> otherStruct: OtherStruct;

@vertex 
fn vs(@builtin(vertex_index) vertexIndex: u32) -> @builtin(position) vec4f {
    let pos = array<vec2f, 3>(
        vec2f(0.0, 0.5), // top center
        vec2f(-0.5, -0.5), // bottom left
        vec2f(0.5, -0.5)  // bottom right
    );
   return vec4f(pos[vertexIndex] * otherStruct.scale + ourStruct.offset,0.0,1.0);
}

@fragment
fn fs() -> @location(0) vec4f {
    return ourStruct.color;
}

(screenshot of the rendered result)

Differences between uniform buffers and storage buffers

Their main differences are:

1. Uniform buffers can be faster for their typical use case

It really depends on the use case. A typical app needs to draw lots of different things. Say it's a 3D game: it might draw cars, buildings, rocks, bushes, people, and so on, each of which needs orientation and material properties passed in, just like in the example above. In that case, uniform buffers are the recommended choice.

2. Storage buffers can be much larger than uniform buffers

The minimum maximum size of a uniform buffer is 64KiB.

The minimum maximum size of a storage buffer is 128MiB.

By "minimum maximum" we mean there is a maximum size a buffer of a given type can be. For uniform buffers that maximum is at least 64KiB; for storage buffers it is at least 128MiB. We'll cover limits in another article.
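
As a quick aside (not part of the original example), you can check the actual limits your adapter reports; maxUniformBufferBindingSize and maxStorageBufferBindingSize are the relevant GPUSupportedLimits fields:

async function logBufferLimits() {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) return;
  // The spec guarantees at least the minimums mentioned above.
  console.log(adapter.limits.maxUniformBufferBindingSize); // >= 65536 (64KiB)
  console.log(adapter.limits.maxStorageBufferBindingSize); // >= 134217728 (128MiB)
}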

3. Storage buffers can be read from and written to; uniform buffers are read-only

We saw an example of writing to a storage buffer in the compute shader example in the first article.
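
For reference, here is a minimal sketch (not this example's code) of a compute shader that writes back into a storage buffer. Note var<storage, read_write> instead of var<storage, read>; uniform buffers have no writable equivalent:

const doubleWGSL = /* wgsl */ `
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(1)
fn cs(@builtin(global_invocation_id) id: vec3u) {
  data[id.x] = data[id.x] * 2.0;
}
`;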

Instancing with storage buffers

Given the first two points above, let's take our last example and change it to draw all 100 triangles in a single draw call. This is a use case that might fit storage buffers. I say might because, as in other programming environments, WebGPU offers many ways to accomplish the same thing: array.forEach vs for (const element of array) vs for (let i = 0; i < array.length; ++i). Each has its uses, and the same is true of WebGPU; everything we want to do can be done in multiple ways. When it comes to drawing triangles, all WebGPU cares about is that we return a @builtin(position) value from the vertex shader and a color/value for @location(0) from the fragment shader.

The first thing we'll do is change our storage declarations to runtime-sized arrays:

@group(0) @binding(0) var<storage, read> ourStructs: array<OurStruct>;
@group(0) @binding(1) var<storage, read> otherStructs: array<OtherStruct>;

Then we'll change the vertex shader to use those values:

@vertex fn vs(
  @builtin(vertex_index) vertexIndex : u32,
  @builtin(instance_index) instanceIndex: u32
) -> @builtin(position) vec4f {
  let pos = array(
    vec2f( 0.0,  0.5),  // top center
    vec2f(-0.5, -0.5),  // bottom left
    vec2f( 0.5, -0.5)   // bottom right
  );
 
  let otherStruct = otherStructs[instanceIndex];
  let ourStruct = ourStructs[instanceIndex];
 
   return vec4f(
     pos[vertexIndex] * otherStruct.scale + ourStruct.offset, 0.0, 1.0);
}

We added a new parameter to our vertex shader called instanceIndex and gave it the @builtin(instance_index) attribute, which means it gets the index of the current "instance" from WebGPU for each instance drawn. When we call draw we can pass a second argument for the number of instances, and for each instance drawn, the index of the instance being processed is passed to our function.
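
As a reminder of what that looks like at draw time (the full render loop appears further below):

pass.draw(3, kNumObjects);  // call our vertex shader 3 times for each of kNumObjects instances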

Using instanceIndex we can get a specific struct element from our arrays of structs.

We also need to get the color from the correct array element and use it in the fragment shader. The fragment shader doesn't have access to @builtin(instance_index) because that would make no sense. We could pass it as an inter-stage variable, but it's more common to look up the color in the vertex shader and just pass the color along. To do that we'll use another struct, like we did in the article on inter-stage variables.

struct VSOutput {
  @builtin(position) position: vec4f,
  @location(0) color: vec4f,
}
 
@vertex fn vs(
  @builtin(vertex_index) vertexIndex : u32,
  @builtin(instance_index) instanceIndex: u32
) -> VSOutput {
  let pos = array(
    vec2f( 0.0,  0.5),  // top center
    vec2f(-0.5, -0.5),  // bottom left
    vec2f( 0.5, -0.5)   // bottom right
  );
 
  let otherStruct = otherStructs[instanceIndex];
  let ourStruct = ourStructs[instanceIndex];
 
  var vsOut: VSOutput;
  vsOut.position = vec4f(
      pos[vertexIndex] * otherStruct.scale + ourStruct.offset, 0.0, 1.0);
  vsOut.color = ourStruct.color;
  return vsOut;
}
 
@fragment fn fs(vsOut: VSOutput) -> @location(0) vec4f {
  return vsOut.color;
}

Now that we've modified the WGSL shaders, let's update the JavaScript:

  const kNumObjects = 100;
  const objectInfos = [];
 
  // create 2 storage buffers
  const staticUnitSize =
    4 * 4 + // color is 4 32bit floats (4bytes each)
    2 * 4 + // offset is 2 32bit floats (4bytes each)
    2 * 4;  // padding
  const changingUnitSize =
    2 * 4;  // scale is 2 32bit floats (4bytes each)
  const staticStorageBufferSize = staticUnitSize * kNumObjects;
  const changingStorageBufferSize = changingUnitSize * kNumObjects;
 
  const staticStorageBuffer = device.createBuffer({
    label: 'static storage for objects',
    size: staticStorageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
 
  const changingStorageBuffer = device.createBuffer({
    label: 'changing storage for objects',
    size: changingStorageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
 
  // offsets to the various uniform values in float32 indices
  const kColorOffset = 0;
  const kOffsetOffset = 4;
 
  const kScaleOffset = 0;
 
  {
    const staticStorageValues = new Float32Array(staticStorageBufferSize / 4);
    for (let i = 0; i < kNumObjects; ++i) {
      const staticOffset = i * (staticUnitSize / 4);
 
      // These are only set once so set them now
      staticStorageValues.set([rand(), rand(), rand(), 1], staticOffset + kColorOffset);        // set the color
      staticStorageValues.set([rand(-0.9, 0.9), rand(-0.9, 0.9)], staticOffset + kOffsetOffset);      // set the offset
 
      objectInfos.push({
        scale: rand(0.2, 0.5),
      });
    }
    device.queue.writeBuffer(staticStorageBuffer, 0, staticStorageValues);
  }
 
  // a typed array we can use to update the changingStorageBuffer
  const storageValues = new Float32Array(changingStorageBufferSize / 4);
 
  const bindGroup = device.createBindGroup({
    label: 'bind group for objects',
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: staticStorageBuffer }},
      { binding: 1, resource: { buffer: changingStorageBuffer }},
    ],
  });

Above we create two storage buffers: one for an array of OurStruct and one for an array of OtherStruct.

We then fill in the values for the array of OurStruct with offsets and colors and upload that data to staticStorageBuffer. We create just one bind group that references both buffers.
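
As an aside, here is a hypothetical helper (not in the original code) showing how the Float32Array indices for object i follow from the 32-byte per-object layout above; staticUnitSize, kColorOffset, and kOffsetOffset are the constants defined earlier:

// floatsPerObject is 8: 4 floats of color, 2 of offset, 2 of padding.
// The padding exists because WGSL rounds the array stride of OurStruct
// (a vec4f plus a vec2f) up to 32 bytes.
const floatsPerObject = staticUnitSize / 4;
function staticIndicesFor(i: number) {
  const base = i * floatsPerObject;
  return {
    colorIndex: base + kColorOffset,   // floats base + 0 .. base + 3
    offsetIndex: base + kOffsetOffset, // floats base + 4 .. base + 5
  };
}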

The new rendering code is:

  function render() {
    // Get the current texture from the canvas context and
    // set it as the texture to render to.
    renderPassDescriptor.colorAttachments[0].view =
        context.getCurrentTexture().createView();
 
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginRenderPass(renderPassDescriptor);
    pass.setPipeline(pipeline);
 
    // Set the uniform values in our JavaScript side Float32Array
    const aspect = canvas.width / canvas.height;
 
 
    // set the scales for each object
    objectInfos.forEach(({scale}, ndx) => {
      const offset = ndx * (changingUnitSize / 4);
      storageValues.set([scale / aspect, scale], offset + kScaleOffset); // set the scale
    });
    // upload all scales at once
    device.queue.writeBuffer(changingStorageBuffer, 0, storageValues);
 
    pass.setBindGroup(0, bindGroup);
    pass.draw(3, kNumObjects);  // call our vertex shader 3 times for each instance
 
 
    pass.end();
 
    const commandBuffer = encoder.finish();
    device.queue.submit([commandBuffer]);
  }

The code above will draw kNumObjects instances. For each instance, WebGPU calls the vertex shader 3 times with vertex_index set to 0, 1, 2 and instance_index set to 0 through kNumObjects - 1.

We managed to draw all 100 triangles, each with a different scale, color, and offset, in a single draw call. For situations where you want to draw many instances of the same object, this is one way to do it.

Here is the complete code and the result:

HTML:

<!--
 * @Description: 
 * @Author: tianyw
 * @Date: 2022-11-11 12:50:23
 * @LastEditTime: 2023-09-18 21:28:13
 * @LastEditors: tianyw
-->
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>001hello-triangle</title>
    <style>
        html,
        body {
            margin: 0;
            width: 100%;
            height: 100%;
            background: #000;
            color: #fff;
            display: flex;
            text-align: center;
            flex-direction: column;
            justify-content: center;
        }

        div,
        canvas {
            height: 100%;
            width: 100%;
        }
    </style>
</head>

<body>
    <div id="006storage-random-triangle2">
        <canvas id="gpucanvas"></canvas>
    </div>
    <script type="module" src="./006storage-random-triangle2.ts"></script>

</body>

</html>

TS:

/*
 * @Description:
 * @Author: tianyw
 * @Date: 2023-04-08 20:03:35
 * @LastEditTime: 2023-09-18 22:22:11
 * @LastEditors: tianyw
 */
export type SampleInit = (params: {
  canvas: HTMLCanvasElement;
}) => void | Promise<void>;

import shaderWGSL from "./shaders/shader.wgsl?raw";

const rand = (
  min: undefined | number = undefined,
  max: undefined | number = undefined
) => {
  if (min === undefined) {
    min = 0;
    max = 1;
  } else if (max === undefined) {
    max = min;
    min = 0;
  }
  return min + Math.random() * (max - min);
};

const init: SampleInit = async ({ canvas }) => {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) return;
  const device = await adapter?.requestDevice();
  if (!device) {
    console.error("need a browser that supports WebGPU");
    return;
  }
  const context = canvas.getContext("webgpu");
  if (!context) return;
  const devicePixelRatio = window.devicePixelRatio || 1;
  canvas.width = canvas.clientWidth * devicePixelRatio;
  canvas.height = canvas.clientHeight * devicePixelRatio;
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

  context.configure({
    device,
    format: presentationFormat,
    alphaMode: "premultiplied"
  });

  const shaderModule = device.createShaderModule({
    label: "our hardcoded rgb triangle shaders",
    code: shaderWGSL
  });
  const renderPipeline = device.createRenderPipeline({
    label: "hardcoded rgb triangle pipeline",
    layout: "auto",
    vertex: {
      module: shaderModule,
      entryPoint: "vs"
    },
    fragment: {
      module: shaderModule,
      entryPoint: "fs",
      targets: [
        {
          format: presentationFormat
        }
      ]
    },
    primitive: {
      // topology: "line-list"
      // topology: "line-strip"
      //  topology: "point-list"
      topology: "triangle-list"
      // topology: "triangle-strip"
    }
  });

  const kNumObjects = 100;

  const staticStorageUnitSize =
    4 * 4 + // color is 4 32bit floats (4bytes each)
    2 * 4 + // offset is 2 32bit floats (4bytes each)
    2 * 4; // padding
  const storageUnitSize = 2 * 4; // scale is 2 32 bit floats

  const staticStorageBufferSize = staticStorageUnitSize * kNumObjects;
  const storageBufferSize = storageUnitSize * kNumObjects;

  const staticStorageBuffer = device.createBuffer({
    label: "static storage for objects",
    size: staticStorageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
  });

  const storageBuffer = device.createBuffer({
    label: "changing storage for objects",
    size: storageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
  });

  const staticStorageValues = new Float32Array(staticStorageBufferSize / 4);
  const storageValues = new Float32Array(storageBufferSize / 4);

  const kColorOffset = 0;
  const kOffsetOffset = 4;

  const kScaleOffset = 0;

  const objectInfos: {
    scale: number;
  }[] = [];
  for (let i = 0; i < kNumObjects; ++i) {
    const staticOffset = i * (staticStorageUnitSize / 4);

    staticStorageValues.set(
      [rand(), rand(), rand(), 1],
      staticOffset + kColorOffset
    );
    staticStorageValues.set(
      [rand(-0.9, 0.9), rand(-0.9, 0.9)],
      staticOffset + kOffsetOffset
    );

    objectInfos.push({
      scale: rand(0.2, 0.5)
    });
  }
  device.queue.writeBuffer(staticStorageBuffer, 0, staticStorageValues);
  const bindGroup = device.createBindGroup({
    label: "bind group for objects",
    layout: renderPipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: staticStorageBuffer } },
      { binding: 1, resource: { buffer: storageBuffer } }
    ]
  });

  function frame() {
    const aspect = canvas.width / canvas.height;

    const renderCommandEncoder = device.createCommandEncoder({
      label: "render vert frag"
    });
    if (!context) return;

    const textureView = context.getCurrentTexture().createView();
    const renderPassDescriptor: GPURenderPassDescriptor = {
      label: "our basic canvas renderPass",
      colorAttachments: [
        {
          view: textureView,
          clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
          loadOp: "clear",
          storeOp: "store"
        }
      ]
    };
    const renderPass =
      renderCommandEncoder.beginRenderPass(renderPassDescriptor);
    renderPass.setPipeline(renderPipeline);
    objectInfos.forEach(({ scale }, ndx) => {
      const offset = ndx * (storageUnitSize / 4);
      storageValues.set([scale / aspect, scale], offset + kScaleOffset); // set the scale
    });
    device.queue.writeBuffer(storageBuffer, 0, storageValues);
    renderPass.setBindGroup(0, bindGroup);
    renderPass.draw(3, kNumObjects);
    renderPass.end();
    const renderBuffer = renderCommandEncoder.finish();
    device.queue.submit([renderBuffer]);

    requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
};

const canvas = document.getElementById("gpucanvas") as HTMLCanvasElement;
init({ canvas: canvas });

Shader (shaders/shader.wgsl):

struct OurStruct {
    color: vec4f,
    offset: vec2f
};

struct OtherStruct {
    scale: vec2f
};

struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) color: vec4f
};

@group(0) @binding(0) var<storage,read> ourStructs: array<OurStruct>;
@group(0) @binding(1) var<storage,read> otherStructs: array<OtherStruct>;

@vertex 
fn vs(@builtin(vertex_index) vertexIndex: u32,@builtin(instance_index) instanceIndex: u32) -> VSOutput {
    let pos = array<vec2f, 3>(
    vec2f(0.0, 0.5), // top center
    vec2f(-0.5, -0.5), // bottom left
    vec2f(0.5, -0.5)  // bottom right
);
    let otherStruct = otherStructs[instanceIndex];
    let ourStruct = ourStructs[instanceIndex];
    
    var vsOut: VSOutput;
    vsOut.position = vec4f(pos[vertexIndex] * otherStruct.scale + ourStruct.offset,0.0,1.0);
    vsOut.color = ourStruct.color;
   return vsOut;
}

@fragment
fn fs(vsOut: VSOutput) -> @location(0) vec4f {
    return vsOut.color;
}

(screenshot of the rendered result)

Using storage buffers for vertex data

Up to this point we've used vertex data hard-coded directly in the shader. One use case for storage buffers is to store vertex data. Just as we indexed the storage buffers above with instance_index, we can index another storage buffer with vertex_index to get vertex data.

struct OurStruct {
  color: vec4f,
  offset: vec2f,
};
 
struct OtherStruct {
  scale: vec2f,
};
 
struct Vertex {
  position: vec2f,
};
 
struct VSOutput {
  @builtin(position) position: vec4f,
  @location(0) color: vec4f,
};
 
@group(0) @binding(0) var<storage, read> ourStructs: array<OurStruct>;
@group(0) @binding(1) var<storage, read> otherStructs: array<OtherStruct>;
@group(0) @binding(2) var<storage, read> pos: array<Vertex>;
 
@vertex fn vs(
  @builtin(vertex_index) vertexIndex : u32,
  @builtin(instance_index) instanceIndex: u32
) -> VSOutput {
 
  let otherStruct = otherStructs[instanceIndex];
  let ourStruct = ourStructs[instanceIndex];
 
  var vsOut: VSOutput;
  vsOut.position = vec4f(
      pos[vertexIndex].position * otherStruct.scale + ourStruct.offset, 0.0, 1.0);
  vsOut.color = ourStruct.color;
  return vsOut;
}
 
@fragment fn fs(vsOut: VSOutput) -> @location(0) vec4f {
  return vsOut.color;
}

Now we need to set up one more storage buffer with some vertex data. First let's make a function to generate some vertex data; let's make a circle.

function createCircleVertices({
  radius = 1,
  numSubdivisions = 24,
  innerRadius = 0,
  startAngle = 0,
  endAngle = Math.PI * 2,
} = {}) {
  // 2 triangles per subdivision, 3 verts per tri, 2 values (xy) each.
  const numVertices = numSubdivisions * 3 * 2;
  const vertexData = new Float32Array(numSubdivisions * 2 * 3 * 2);
 
  let offset = 0;
  const addVertex = (x, y) => {
    vertexData[offset++] = x;
    vertexData[offset++] = y;
  };
 
  // 2 vertices per subdivision
  //
  // 0--1 4
  // | / /|
  // |/ / |
  // 2 3--5
  for (let i = 0; i < numSubdivisions; ++i) {
    const angle1 = startAngle + (i + 0) * (endAngle - startAngle) / numSubdivisions;
    const angle2 = startAngle + (i + 1) * (endAngle - startAngle) / numSubdivisions;
 
    const c1 = Math.cos(angle1);
    const s1 = Math.sin(angle1);
    const c2 = Math.cos(angle2);
    const s2 = Math.sin(angle2);
 
    // first triangle
    addVertex(c1 * radius, s1 * radius);
    addVertex(c2 * radius, s2 * radius);
    addVertex(c1 * innerRadius, s1 * innerRadius);
 
    // second triangle
    addVertex(c1 * innerRadius, s1 * innerRadius);
    addVertex(c2 * radius, s2 * radius);
    addVertex(c2 * innerRadius, s2 * innerRadius);
  }
 
  return {
    vertexData,
    numVertices,
  };
}

The code above makes a circle from triangles like this:

(diagram of the circle built from triangle subdivisions)

We can use that to fill a storage buffer with the vertices of a circle:

  // setup a storage buffer with vertex data
  const { vertexData, numVertices } = createCircleVertices({
    radius: 0.5,
    innerRadius: 0.25,
  });
  const vertexStorageBuffer = device.createBuffer({
    label: 'storage buffer vertices',
    size: vertexData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  });
  device.queue.writeBuffer(vertexStorageBuffer, 0, vertexData);

Then we need to add it to our bind group:

  const bindGroup = device.createBindGroup({
    label: 'bind group for objects',
    layout: pipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: staticStorageBuffer }},
      { binding: 1, resource: { buffer: changingStorageBuffer }},
      { binding: 2, resource: { buffer: vertexStorageBuffer }},
    ],
  });

And finally, at render time, we need to draw all the vertices in the circle:

pass.draw(numVertices, kNumObjects);

Here is the complete code and the result:

HTML:

<!--
 * @Description: 
 * @Author: tianyw
 * @Date: 2022-11-11 12:50:23
 * @LastEditTime: 2023-09-19 21:12:49
 * @LastEditors: tianyw
-->
<!DOCTYPE html>
<html lang="en">

<head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>001hello-triangle</title>
    <style>
        html,
        body {
            margin: 0;
            width: 100%;
            height: 100%;
            background: #000;
            color: #fff;
            display: flex;
            text-align: center;
            flex-direction: column;
            justify-content: center;
        }

        div,
        canvas {
            height: 100%;
            width: 100%;
        }
    </style>
</head>

<body>
    <div id="006storage-srandom-circle">
        <canvas id="gpucanvas"></canvas>
    </div>
    <script type="module" src="./006storage-srandom-circle.ts"></script>

</body>

</html>

TS:

/*
 * @Description:
 * @Author: tianyw
 * @Date: 2023-04-08 20:03:35
 * @LastEditTime: 2023-09-19 21:28:58
 * @LastEditors: tianyw
 */
export type SampleInit = (params: {
  canvas: HTMLCanvasElement;
}) => void | Promise<void>;

import shaderWGSL from "./shaders/shader.wgsl?raw";

const rand = (
  min: undefined | number = undefined,
  max: undefined | number = undefined
) => {
  if (min === undefined) {
    min = 0;
    max = 1;
  } else if (max === undefined) {
    max = min;
    min = 0;
  }
  return min + Math.random() * (max - min);
};

function createCircleVertices({
  radius = 1,
  numSubdivisions = 24,
  innerRadius = 0,
  startAngle = 0,
  endAngle = Math.PI * 2
} = {}) {
  // 2 triangles per subdivision, 3 verts per tri, 2 values(xy) each
  const numVertices = numSubdivisions * 3 * 2;
  const vertexData = new Float32Array(numSubdivisions * 2 * 3 * 2);

  let offset = 0;
  const addVertex = (x: number, y: number) => {
    vertexData[offset++] = x;
    vertexData[offset++] = y;
  };

  // 2 vertices per subdivision
  for (let i = 0; i < numSubdivisions; ++i) {
    const angle1 =
      startAngle + ((i + 0) * (endAngle - startAngle)) / numSubdivisions;
    const angle2 =
      startAngle + ((i + 1) * (endAngle - startAngle)) / numSubdivisions;

    const c1 = Math.cos(angle1);
    const s1 = Math.sin(angle1);
    const c2 = Math.cos(angle2);
    const s2 = Math.sin(angle2);

    // first triangle
    addVertex(c1 * radius, s1 * radius);
    addVertex(c2 * radius, s2 * radius);
    addVertex(c1 * innerRadius, s1 * innerRadius);

    // second triangle
    addVertex(c1 * innerRadius, s1 * innerRadius);
    addVertex(c2 * radius, s2 * radius);
    addVertex(c2 * innerRadius, s2 * innerRadius);
  }

  return {
    vertexData,
    numVertices
  };
}

const init: SampleInit = async ({ canvas }) => {
  const adapter = await navigator.gpu?.requestAdapter();
  if (!adapter) return;
  const device = await adapter?.requestDevice();
  if (!device) {
    console.error("need a browser that supports WebGPU");
    return;
  }
  const context = canvas.getContext("webgpu");
  if (!context) return;
  const devicePixelRatio = window.devicePixelRatio || 1;
  canvas.width = canvas.clientWidth * devicePixelRatio;
  canvas.height = canvas.clientHeight * devicePixelRatio;
  const presentationFormat = navigator.gpu.getPreferredCanvasFormat();

  context.configure({
    device,
    format: presentationFormat,
    alphaMode: "premultiplied"
  });

  const shaderModule = device.createShaderModule({
    label: "our hardcoded rgb triangle shaders",
    code: shaderWGSL
  });
  const renderPipeline = device.createRenderPipeline({
    label: "hardcoded rgb triangle pipeline",
    layout: "auto",
    vertex: {
      module: shaderModule,
      entryPoint: "vs"
    },
    fragment: {
      module: shaderModule,
      entryPoint: "fs",
      targets: [
        {
          format: presentationFormat
        }
      ]
    },
    primitive: {
      // topology: "line-list"
      // topology: "line-strip"
      //  topology: "point-list"
      topology: "triangle-list"
      // topology: "triangle-strip"
    }
  });

  const kNumObjects = 100;

  const staticStorageUnitSize =
    4 * 4 + // color is 4 32bit floats (4bytes each)
    2 * 4 + // offset is 2 32bit floats (4bytes each)
    2 * 4; // padding
  const storageUnitSize = 2 * 4; // scale is 2 32 bit floats

  const staticStorageBufferSize = staticStorageUnitSize * kNumObjects;
  const storageBufferSize = storageUnitSize * kNumObjects;

  const staticStorageBuffer = device.createBuffer({
    label: "static storage for objects",
    size: staticStorageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
  });

  const storageBuffer = device.createBuffer({
    label: "changing storage for objects",
    size: storageBufferSize,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
  });

  const staticStorageValues = new Float32Array(staticStorageBufferSize / 4);
  const storageValues = new Float32Array(storageBufferSize / 4);

  const kColorOffset = 0;
  const kOffsetOffset = 4;

  const kScaleOffset = 0;

  const objectInfos: {
    scale: number;
  }[] = [];
  for (let i = 0; i < kNumObjects; ++i) {
    const staticOffset = i * (staticStorageUnitSize / 4);

    staticStorageValues.set(
      [rand(), rand(), rand(), 1],
      staticOffset + kColorOffset
    );
    staticStorageValues.set(
      [rand(-0.9, 0.9), rand(-0.9, 0.9)],
      staticOffset + kOffsetOffset
    );

    objectInfos.push({
      scale: rand(0.2, 0.5)
    });
  }
  device.queue.writeBuffer(staticStorageBuffer, 0, staticStorageValues);

  const { vertexData, numVertices } = createCircleVertices({
    radius: 0.5,
    innerRadius: 0.25
  });
  const vertexStorageBuffer = device.createBuffer({
    label: "storage buffer vertices",
    size: vertexData.byteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST
  });
  device.queue.writeBuffer(vertexStorageBuffer, 0, vertexData);

  const bindGroup = device.createBindGroup({
    label: "bind group for objects",
    layout: renderPipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: staticStorageBuffer } },
      { binding: 1, resource: { buffer: storageBuffer } },
      { binding: 2, resource: { buffer: vertexStorageBuffer } }
    ]
  });

  function frame() {
    const aspect = canvas.width / canvas.height;

    const renderCommandEncoder = device.createCommandEncoder({
      label: "render vert frag"
    });
    if (!context) return;

    const textureView = context.getCurrentTexture().createView();
    const renderPassDescriptor: GPURenderPassDescriptor = {
      label: "our basic canvas renderPass",
      colorAttachments: [
        {
          view: textureView,
          clearValue: { r: 0.0, g: 0.0, b: 0.0, a: 1.0 },
          loadOp: "clear",
          storeOp: "store"
        }
      ]
    };
    const renderPass =
      renderCommandEncoder.beginRenderPass(renderPassDescriptor);
    renderPass.setPipeline(renderPipeline);
    objectInfos.forEach(({ scale }, ndx) => {
      const offset = ndx * (storageUnitSize / 4);
      storageValues.set([scale / aspect, scale], offset + kScaleOffset); // set the scale
    });
    device.queue.writeBuffer(storageBuffer, 0, storageValues);
    renderPass.setBindGroup(0, bindGroup);
    renderPass.draw(numVertices, kNumObjects);
    renderPass.end();
    const renderBuffer = renderCommandEncoder.finish();
    device.queue.submit([renderBuffer]);

    requestAnimationFrame(frame);
  }

  requestAnimationFrame(frame);
};

const canvas = document.getElementById("gpucanvas") as HTMLCanvasElement;
init({ canvas: canvas });

Shader (shaders/shader.wgsl):

struct OurStruct {
    color: vec4f,
    offset: vec2f
};

struct OtherStruct {
    scale: vec2f
};

struct Vertex {
    position: vec2f
}

struct VSOutput {
    @builtin(position) position: vec4f,
    @location(0) color: vec4f
};

@group(0) @binding(0) var<storage,read> ourStructs: array<OurStruct>;
@group(0) @binding(1) var<storage,read> otherStructs: array<OtherStruct>;
@group(0) @binding(2) var<storage,read> pos: array<Vertex>;

@vertex 
fn vs(@builtin(vertex_index) vertexIndex: u32, @builtin(instance_index) instanceIndex: u32) -> VSOutput {
    let otherStruct = otherStructs[instanceIndex];
    let ourStruct = ourStructs[instanceIndex];

    var vsOut: VSOutput;
    vsOut.position = vec4f(pos[vertexIndex].position * otherStruct.scale + ourStruct.offset, 0.0, 1.0);
    vsOut.color = ourStruct.color;
    return vsOut;
}

@fragment
fn fs(vsOut: VSOutput) -> @location(0) vec4f {
    return vsOut.color;
}

(screenshot of the rendered result)

Passing vertex data through storage buffers has been growing in popularity, though I'm told that on some older devices it's slower than the classic approach, which we'll cover in the next article on vertex buffers.
