Preface
This article continues the earlier《【祥哥带你玩HoloLens开发】了解 DirectX 全息应用开发》series on DirectX-based holographic application development.
This time we build a VR app that plays side-by-side (left/right format) 3D images. It applies the binocular stereo vision math introduced in the previous article, 【祥哥带你玩HoloLens开发】了解 DirectX 全息应用开发(三)了解Hololens的3D全息视觉呈现原理, to make the left and right HoloLens displays show the left half and the right half of the image respectively, which produces a 3D effect for the viewer.
Development environment
Visual Studio 2015 update3
Visual C#
HoloLens emulator (the emulator package includes the Holographic DirectX 11 App (Universal Windows) project template)
The four 3D images used by the program, test0.jpg, test1.jpg, test2.jpg and test3.jpg, can be captured from any side-by-side 3D video, or downloaded from the Assets directory of the VR-Player project in the Microsoft HoloLens 中国社区 GitHub repository.
1. Create the VR-Player project
The VS New Project dialog contains two DirectX holographic app templates, one for C# and one for C++. Under each language, the template named Holographic DirectX 11 App (Universal Windows) can be found under Windows > Universal > Holographic. Here I use the C# template to create a project named VR-Player.
Select x86 and run it on a HoloLens device or in the emulator; you should see a spinning colored cube.
2. Prepare the resources
I captured four frames from a side-by-side 3D video to use as the images we will display: test0.jpg, test1.jpg, test2.jpg and test3.jpg, and put them into the project's Assets directory. Note that in VS, simply copying the files into the project folder is not enough: unless you add them to the project as described below, they will not be included when the app package is generated.
In Solution Explorer, right-click the Assets folder of the VR-Player project and choose Add > Existing Item (Shift+Alt+A); in the file dialog, select the four images placed in the Assets directory and click Add to bring them into the project.
3. Modify Common\CameraResources.cs
Why CameraResources needs to be modified was explained in detail in the previous article 【祥哥带你玩HoloLens开发】了解 DirectX 全息应用开发(三)了解Hololens的3D全息视觉呈现原理. If anything there is still unclear, please study that part first; here we go straight to the implementation.
The change to CameraResources is very small, and it is all that is needed to show different content on the left and right displays.
Find the following two statements in the public void UpdateViewProjectionBuffer method:
viewProjectionConstantBufferData.viewProjectionLeft = Matrix4x4.Transpose(
viewCoordinateSystemTransform.Left * cameraProjectionTransform.Left
);
viewProjectionConstantBufferData.viewProjectionRight = Matrix4x4.Transpose(
viewCoordinateSystemTransform.Right * cameraProjectionTransform.Right
);
These two statements drive the two cameras of this holographic scene, i.e. what the left and right displays actually show. Replace them with the following four statements:
// Create the translation matrices that place the centers of the two quads we will build below
// (i.e. the focal points of the left and right cameras); this offsets the default center point.
Matrix4x4 translationL = Matrix4x4.CreateTranslation(0.2f, 0f, 0);
Matrix4x4 translationR = Matrix4x4.CreateTranslation(-2.1f, 0f, 0);
viewProjectionConstantBufferData.viewProjectionLeft = Matrix4x4.Transpose(
cameraProjectionTransform.Left * translationL
);
viewProjectionConstantBufferData.viewProjectionRight = Matrix4x4.Transpose(
cameraProjectionTransform.Right * translationR
);
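Two things are worth noting about this change. First, the view (coordinate-system) transform is dropped, so the quads are effectively defined in camera space and follow the user's head instead of staying anchored in the room, which is what we want for an image viewer. Second, the two x-offsets decide where each quad sits in front of its eye. If you want to experiment with them, a minimal sketch (the constant names are hypothetical; the values are the ones used above) looks like this:
// Hypothetical tuning constants, not part of the original project.
private const float LeftEyeOffsetX = 0.2f;
private const float RightEyeOffsetX = -2.1f;
// Inside UpdateViewProjectionBuffer:
Matrix4x4 translationL = Matrix4x4.CreateTranslation(LeftEyeOffsetX, 0f, 0f);
Matrix4x4 translationR = Matrix4x4.CreateTranslation(RightEyeOffsetX, 0f, 0f);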
4. Add the VR_Player.Content.TexturedVertex struct
In the existing Content\ShaderStructures.cs file, add a TexturedVertex struct. It defines the per-vertex data (position plus texture coordinate) for the model used to display the image. The code is as follows:
using System.Numerics;
namespace VR_Player.Content
{
// ... (existing code omitted) ...
internal struct TexturedVertex
{
/// <summary>
/// Position
/// </summary>
public Vector3 Position;
/// <summary>
/// Texture coordinate
/// </summary>
public Vector2 TextureCoordinate;
/// <summary>
/// Constructor
/// </summary>
/// <param name="position">Position</param>
/// <param name="textureCoordinate">Texture coordinate</param>
public TexturedVertex(Vector3 position, Vector2 textureCoordinate)
{
Position = position;
TextureCoordinate = textureCoordinate;
}
}
}
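As a quick, purely illustrative sanity check: the vertex stride that Render() will later pass to the input assembler is just the size of this struct, 12 bytes (Vector3) + 8 bytes (Vector2) = 20 bytes.
// Illustrative only: confirm the stride used later in Render().
int stride = SharpDX.Utilities.SizeOf<VR_Player.Content.TexturedVertex>();
System.Diagnostics.Debug.Assert(stride == 20);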
5. Implement the TextureLoader class
Implement a TextureLoader class that loads an image for use as a model texture. It exposes a single FromBitmapFile method, which works in three steps:
5.1 Load an image and convert it to a SharpDX.WIC.BitmapSource via SharpDX.WIC.FormatConverter
5.2 Create a 2D texture, a SharpDX.Direct3D11.Texture2D object, from the BitmapSource
5.3 Create a SharpDX.Direct3D11.ShaderResourceView object from the 2D texture and return it
The code is as follows:
namespace VR_Player.Content
{
class TextureLoader
{
/// <summary>
/// Loads a bitmap using WIC.
/// </summary>
/// <param name="factory">The WIC imaging factory</param>
/// <param name="filename">Path of the image file to load</param>
/// <returns>A 32bpp PRGBA BitmapSource</returns>
public static SharpDX.WIC.BitmapSource LoadBitmap(SharpDX.WIC.ImagingFactory2 factory, string filename)
{
//filename = Windows.ApplicationModel.Package.Current.InstalledLocation.Path + filename;
var bitmapDecoder = new SharpDX.WIC.BitmapDecoder(
factory,
filename,
SharpDX.WIC.DecodeOptions.CacheOnDemand
);
var formatConverter = new SharpDX.WIC.FormatConverter(factory);
formatConverter.Initialize(
bitmapDecoder.GetFrame(0),
SharpDX.WIC.PixelFormat.Format32bppPRGBA,
SharpDX.WIC.BitmapDitherType.None,
null,
0.0,
SharpDX.WIC.BitmapPaletteType.Custom);
return formatConverter;
}
/// <summary>
/// Creates a <see cref="SharpDX.Direct3D11.Texture2D"/> from a WIC <see cref="SharpDX.WIC.BitmapSource"/>
/// </summary>
/// <param name="device">The Direct3D11 device</param>
/// <param name="bitmapSource">The WIC bitmap source</param>
/// <returns>A Texture2D</returns>
public static SharpDX.Direct3D11.Texture2D CreateTexture2DFromBitmap(SharpDX.Direct3D11.Device device, SharpDX.WIC.BitmapSource bitmapSource)
{
// Allocate DataStream to receive the WIC image pixels
int stride = bitmapSource.Size.Width * 4;
using (var buffer = new SharpDX.DataStream(bitmapSource.Size.Height * stride, true, true))
{
// Copy the content of the WIC to the buffer
bitmapSource.CopyPixels(stride, buffer);
return new SharpDX.Direct3D11.Texture2D(device, new SharpDX.Direct3D11.Texture2DDescription()
{
Width = bitmapSource.Size.Width,
Height = bitmapSource.Size.Height,
ArraySize = 1,
BindFlags = SharpDX.Direct3D11.BindFlags.ShaderResource,
Usage = SharpDX.Direct3D11.ResourceUsage.Immutable,
CpuAccessFlags = SharpDX.Direct3D11.CpuAccessFlags.None,
Format = SharpDX.DXGI.Format.R8G8B8A8_UNorm,
MipLevels = 1,
OptionFlags = SharpDX.Direct3D11.ResourceOptionFlags.None,
SampleDescription = new SharpDX.DXGI.SampleDescription(1, 0),
}, new SharpDX.DataRectangle(buffer.DataPointer, stride));
}
}
public static SharpDX.Direct3D11.ShaderResourceView FromBitmapFile(Common.DeviceResources deviceResources, string bitmapFile)
{
using (var bitmap = TextureLoader.LoadBitmap(deviceResources.WicImagingFactory, bitmapFile))
using (var texture2D = TextureLoader.CreateTexture2DFromBitmap(deviceResources.D3DDevice, bitmap))
{
SharpDX.Direct3D11.ShaderResourceView textureView = new SharpDX.Direct3D11.ShaderResourceView(deviceResources.D3DDevice, texture2D);
return textureView;
}
}
}
}
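As a quick usage sketch (assuming a Common.DeviceResources instance named deviceResources is at hand, as it is inside the renderer class), loading one of the images added in step 2 looks like this:
// Illustrative only: load one of the sample images from Assets and keep the
// resulting view for binding to the pixel shader later.
SharpDX.Direct3D11.ShaderResourceView textureView =
VR_Player.Content.TextureLoader.FromBitmapFile(deviceResources, "Assets\\test0.jpg");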
6. Modify the SpinningCubeRenderer class
6.1 Import the SharpDX.Direct3D11 namespace at the top of the file by adding the following line below using Windows.UI.Input.Spatial;:
using SharpDX.Direct3D11;
6.2 Declare a private field textureView of type ShaderResourceView and a private field texture of type Texture2D in the SpinningCubeRenderer class:
private ShaderResourceView textureView;
private Texture2D texture;
6.3 Modify the Update() method so that the model, which previously rotated every frame, no longer rotates or tilts and stays at a fixed position:
public void Update(StepTimer timer)
{
// Rotate the cube.
// Convert degrees to radians, then convert seconds to rotation angle.
//float radiansPerSecond = this.degreesPerSecond * ((float)Math.PI / 180.0f);
//double totalRotation = timer.TotalSeconds * radiansPerSecond;
//float radians = (float)System.Math.IEEERemainder(totalRotation, 2 * Math.PI);
//Matrix4x4 modelRotation = Matrix4x4.CreateFromAxisAngle(new Vector3(0, 1, 0), -radians);
// Position the cube.
//Matrix4x4 modelTranslation = Matrix4x4.CreateTranslation(position);
// Multiply to get the transform matrix.
// Note that this transform does not enforce a particular coordinate system. The calling
// class is responsible for rendering this content in a consistent manner.
//Matrix4x4 modelTransform = modelRotation * modelTranslation;
// Comment out the statements above and add this single line instead.
Matrix4x4 modelTransform = Matrix4x4.CreateTranslation(position);
// The view and projection matrices are provided by the system; they are associated
// with holographic cameras, and updated on a per-camera basis.
// Here, we provide the model transform for the sample hologram. The model transform
// matrix is transposed to prepare it for the shader.
this.modelConstantBufferData.model = Matrix4x4.Transpose(modelTransform);
// Loading is asynchronous. Resources must be created before they can be updated.
if (!loadingComplete)
{
return;
}
// Use the D3D device context to update Direct3D device-based resources.
var context = this.deviceResources.D3DDeviceContext;
// Update the model transform buffer for the hologram.
context.UpdateSubresource(ref this.modelConstantBufferData, this.modelConstantBuffer);
}
6.4 Modify the Render() method, replacing the color-based pixel shading with textured shading:
public void Render()
{
// Loading is asynchronous. Resources must be created before drawing can occur.
if (!this.loadingComplete)
{
return;
}
var context = this.deviceResources.D3DDeviceContext;
// Each vertex is one instance of the TexturedVertex struct.
// Change 1: replace VertexPositionColor with TexturedVertex.
int stride = SharpDX.Utilities.SizeOf<TexturedVertex>();
int offset = 0;
var bufferBinding = new SharpDX.Direct3D11.VertexBufferBinding(this.vertexBuffer, stride, offset);
context.InputAssembler.SetVertexBuffers(0, bufferBinding);
context.InputAssembler.SetIndexBuffer(
this.indexBuffer,
SharpDX.DXGI.Format.R16_UInt, // Each index is one 16-bit unsigned integer (short).
0);
context.InputAssembler.PrimitiveTopology = SharpDX.Direct3D.PrimitiveTopology.TriangleList;
context.InputAssembler.InputLayout = this.inputLayout;
// Attach the vertex shader.
context.VertexShader.SetShader(this.vertexShader, null, 0);
// Apply the model constant buffer to the vertex shader.
context.VertexShader.SetConstantBuffers(0, this.modelConstantBuffer);
if (!this.usingVprtShaders)
{
// On devices that do not support the D3D11_FEATURE_D3D11_OPTIONS3::
// VPAndRTArrayIndexFromAnyShaderFeedingRasterizer optional feature,
// a pass-through geometry shader is used to set the render target
// array index.
context.GeometryShader.SetShader(this.geometryShader, null, 0);
}
// Attach the pixel shader.
context.PixelShader.SetShader(this.pixelShader, null, 0);
// Change 2: bind the texture's shader resource view to the pixel shader stage.
context.PixelShader.SetShaderResource(0, textureView);
// Draw the objects.
context.DrawIndexedInstanced(
indexCount, // Index count per instance.
2, // Instance count.
0, // Start index location.
0, // Base vertex location.
0 // Start instance location.
);
}
Next we modify CreateDeviceDependentResourcesAsync(), the method that creates the device-dependent resources.
6.5 Modify the vertex input description
SharpDX.Direct3D11.InputElement[] vertexDesc =
{
new SharpDX.Direct3D11.InputElement("POSITION", 0, SharpDX.DXGI.Format.R32G32B32_Float, 0, 0, SharpDX.Direct3D11.InputClassification.PerVertexData, 0),
new SharpDX.Direct3D11.InputElement("COLOR", 0, SharpDX.DXGI.Format.R32G32B32_Float, 12, 0, SharpDX.Direct3D11.InputClassification.PerVertexData, 0),
};
Change it slightly to:
SharpDX.Direct3D11.InputElement[] vertexDesc =
{
new SharpDX.Direct3D11.InputElement("POSITION", 0, SharpDX.DXGI.Format.R32G32B32_Float, 0, 0, SharpDX.Direct3D11.InputClassification.PerVertexData, 0),
new SharpDX.Direct3D11.InputElement("TEXCOORD", 0, SharpDX.DXGI.Format.R32G32_Float, 12, 0, SharpDX.Direct3D11.InputClassification.PerVertexData, 0),
};
6.6 Replace the original cube model with two rectangular quads by redefining the cubeVertices vertex array (previously of type VertexPositionColor[]):
TexturedVertex[] cubeVertices =
{
new TexturedVertex(new Vector3(-0.526f, 0.296f, 0), new Vector2(0, 0)),
new TexturedVertex(new Vector3( 0.526f, 0.296f, -0.01f), new Vector2(0.5f, 0)),
new TexturedVertex(new Vector3( 0.526f, -0.296f, -0.01f), new Vector2(0.5f, 1)),
new TexturedVertex(new Vector3(-0.526f, -0.296f, 0), new Vector2(0, 1)),
new TexturedVertex(new Vector3(0.527f, 0.296f, -0.01f), new Vector2(0.5f, 0)),
new TexturedVertex(new Vector3(1.579f, 0.296f, 0), new Vector2(1, 0)),
new TexturedVertex(new Vector3(1.579f, -0.296f, 0), new Vector2(1, 1)),
new TexturedVertex(new Vector3(0.527f, -0.296f, -0.01f), new Vector2(0.5f, 1)),
};
6.7 Change the cubeIndices index array to:
ushort[] cubeIndices =
{
0,1,2, 0,2,3, //left
4,5,6, 4,6,7, //right
};
With steps 6.6 and 6.7 the model changes are complete.
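The trick is entirely in the texture coordinates: the left quad (vertices 0-3) samples u from 0 to 0.5, i.e. the left half of the side-by-side image, while the right quad (vertices 4-7) samples u from 0.5 to 1, i.e. the right half. If you ever need to generate such quad pairs for other image sizes, a simplified hypothetical helper (flat quads, ignoring the slight z offset used above) might look like this:
// Hypothetical helper, not in the original project: build one quad of a
// side-by-side pair. uStart/uEnd select which half of the image it samples.
private static TexturedVertex[] BuildQuad(float left, float right, float halfHeight, float uStart, float uEnd)
{
return new TexturedVertex[]
{
new TexturedVertex(new Vector3(left, halfHeight, 0), new Vector2(uStart, 0)),
new TexturedVertex(new Vector3(right, halfHeight, 0), new Vector2(uEnd, 0)),
new TexturedVertex(new Vector3(right, -halfHeight, 0), new Vector2(uEnd, 1)),
new TexturedVertex(new Vector3(left, -halfHeight, 0), new Vector2(uStart, 1)),
};
}
// Left quad: BuildQuad(-0.526f, 0.526f, 0.296f, 0f, 0.5f)
// Right quad: BuildQuad(0.527f, 1.579f, 0.296f, 0.5f, 1f)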
6.8 Load the images. We define a ChangeImage() method for this; it is named ChangeImage because we will reuse it later to switch to a different image. Its code is as follows:
/// <summary>
/// Switch the displayed image:
/// randomly show one of the four images test0 to test3.
/// </summary>
public void ChangeImage()
{
Random random = new Random();
int n = random.Next(4);
this.textureView = TextureLoader.FromBitmapFile(deviceResources, "Assets\\test" + n + ".jpg");
}
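ChangeImage() picks one of the four images at random. If you would rather step through them in order, a minimal variation (assuming a new int field, here called imageIndex) could look like this:
// Hypothetical sequential variant: cycles test0.jpg -> test1.jpg -> test2.jpg -> test3.jpg -> test0.jpg ...
private int imageIndex = 0;
public void ChangeImage()
{
this.textureView = TextureLoader.FromBitmapFile(deviceResources, "Assets\\test" + imageIndex + ".jpg");
imageIndex = (imageIndex + 1) % 4;
}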
6.9 Call ChangeImage() at the end of the CreateDeviceDependentResourcesAsync() method mentioned above, so that a random image is displayed as soon as the app starts.
public async void CreateDeviceDependentResourcesAsync()
{
// ... (existing code omitted) ...
// Load a random image.
ChangeImage();
// Once the cube is loaded, the object is ready to be rendered.
loadingComplete = true;
}
7. Implement the HLSL shaders
The main work in SpinningCubeRenderer is done, and the app will already run on a HoloLens device. However, because step 6 switched from vertex colors to a texture, you will not see the final image until the corresponding HLSL shader programs are updated as well.
7.1 Content\Shaders\PixelShader.hlsl
struct PixelShaderInput
{
min16float4 position : SV_POSITION;
min16float2 texcoord : TEXCOORD;
};
//texture
Texture2D textureMap;
SamplerState textureSampler
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
min16float4 main(PixelShaderInput input) : SV_TARGET
{
return (min16float4)textureMap.Sample(textureSampler, input.texcoord);
}
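One subtlety: outside the effects framework, the state settings written inside the SamplerState declaration above are ignored at runtime; the shader simply samples with whatever sampler is bound to slot s0, or with the Direct3D 11 default sampler if none is bound, which is sufficient for the 0-1 texture coordinates used here. If you prefer to control filtering and addressing explicitly, a sketch of creating and binding a sampler from the C# side (for example created in CreateDeviceDependentResourcesAsync() and bound in Render() next to SetShaderResource) would be:
// Illustrative only: create a linear/wrap sampler and bind it to slot 0 so the
// pixel shader does not rely on the default sampler state.
var samplerState = new SharpDX.Direct3D11.SamplerState(
deviceResources.D3DDevice,
new SharpDX.Direct3D11.SamplerStateDescription
{
Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear,
AddressU = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressV = SharpDX.Direct3D11.TextureAddressMode.Wrap,
AddressW = SharpDX.Direct3D11.TextureAddressMode.Wrap,
ComparisonFunction = SharpDX.Direct3D11.Comparison.Never,
MinimumLod = 0f,
MaximumLod = float.MaxValue,
});
// In Render(), right after context.PixelShader.SetShaderResource(0, textureView):
context.PixelShader.SetSampler(0, samplerState);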
7.2 Content\Shaders\VPRTVertexShader.hlsl
// A constant buffer that stores the model transform.
cbuffer ModelConstantBuffer : register(b0)
{
float4x4 model;
};
// A constant buffer that stores each set of view and projection matrices in column-major format.
cbuffer ViewProjectionConstantBuffer : register(b1)
{
float4x4 viewProjection[2];
};
// Per-vertex data used as input to the vertex shader.
struct VertexShaderInput
{
min16float3 pos : POSITION;
//min16float3 color : COLOR0;
min16float2 texcoord : TEXCOORD;
uint instId : SV_InstanceID;
};
// Per-vertex data passed to the geometry shader.
// Note that the render target array index is set here in the vertex shader.
struct VertexShaderOutput
{
min16float4 pos : SV_POSITION;
//min16float3 color : COLOR0;
min16float2 texcoord : TEXCOORD;
uint rtvId : SV_RenderTargetArrayIndex; // SV_InstanceID % 2
};
// Simple shader to do vertex processing on the GPU.
VertexShaderOutput main(VertexShaderInput input)
{
VertexShaderOutput output;
float4 pos = float4(input.pos, 1.0f);
// Note which view this vertex has been sent to. Used for matrix lookup.
// Taking the modulo of the instance ID allows geometry instancing to be used
// along with stereo instanced drawing; in that case, two copies of each
// instance would be drawn, one for left and one for right.
int idx = input.instId % 2;
// Transform the vertex position into world space.
pos = mul(pos, model);
// Correct for perspective and project the vertex position onto the screen.
pos = mul(pos, viewProjection[idx]);
output.pos = (min16float4)pos;
// Pass the texture coordinate through without modification.
output.texcoord = input.texcoord;
// Set the render target array index.
output.rtvId = idx;
return output;
}
7.3 Content\Shaders\VertexShader.hlsl
// A constant buffer that stores the model transform.
cbuffer ModelConstantBuffer : register(b0)
{
float4x4 model;
};
// A constant buffer that stores each set of view and projection matrices in column-major format.
cbuffer ViewProjectionConstantBuffer : register(b1)
{
float4x4 viewProjection[2];
};
// Per-vertex data used as input to the vertex shader.
struct VertexShaderInput
{
min16float3 pos : POSITION;
//min16float3 color : COLOR0;
min16float2 texcoord : TEXCOORD0;
uint instId : SV_InstanceID;
};
// Per-vertex data passed to the geometry shader.
// Note that the render target array index will be set by the geometry shader
// using the value of viewId.
struct VertexShaderOutput
{
min16float4 pos : SV_POSITION;
//min16float3 color : COLOR0;
min16float2 texcoord : TEXCOORD0;
uint viewId : TEXCOORD1; // SV_InstanceID % 2
};
// Simple shader to do vertex processing on the GPU.
VertexShaderOutput main(VertexShaderInput input)
{
VertexShaderOutput output;
float4 pos = float4(input.pos, 1.0f);
// Note which view this vertex has been sent to. Used for matrix lookup.
// Taking the modulo of the instance ID allows geometry instancing to be used
// along with stereo instanced drawing; in that case, two copies of each
// instance would be drawn, one for left and one for right.
int idx = input.instId % 2;
// Transform the vertex position into world space.
pos = mul(pos, model);
// Correct for perspective and project the vertex position onto the screen.
pos = mul(pos, viewProjection[idx]);
output.pos = (min16float4)pos;
// Pass the texture coordinate through without modification.
output.texcoord = input.texcoord;
// Set the instance ID. The pass-through geometry shader will set the
// render target array index to whatever value is set here.
output.viewId = idx;
return output;
}
At this point the basic functionality is in place. Select x86 and Remote Machine, then press F5 to build, deploy and run the app on the HoloLens.
You can now see the side-by-side 3D images rendered with a stereoscopic 3D effect in the HoloLens. However, the tap gesture still triggers a bug, and the only way to view a different image is to quit and kill the app, then relaunch VR Player so that another random image is picked. That is awkward; it would be much more convenient if a single tap switched the image, so let's finish the code.
8. Final step: fix the tap-gesture bug and turn the tap gesture into an image-switching operation
Modify the public HolographicFrame Update() method of the VR_PlayerMain class in VR_PlayerMain.cs:
Find this code:
#if DRAW_SAMPLE_CONTENT
// Check for new input state since the last frame.
SpatialInteractionSourceState pointerState = spatialInputHandler.CheckForInput();
if (null != pointerState)
{
// When a Pressed gesture is detected, the sample hologram will be repositioned
// two meters in front of the user.
spinningCubeRenderer.PositionHologram(
pointerState.TryGetPointerPose(currentCoordinateSystem)
);
}
#endif
Change it to:
#if DRAW_SAMPLE_CONTENT
// Check for new input state since the last frame.
SpatialInteractionSourceState pointerState = spatialInputHandler.CheckForInput();
if (null != pointerState)
{
// When a Pressed gesture is detected, switch to a different image
// instead of repositioning the sample hologram.
//spinningCubeRenderer.PositionHologram(
// pointerState.TryGetPointerPose(currentCoordinateSystem)
// );
spinningCubeRenderer.ChangeImage();
}
#endif
That completes the example. The full source code can be downloaded from the Microsoft HoloLens 中国社区 GitHub repository.
Please credit the source when reposting:
【祥哥带你玩HoloLens开发】基于Sharpdx(C#+DirectX)实现VR播放器(一)3D图片播放 Microsoft HoloLens 中国社区
http://mshololens.cn/discussion/119/xiang-ge-dai-ni-wan-HoloLens-kai-fa-ji-yu-Sharpdx-C-DirectX-shi-xian-3D-tu-pian-shi-xian-VR-he-zi-xiao-guo