Managed DirectX Tutorial 3 – Cameras (Part 1)


In this tutorial I’ll explain how you can set up a scene with a different camera for each viewport. This will be a very short tutorial, but in the following ones we’ll improve our camera system.
So, how do these cameras work in editors like 3D Studio Max? Just like in the movies, you have a scene (your 3D scene; in this case I used a cube) surrounded by cameras, and each of these cameras films your scene and presents the result in a different place (viewport).


Before we set up the cameras, we need something to show, so I decided to create a simple mesh, a cube (I used the sample that comes with the DirectX documentation). A mesh is just a collection of vertices, edges and triangles that defines the basic structure of a 3D model.

m_mesh = new Mesh(numberFaces, numberVerts, MeshFlags.Managed,
  CustomVertex.PositionColored.Format, m_device);

In the creation of the mesh we indicate:

  • the number of faces; since it’s a cube, it has 6 sides split into 2 triangles each, giving 12 faces/tris (numberFaces = 12).
  • the number of vertices; a cube has 8 vertices (numberVerts = 8).
  • some mesh flags; we indicate that the vertex and index buffers are managed by DirectX.
  • the format of the vertices; ours contains position and color information.
  • finally, we indicate the device.

Now, two important things here are the vertex and index buffers. A vertex buffer is nothing more than a collection (an array) of vertices, and an index buffer stores how these vertices are combined to draw triangles. This can save a lot of space when we render something, and the cube shows it clearly: if you had to send the vertices for every triangle, you’d repeat each vertex several times, since the vertices are shared by different triangles. With an index buffer we send each vertex only once, then simply indicate which three vertices build each triangle.
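To put rough numbers on that saving (a back-of-the-envelope sketch, not code from the tutorial: a PositionColored vertex is 16 bytes, 3 floats plus a packed color, and a short index is 2 bytes):

```csharp
// Hypothetical size comparison for our cube.
const int vertexSize = 16; // CustomVertex.PositionColored: 3 floats + 1 int color
const int indexSize = 2;   // a short index

// Without an index buffer: 12 triangles * 3 vertices each.
int withoutIndices = 12 * 3 * vertexSize;          // 576 bytes

// With an index buffer: 8 unique vertices + 36 indices.
int withIndices = 8 * vertexSize + 36 * indexSize; // 200 bytes
```

Even for a tiny cube the indexed version is well under half the size, and the gap grows with mesh complexity.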

short[] indices =
{
  0,1,2, // Front Face
  1,3,2, // Front Face
  4,5,6, // Back Face
  6,5,7, // Back Face
  0,5,4, // Top Face
  0,2,5, // Top Face
  1,6,7, // Bottom Face
  1,7,3, // Bottom Face
  0,6,1, // Left Face
  4,6,0, // Left Face
  2,3,7, // Right Face
  5,2,7  // Right Face
};

using (VertexBuffer vb = m_mesh.VertexBuffer)
{
  GraphicsStream data = vb.Lock(0, 0, LockFlags.None);

  data.Write(new CustomVertex.PositionColored(-5.0f, 5.0f, 5.0f, Color.Red.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, -5.0f, 5.0f, Color.Green.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, 5.0f, 5.0f, Color.Red.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, -5.0f, 5.0f, Color.Green.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, 5.0f, -5.0f, Color.Blue.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, 5.0f, -5.0f, Color.Blue.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, -5.0f, -5.0f, Color.Yellow.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, -5.0f, -5.0f, Color.Yellow.ToArgb()));

  vb.Unlock();
}


using (IndexBuffer ib = m_mesh.IndexBuffer)
{
  ib.SetData(indices, 0, LockFlags.None);
}

Here we set the mesh’s vertex and index buffers. Check the indices variable: it contains the indices of the vertices that build up each triangle of the cube. When we add a vertex, we indicate a color, so if we’re giving each vertex a different color, what will be the color of a triangle? DirectX smoothly interpolates between the vertex colors (Gouraud shading). I added different colors so we can easily see that the cameras are pointing at different sides of the cube.
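With the buffers filled, drawing the cube is a single call per frame. This is a minimal sketch, assuming the Clear/BeginScene/EndScene calls live in the render method from the previous tutorials:

```csharp
// Hypothetical render step for one viewport's frame.
m_device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0);
m_device.BeginScene();

m_mesh.DrawSubset(0); // our cube is a single subset

m_device.EndScene();
```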


To support a different camera in each viewport, some minor changes were made to our viewport. If you remember, in the last tutorial we saved the swap chain in the tag field of the viewport, but now we also have a camera to store for each viewport:

class ViewportInfo
{
  public SwapChain swapChain;
  public CameraInfo camInfo;

  public ViewportInfo()
  {
  }

  public ViewportInfo(SwapChain nswapChain, CameraInfo ncamInfo)
  {
    swapChain = nswapChain;
    camInfo = ncamInfo;
  }
}

In the previous tutorial we already had the swap chain; now we add a new attribute, the camera info. For now, CameraInfo only stores the simple data needed to define each viewport’s camera.

class CameraInfo
{
  public Vector3 position;
  public Vector3 lookAt;
  public Vector3 up;

  public bool ortho;
  public float scale;
  public float zNearPlane;
  public float zFarPlane;

  public CameraInfo(Vector3 nposition, Vector3 nlookAt, Vector3 nup, bool northo)
  {
    position = nposition;
    lookAt = nlookAt;
    up = nup;
    ortho = northo;

    zNearPlane = 1f;
    zFarPlane = 100f;
    scale = 0.2f;
  }
}
These structures will almost certainly change in future tutorials to accommodate new features.
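As an example of how these structures might be wired up, the four classic editor cameras could be created like this. The positions and distances here are my own illustrative assumptions, not the tutorial’s exact values:

```csharp
// Hypothetical setup of the four classic editor cameras, all aimed at the origin.
CameraInfo top = new CameraInfo(
    new Vector3(0f, 20f, 0f), new Vector3(0f, 0f, 0f),
    new Vector3(0f, 0f, 1f), true);   // looking straight down, so "up" is +Z
CameraInfo front = new CameraInfo(
    new Vector3(0f, 0f, -20f), new Vector3(0f, 0f, 0f),
    new Vector3(0f, 1f, 0f), true);
CameraInfo left = new CameraInfo(
    new Vector3(-20f, 0f, 0f), new Vector3(0f, 0f, 0f),
    new Vector3(0f, 1f, 0f), true);
CameraInfo perspective = new CameraInfo(
    new Vector3(15f, 15f, -15f), new Vector3(0f, 0f, 0f),
    new Vector3(0f, 1f, 0f), false);

// Each camera is then paired with its viewport's swap chain:
// viewport.Tag = new ViewportInfo(swapChain, top);
```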


How do we set up these cameras? Easy: just set the look-at and projection matrices.
What are these matrices? The look-at matrix is also known as the view matrix; it transforms our world points into camera space, that is, all objects are repositioned relative to our camera. The projection matrix is another transformation matrix, used to project three-dimensional points (from camera space) onto a two-dimensional plane (our computer screen). I’m not going to go into detail about these matrices; if you want to see how they work or how they’re built, check the DirectX documentation and search for Transformations.
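For intuition, the left-handed look-at matrix can be built by hand from the camera’s position, target and up vector. This sketch follows the construction documented for Matrix.LookAtLH; the helper name BuildLookAtLH is mine:

```csharp
// Hypothetical hand-rolled equivalent of Matrix.LookAtLH, to show what
// the view matrix contains (left-handed convention).
Matrix BuildLookAtLH(Vector3 eye, Vector3 at, Vector3 up)
{
  Vector3 zaxis = Vector3.Normalize(at - eye);                 // camera forward
  Vector3 xaxis = Vector3.Normalize(Vector3.Cross(up, zaxis)); // camera right
  Vector3 yaxis = Vector3.Cross(zaxis, xaxis);                 // camera up

  Matrix m = Matrix.Identity;
  m.M11 = xaxis.X; m.M12 = yaxis.X; m.M13 = zaxis.X;
  m.M21 = xaxis.Y; m.M22 = yaxis.Y; m.M23 = zaxis.Y;
  m.M31 = xaxis.Z; m.M32 = yaxis.Z; m.M33 = zaxis.Z;
  m.M41 = -Vector3.Dot(xaxis, eye); // translate the world so the camera
  m.M42 = -Vector3.Dot(yaxis, eye); // sits at the origin, looking down +Z
  m.M43 = -Vector3.Dot(zaxis, eye);
  return m;
}
```

The rotation part re-expresses points in the camera’s axes, and the last row moves the camera to the origin, which is exactly the “relocate everything around the camera” step described above.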

Let’s see how we set these matrices in DirectX:

if (camInfo.ortho)
{
  m_device.Transform.Projection = Matrix.OrthoLH(viewport.Width * camInfo.scale,
    viewport.Height * camInfo.scale, camInfo.zNearPlane, camInfo.zFarPlane);
}
else
{
  m_device.Transform.Projection = Matrix.PerspectiveFovLH((float)Math.PI / 4,
    (float)viewport.Width / (float)viewport.Height, camInfo.zNearPlane, camInfo.zFarPlane);
}

m_device.Transform.View = Matrix.LookAtLH(camInfo.position, camInfo.lookAt, camInfo.up);

We use two types of cameras: orthogonal and perspective.


  • Orthogonal cameras map the objects into a 2D view, with no perspective foreshortening.
  • We create an orthogonal matrix using Matrix.OrthoLH (LH stands for left-handed; DirectX uses a left-handed coordinate system) and set it as the projection matrix. Why the scale multiplication by width and height? Because the width and height of the viewport are very large compared to the size of the object, so we zoom in a bit.
  • Create a look-at matrix with Matrix.LookAtLH and set it as the view matrix. We indicate our camera’s position, the point we’re looking at and the camera’s up vector.


  • Perspective cameras map the objects into a 3D view, where distant objects appear smaller.
  • Create a perspective matrix with Matrix.PerspectiveFovLH and set it as the projection matrix. We define a FOV of 45º (π/4 radians, since the function takes radians), the viewport aspect ratio (width/height) and the near and far planes that define the boundaries of what we can see (everything between these planes is visible).
  • Set the view matrix like in the orthogonal cameras.
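A quick sanity check on those parameters (the 400×300 viewport size here is an assumed example): PerspectiveFovLH takes the FOV in radians, and the ortho scale turns pixels into world units:

```csharp
// Hypothetical numbers for a 400x300 viewport with the tutorial's scale of 0.2f.
float fov = (float)Math.PI / 4;                 // 45 degrees (180 degrees = PI radians)
float fovDegrees = fov * 180f / (float)Math.PI; // 45

float orthoWidth = 400 * 0.2f;  // 80 world units visible horizontally
float orthoHeight = 300 * 0.2f; // 60 world units visible vertically
```

Our cube spans 10 units on each side, so with the 0.2f scale it fills a comfortable fraction of the orthogonal views instead of being a few pixels wide.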


The viewport has also been improved: I added a title so you know which camera each viewport is showing. Another thing missing from the viewport was the ability to see it rendered in the designer when used in our four-way splitter control; this has also been fixed, along with some other minor improvements here and there.

And we’ve reached the end of another tutorial. I’ll add the C++/CLI code shortly (only C# for now), along with any other updates I make to the code. Next time we’ll continue with cameras, to help us move through the scene. Feel free to contact me about anything wrong or good in the tutorial; you have my e-mail at the end of the page. See ya next time! ;)


Download Files:

C# version
You’re free to use this code as long as you don’t blame me for any changes/bugs that were introduced and don’t demand anything from me. Only download it if you accept these conditions.

References:

  • Microsoft DirectX Development Center
  • DirectX 9.0 for Managed Code
  • MSDN Forums

