A Walk in the Dark

Bruno Vidal and Paulo Silva started working on “A Walk in the Dark” while they were at ZPX with me. I saw the project evolve from the very beginning and was always eager to test the new builds and levels they were working on. In the meantime they left ZPX and continued their work on AWITD.


Since then, the project has improved massively in every direction: gameplay, art, music, etc. Recently I was asked to help with the project and I gladly accepted to join the team. Unfortunately I don’t have much time available, but I hope to help wherever I can.

Announcement trailer


Flying Turtle Software is the name of the company behind A Walk in the Dark. The game is supposed to come out in 2012 for the PC.

SilverMenu library for WP7

Inspired by the menus of the WP7 game Puzzle Jumble, I decided to write a library that would help me and others build menus for WP7 games easily. Puzzle Jumble’s GUI code is a bit of a mess because it wasn’t designed for this kind of interface from the beginning, so I decided to write SilverMenu from scratch.

Puzzle Jumble’s GUI


SilverMenu aims to bring some of the Silverlight user interface that is now present on WP7 to XNA. To keep things simple, you can build animated menus with just a few lines of XML. It’s only a week into development, but you can see some progress already:

First SilverMenu sample.


With time I hope it grows into something more complete and useful. I’ll keep updating my blog about this project, but if you want to follow it closely, keep up to date and download the latest build, you can do so at http://silvermenu.codeplex.com. I hope you find it useful and I appreciate all the feedback I can get. Thanks!

Don’t be nervous

While reading an article about comedy in games at GDC, I found something rather interesting and wanted to share it with you:

Tim: When we started on Monkey Island, I don’t know about Dave [Grossman], but I thought we were writing the temporary dialogue for that game. Cause we were really new, and there was a big company there with Lucasarts and George Lucas and everyone. So I was sure they’d have professionals come and write the dialogue. So we were just kind of goofing around and writing really silly dialogue. And it took the pressure off us cause we didn’t sit there and wonder, “Is this good enough?” We were just making each other laugh in the office. And then, as it went on, Ron said, “No, no, no. This is the dialogue for the game.” And I was like, “Oh, god.”

And we never wrote anything funny after that.

Managed DirectX Tutorial 3 – Cameras (Part 1)


In this tutorial I’ll explain how you can set up a scene with a different camera for each viewport. This will be a very short tutorial, but in the following ones we’ll improve our camera system.
So, how do these cameras work in editors like 3D Studio Max? Just like in the movies, you have a scene (your 3D scene; in this case I used a cube) surrounded by cameras. Each of these cameras films your scene and presents the result in a different place (viewport).


Before we set up the cameras, we need something to show, so I decided to create a simple mesh, a cube (I used the sample that comes with the DirectX documentation). A mesh is just a bunch of lines, triangles and polygons that define the basic structure of a 3D model.

// the cube has 12 faces (triangles) and 8 vertices
m_mesh = new Mesh(12, 8, MeshFlags.Managed,
  CustomVertex.PositionColored.Format, m_device);

In the creation of the mesh we indicate:

  • the number of faces: the cube’s 6 sides are split into 2 triangles each, so 12 faces.
  • the number of vertices: a cube has 8 vertices.
  • some mesh flags: we indicate that the vertex and index buffers are managed by DirectX.
  • the format of the vertices: ours contains position and color information.
  • finally, the device.

Two important things here are the vertex and index buffers. A vertex buffer is nothing more than a collection (an array) of vertices, and an index buffer stores how these vertices are combined to draw triangles. This can save a lot of space when rendering something: imagine you had to send the vertices for every triangle in the cube; you’d have to repeat each vertex several times, since the vertices are shared by different triangles. This way we send all the vertices once and then just indicate how to build each triangle from them.

short[] indices =
{
  0,1,2, // Front Face
  1,3,2, // Front Face
  4,5,6, // Back Face
  6,5,7, // Back Face
  0,5,4, // Top Face
  0,2,5, // Top Face
  1,6,7, // Bottom Face
  1,7,3, // Bottom Face
  0,6,1, // Left Face
  4,6,0, // Left Face
  2,3,7, // Right Face
  5,2,7  // Right Face
};

using (VertexBuffer vb = m_mesh.VertexBuffer)
{
  GraphicsStream data = vb.Lock(0, 0, LockFlags.None);

  data.Write(new CustomVertex.PositionColored(-5.0f, 5.0f, 5.0f, Color.Red.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, -5.0f, 5.0f, Color.Green.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, 5.0f, 5.0f, Color.Red.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, -5.0f, 5.0f, Color.Green.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, 5.0f, -5.0f, Color.Blue.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, 5.0f, -5.0f, Color.Blue.ToArgb()));
  data.Write(new CustomVertex.PositionColored(-5.0f, -5.0f, -5.0f, Color.Yellow.ToArgb()));
  data.Write(new CustomVertex.PositionColored(5.0f, -5.0f, -5.0f, Color.Yellow.ToArgb()));

  vb.Unlock();
}

using (IndexBuffer ib = m_mesh.IndexBuffer)
{
  ib.SetData(indices, 0, LockFlags.None);
}

Here we set the mesh’s vertex and index buffers. Check the indices variable: it contains the indices of the vertices that build up each triangle of the cube. When we add a vertex, we indicate a color, so if we’re indicating different colors for each vertex, what will be the color of a triangle? DirectX does a smooth transition between the vertex colors. I added different colors so we can easily see that the cameras are pointing at different sides of the cube.


To support a different camera in each viewport, some minor things were modified in our viewport. If you remember, in the last tutorial we saved the swap chain in the tag field of the viewport; now we also have a camera to save in each viewport:

class ViewportInfo
{
  public SwapChain swapChain;
  public CameraInfo camInfo;

  public ViewportInfo()
  {
  }

  public ViewportInfo(SwapChain nswapChain, CameraInfo ncamInfo)
  {
    swapChain = nswapChain;
    camInfo = ncamInfo;
  }
}

In the previous tutorial we already had the swap chain; now we add a new attribute, the camera info. This CameraInfo, for now, will only hold simple data to define our viewports’ cameras.

class CameraInfo
{
  public Vector3 position;
  public Vector3 lookAt;
  public Vector3 up;

  public bool ortho;
  public float scale;
  public float zNearPlane;
  public float zFarPlane;

  public CameraInfo(Vector3 nposition, Vector3 nlookAt, Vector3 nup, bool northo)
  {
    position = nposition;
    lookAt = nlookAt;
    up = nup;
    ortho = northo;

    zNearPlane = 1f;
    zFarPlane = 100f;
    scale = 0.2f;
  }
}

These structures may (almost certainly will) change in the future to accommodate new features.


How do we set up these cameras? Easy, just set up the look-at and projection matrices.
What are these matrices? The look-at matrix, also known as the view matrix, transforms our world points into camera space, that is, all objects are relocated around our camera point. The projection matrix is another transformation matrix, used to project three-dimensional points (from camera space) onto a two-dimensional plane (our computer screen). I’m not going to go into detail on these matrices; if you want to see how they work or how they’re built, check the DirectX documentation and search for Transformations.

Let’s see how we set these matrices in DirectX:

if (camInfo.ortho)
{
  m_device.Transform.Projection = Matrix.OrthoLH(viewport.Width * camInfo.scale,
    viewport.Height * camInfo.scale, camInfo.zNearPlane, camInfo.zFarPlane);
}
else
{
  m_device.Transform.Projection = Matrix.PerspectiveFovLH((float)Math.PI / 4,
    (float)viewport.Width / (float)viewport.Height, camInfo.zNearPlane, camInfo.zFarPlane);
}

m_device.Transform.View = Matrix.LookAtLH(camInfo.position, camInfo.lookAt, camInfo.up);

We use two types of cameras: orthogonal and perspective.

Orthogonal cameras:

  • Orthogonal cameras map the objects into a 2D view.
  • We create an orthogonal matrix using Matrix.OrthoLH (LH stands for left-handed; DirectX uses a left-handed system) and set it as the projection matrix. Why the scale multiplication by width and height? Because the width and height of the viewport are very large compared to the size of the object, so we zoom in a bit.
  • Create a look-at matrix with Matrix.LookAtLH and set it as the view matrix. We indicate our camera’s position, the point we’re looking at and the camera’s up vector.

Perspective cameras:

  • These cameras map the objects into a 3D view.
  • Create a perspective matrix with Matrix.PerspectiveFovLH and set it as the projection matrix. We define a FOV of 45º (Math.PI / 4 radians), the viewport aspect ratio width/height, and the near and far planes that define the boundaries of what we can see (everything between these planes is visible).
  • Set the view matrix like in orthogonal cameras.


The viewport control has also been improved: I added a title so we know which camera is in each viewport, and fixed a missing feature, the ability to see the control rendered in the designer when used in our four-way splitter control. Some other minor improvements were made here and there.

And we’ve reached the end of another tutorial. I’ll add the C++/CLI code shortly (only C# for now), along with other updates I might make to the code. Next time we’ll continue with cameras, to help us move through the scenario. Feel free to contact me about anything wrong or good in the tutorial; you have my e-mail at the end of the page. See ya till next time! ;)


Download Files:

C# version
You’re free to use this code as long as you don’t blame me for any changes/bugs that were introduced and don’t demand anything from me. Only download it if you accept these conditions.


http://msdn.microsoft.com/directx/ – Microsoft DirectX Development Center
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_m/directx/directx9m.asp – DirectX 9.0 for Managed Code
http://forums.microsoft.com/ – MSDN Forums


Managed DirectX Tutorial 2 – Introduction to swap chains: Rendering to two panels at the same time


In this tutorial I’ll explain how you can use swap chains to create different views of the same scene. This tutorial should be fairly simple for you to understand and follow.

A swap chain is basically the mechanism DirectX uses to present a scene to the user. It draws the scene to an off-screen buffer (the back buffer) and, when it’s ready, presents the image to the user by swapping the back buffer (which contains the current frame) with the front buffer (which contains the last frame). Many swap chains can be created, depending on the memory you have available. These are useful when you want to render a scene in more than one place. Think of an editor or a 3D program like 3D Studio Max: it has different views (using different cameras and projections) of the same scene. By default, DirectX creates one swap chain when you create the device.
Creating a new swap chain is very easy. Let’s see what’s needed:

SwapChain sc = new SwapChain(device, presentParams);

All we need is the device and a present parameters structure. So, if we only need these two things to create a new swap chain, how do we indicate where it will be rendered? When creating a new device, we indicate the handle of the control where we’re going to render our scene, but in the case of swap chains the device is already created, so we use a field in the present parameters structure to indicate the handle:

presentParams.Window = panel.Handle;


Now let’s draw something in 4 different viewports. I built a new control for this purpose, the four-way splitter. To continue, we need to take some special measures to ensure that our viewports are rendered/updated correctly. What happens if we resize one or more viewports? We need to create new swap chains, since the viewports’ areas have changed. I created another control to help us with this task, the viewport control. This is a very simple control (for now; we might add some stuff to it later) with two function callbacks: one for rendering and one for the creation of the swap chain. These two callbacks are called whenever the control is painted or resized, respectively.

Viewport code

public delegate void RenderFuncDelegate(Viewport viewport);
private RenderFuncDelegate m_renderFunc;

public delegate void SwapChainFuncDelegate(Viewport viewport);
private SwapChainFuncDelegate m_swapChainFunc;

public RenderFuncDelegate RenderFunc
{
	set { m_renderFunc = value; }
}

public SwapChainFuncDelegate SwapChainFunc
{
	set { m_swapChainFunc = value; }
}

protected override void OnPaint(PaintEventArgs e)
{
	// render our scene
	if (m_renderFunc != null)
		m_renderFunc(this);
}

protected override void OnResize(EventArgs e)
{
	base.OnResize(e);

	// create a new swap chain for the new size
	if (m_swapChainFunc != null)
		m_swapChainFunc(this);
}

Our main window is composed of a four-way splitter with a viewport in each corner. We do some initialization of the device (see Managed DirectX Tutorial 1) and create our swap chain and render functions so they can process requests from the viewports:

public void InitializeViewports()
{
  // save our viewports in an array
  m_viewports[0] = viewportTopLeft;
  m_viewports[1] = viewportTopRight;
  m_viewports[2] = viewportBottomLeft;
  m_viewports[3] = viewportBottomRight;

  // for each viewport register the delegate functions
  foreach (Viewport viewport in m_viewports)
  {
    viewport.SwapChainFunc = new WindowsControls.Viewport.SwapChainFuncDelegate(CreateSwapChain);
    viewport.RenderFunc = new WindowsControls.Viewport.RenderFuncDelegate(Render);
  }
}
First we initialize an array with our viewports, and for each viewport we associate the delegate functions that will handle the creation of the swap chain and the rendering of the scene. Next, our two delegate functions:

private void CreateSwapChain(Viewport viewport)
{
  // get the swap chain of the viewport
  SwapChain swapChain = ((SwapChain)viewport.Tag);

  // if we already have a swap chain created, get rid of it
  if (swapChain != null)
    swapChain.Dispose();

  PresentParameters presentParams = m_presentParams;
  presentParams.BackBufferWidth = viewport.Width;
  presentParams.BackBufferHeight = viewport.Height;

  // check if our buffer is valid
  if (presentParams.BackBufferWidth == 0 || presentParams.BackBufferHeight == 0)
    return;

  presentParams.DeviceWindowHandle = viewport.Handle;

  // save the swap chain in the viewport
  viewport.Tag = new SwapChain(m_device, presentParams);
}

private void Render(Viewport viewport)
{
  // get the swap chain of the viewport
  SwapChain swapChain = ((SwapChain)viewport.Tag);

  // safety check
  if (swapChain == null)
    return;

  // retrieve its back buffer surface where we do the render
  Surface pBackBuffer = swapChain.GetBackBuffer(0);

  // set it as our current back buffer
  m_device.SetRenderTarget(0, pBackBuffer);

  // clear our back buffer
  m_device.Clear(ClearFlags.Target, Color.DarkSlateBlue, 1.0f, 0);

  // we don't need the surface anymore, get rid of it
  pBackBuffer.Dispose();

  // swap the screen buffer and back buffer to present our new data
  swapChain.Present();
}

And this is the end of another tutorial. I’ll add the C++/CLI code shortly (only C# for now), along with other updates I might make to the code. Next time I’ll show you… well, I don’t know yet, so stay tuned ;) Your feedback is appreciated; you can leave comments at the end of the page. See ya till next time! ;)

Download Files:

C# version
You’re free to use this code as long as you don’t blame me for any changes/bugs that were introduced and don’t demand anything from me. Only download it if you accept these conditions.


http://msdn.microsoft.com/directx/ – Microsoft DirectX Development Center
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_m/directx/directx9m.asp – DirectX 9.0 for Managed Code
http://forums.microsoft.com/ – MSDN Forums

Managed DirectX Tutorial 1 – Creation of a Direct 3D device and GDI+ rendering


Here you’ll learn how to set up a simple DirectX device in C# (and C++/CLI). If you have any experience with C++ or DirectX this will be just a warm-up for you; if you don’t, it’s a simple tutorial you can read in 15 minutes.
If setting up a device was easy in C++, in managed code it’s even easier. Let’s start by creating a new Windows form. Next comes the device setup and creation.
For this part you mainly need to know two things: the present parameters and the device. Let’s start by explaining the latter: the device is the main “interface” with the graphics system in your computer. It is responsible for rendering triangles, lines, etc., creating resources, shaders and other useful stuff. The present parameters hold the data that describes the setup of the device, such as whether the application is running in windowed mode, the device’s window and the handle to that window, etc.
Let’s start by setting up the PresentParameters structure:

PresentParameters presentParams = new PresentParameters();
presentParams.IsWindowed = true;
presentParams.SwapEffect = SwapEffect.Discard;
  • The first parameter, IsWindowed, indicates that we are running the application in windowed mode, this means that we’re rendering to a window (or part of it) instead of rendering to the screen (full screen mode).
  • The second parameter, SwapEffect, indicates that we do not want DirectX to do any special operation when a swap is made between the back buffer and the actual screen buffer.

Note: When rendering some stuff using DirectX you are not actually drawing it to the screen. DirectX (as well as OpenGL) has the capability of drawing to a back buffer. This buffer will hold the final image and then it will be copied to the screen buffer.


Since you have some presentation parameters set, let’s have a look at the creation of the device:

device = new Device(0, DeviceType.Hardware, this.Handle, CreateFlags.HardwareVertexProcessing, presentParams);

Here we indicate the physical device (0 is the default), the type of the device, the handle of the window we want to render to, some flags for the creation of the device, and our present parameters.

Now, there’s one interesting thing here: what handle do we provide to the device?
In the .NET Framework, all components have a Handle property that returns a handle to that component. If you want the device to render into a panel, you give the panel’s handle to the device; if you want a form to host your scene, you give the form’s handle. This is very useful if we want to use part of a window as a DirectX scene and another part as a regular Windows application with buttons, textboxes and labels.


The device is created, so let’s proceed to some action and clear the contents of our scene:

device.Clear(ClearFlags.Target, Color.DarkSlateBlue, 1.0f, 0);

In this command we’re clearing the render target (that’s what we indicate with the first argument; think of it as our off-screen buffer for now) with the color provided in the second argument; you can ignore the other two arguments for now.


The final step is where we indicate that we want to see what we’ve been doing, that is, we want to present the contents of the back buffer:

device.Present();


Although some things may remain unexplained here, you should get the basic idea of how to set up a new device.


We saw how to create a device and clear it, now we’ll see how to put this all together in a managed application. Step by step:

  • Create a new project.
  • Create a new Form (if you don’t have one already), I named it DxSampleForm:
public partial class DxSampleForm : Form
{
	public DxSampleForm()
	{
		InitializeComponent();
	}
}
  • Add a new private member of type Device:
private Device device;
  • Create a method called InitializeDevice where you create the present parameters structure and the device. Call this method after InitializeComponent in the constructor:
public DxSampleForm()
{
	InitializeComponent();
	InitializeDevice();
}

public void InitializeDevice()
{
	PresentParameters presentParams = new PresentParameters();
	presentParams.IsWindowed = true;
	presentParams.SwapEffect = SwapEffect.Discard;

	device = new Device(0, DeviceType.Hardware, this.Handle, CreateFlags.HardwareVertexProcessing,
		presentParams);
}
  • Override OnPaintBackground and do the DirectX rendering:
protected override void OnPaintBackground(System.Windows.Forms.PaintEventArgs e)
{
	device.Clear(ClearFlags.Target, Color.DarkSlateBlue, 1.0f, 0);
	device.Present();
}

Why OnPaintBackground and not OnPaint, you may ask?
Simple: overriding OnPaintBackground can spare us some problems in the future if we ever need to use GDI+, or if we need to place components in the middle of the area where we’re rendering our scene.

Note: Don’t forget that when you use Managed DirectX you’ll have to reference one or more DLLs. For this tutorial you only need Microsoft.DirectX.dll.

As you can see, it’s very simple to create a DirectX sample using the .NET Framework. It takes only 12 lines of your own code to have a rendered window.


Q: How can I render to a panel?
A: Use the panel’s handle as the handle when creating the device. Example:

device = new Device(0, DeviceType.Hardware, panel.Handle, CreateFlags.HardwareVertexProcessing, presentParams);

Q: I want to draw some buttons and stuff over my DirectX render scene, is it possible?
A: Sure, just be careful to call the DirectX render methods in the OnPaintBackground instead of the OnPaint. As for the components, just drag them in the designer or add them by hand.
Q: How can I use GDI+ with DirectX?
A: First make sure you specify in your present parameters that you want to be able to lock the back buffer:

presentParams.PresentFlag = PresentFlag.LockableBackBuffer;

After that you need to access the back buffer. To do so, you use the GetBackBuffer method where you indicate the swap chain (0 by default) and the index of the back buffer. From this surface you can create a Graphics object that you can use to do your GDI+ drawings:

Microsoft.DirectX.Direct3D.Surface backbuffer;
backbuffer = device.GetBackBuffer(0, 0);
Graphics graphics = backbuffer.GetGraphics();
graphics.DrawRectangle(Pens.Beige, 10, 20, 50, 60);

And we reached the end of this tutorial. Next time I’ll show you how to render to two separate areas in a form. This is usually used in editors. Let me know if you find/have any bugs/questions/comments/suggestions. See ya till next time! ;)


Download Files:

C# and C++/CLI versions
You’re free to use this code as long as you don’t blame me for any changes/bugs that were introduced and don’t demand anything from me. Only download it if you accept these conditions.


http://msdn.microsoft.com/directx/ – Microsoft DirectX Development Center
http://msdn.microsoft.com/library/default.asp?url=/library/en-us/directx9_m/directx/directx9m.asp – DirectX 9.0 for Managed Code
http://pluralsight.com/wiki/default.aspx/Craig.DirectX/Direct3DTutorialIndex.html – Direct3DTutorialIndex


GAPPS – Game Art Pre-Processing Studio

# Main feature:

Fast appearance preserving simplification of high polygon models as described in the following papers:

  • “Fitting Smooth Surfaces to Dense Polygon Meshes” by Venkat Krishnamurthy and Marc Levoy.
  • “Appearance-Preserving Simplification” by Jonathan Cohen, Marc Olano, and Dinesh Manocha.

# Current features:

  • Fast generation of:
    • Normal Maps in World Space.
    • Displacement Maps.
    • Ambient Occlusion Maps.
  • DLL backend for integration on many platforms in executable or plugin form.
  • Easy and intuitive interface.
  • Smart heuristic for intersection selection.

# Planned features:

  • Dilation filter.
  • Anti-Aliasing.
  • Fixed and floating point output formats.
  • Object and Tangent space normal map output.
  • Compressed normal map output (3Dc, DXT).
  • New optimizations to achieve near realtime normal and displacement map generation.
  • Integrated plugins for 3D Studio Max and Maya.

# Comparison with free tools:

  • nVidia Melody: probably the most complete free tool available for appearance-preserving generation, although the GUI is far from practical. It was used as a quality and speed reference during the development of GAPPS.
  • ATI NormalMapper: lacks speed and quality.
  • OpenRB: slow even in hardware. Poor console-based GUI.

# Conclusion:

If GAPPS were released with all of the planned features as free software, it would be the fastest and most complete free APS tool available.



Particle System

This is my first tutorial about game programming. Maybe it’s not the right place for beginners to start, but I think you can understand this tutorial quite easily. There’s nothing very hard to understand, but you’ll need some knowledge of math, especially vectors.

The tutorial is still a work in progress, although I’ll present its core here. As soon as I complete the tutorial, I’ll post it here.


The best way to explain what a particle system is, is by giving an example: imagine that it’s raining (well, if it’s really raining outside, just go to your window :)). Can you see all the drops falling? Now imagine that you’re playing a game where you fire a rocket: in its tail you see smoke, and when the rocket hits a player, there’ll be body parts all over the place. All these objects of the same kind (drops, smoke, body parts) are particles, and some sort of program controls these particles, creating, moving and destroying them. That program is the particle system.

Most games implement some kind of particle system, some more advanced, others simpler. When creating a particle system, there are some important things to consider:
– is this particle system designed only for this game/program, or will it be reused in the future?
– is speed an issue?
– is memory an issue?

These questions are extremely important, and before you continue you should have answers to all of them. A particle system should be very flexible if you plan to use it in different games/programs, so the data structures you choose for your system must be flexible. But the main concerns are speed and memory: the more particles you have, the slower the system will run and the more memory it will use.

Well, after this introduction, let’s get into business.

The Particles

As the name indicates, a particle system has one or more particles. Each particle has a set of attributes that can change over time. The next table lists the attributes I’ll use on the particles of my system. You may add other attributes that you think fit.

Particle attributes:
– Previous position
– Current position
– Direction
– Speed
– Color
– Delta color
– Alpha
– Delta alpha
– Size
– Delta size
– Age
– Lifetime

I think the name of each attribute is self-explanatory, but I’ll talk about the delta ones, which you might not understand right away. These delta attributes (color, alpha and size) are the amounts that will be added to the corresponding non-delta attributes over time.

Here is the code corresponding to what was described above:

struct Particle {
  Vector3D m_prevPosition; // Previous position of the particle.
  Vector3D m_position;     // Position of the particle.
  Vector3D m_direction;    // Direction of the particle.
  float    m_speed;        // Speed of the particle.

  RGBColor m_color;       // Color of the particle.
  RGBColor m_colorDelta;  // Color to add over time.
  float m_alpha;          // Transparency of the particle.
  float m_alphaDelta;     // Transparency to add over time.

  bool active;      // Indicates if the particle is active.

  float m_age;      // Age of the particle.
  float m_lifetime; // Life time of the particle. Age at which the
                    // particle dies.

  float m_size;       // Size of the particle.
  float m_sizeDelta;  // Size to add over time.
};

You could create a class for your particles that would update their own values, but I thought that wouldn’t be necessary in this case. Therefore, the particle system will take care of all the particle updates.

Now let’s see the particle system.

The Particle System

The particle system must have the means to configure and update the particles in the system. In this system I created some functions to set the attributes of the class. I think the code is quite well commented and it will be easy for you to read and understand, so I’ll just explain some of the things I put there and why I did them this way.

The first thing is the list of particles in the system. There are usually two ways to store the particles: arrays and lists. If you store the data for each particle in an array, it will have a fixed size (the maximum number of particles) and sometimes you’ll reserve more memory than necessary (e.g. when you only need 100 particles but the array is declared for more than that). On the other hand, using a list instead of an array, processing the data becomes a bit slower, but you can control the amount of memory used by creating and destroying particles. The choice is up to you! Later I’ll do a simple test with real values to compare the differences.

typedef std::list<Particle*> Particles;  // Declare a list of particle pointers.
Particles m_particles;  // List of particles in the system.

I also created a second list that holds references to dead particles. Why a list for dead particles? For optimization purposes! In a particle system, particles are born and die very fast (in most cases), so we would constantly be allocating and deallocating memory for them. With this list we reuse dead particles and skip the allocation: we only set the attributes of the particle and remove its reference from the dead-particles list. Later I’ll present the differences between using this additional list and not using it, so you can compare.

Particles m_deletedParticles; // List of deleted particles of the system. (For speed increase purposes)

Two important things to mention are the emitter function and the physics function. The emitter function is where you program the emitter of your system. Think of the emitter as the starting point of the particles: imagine a hose spraying water; the end of the hose is the emitter that emits the drops of water (particles). The emitter sends the particles off with a direction and speed, and it’s convenient to add some randomness to make it look more real. All the physics in the system are computed in the physics function. Its purpose is to simulate whatever physics we want in the system: gravity, wind, it’s up to you. By modifying these two functions you can create other effects with your particle system.

typedef void (*EmitterFunction)(Vector3D *emitterPos, Vector3D *emitterDir,
 Vector3D *particlePos, Vector3D *particleDir);

typedef void (*PhysicsFunction)(float time, Vector3D *pos, Vector3D *direction, float *speed);

Update of the system

Now let me explain the update method (I won’t explain the particle-creation method, since it just sets the attributes of each particle it creates).

The position of each particle is advanced by its direction times its speed times the time elapsed since the last update. If more forces act on the particle (gravity, wind, etc.), these will also modify the particle’s position. Each particle’s age is increased by the elapsed time, and its other attributes, size, color and alpha (transparency), are updated as well.

// If there is any physics acting on the particles
if (m_physics)
  m_physics(time, &(part->m_position), &(part->m_direction), &(part->m_speed));

part->m_position.m_x += (part->m_direction.m_x) * (part->m_speed) * time;
part->m_position.m_y += (part->m_direction.m_y) * (part->m_speed) * time;
part->m_position.m_z += (part->m_direction.m_z) * (part->m_speed) * time;

// Let's update the particle's age.
part->m_age += time;

// Let's update the particle's color.
part->m_color.r += part->m_colorDelta.r * time;
part->m_color.b += part->m_colorDelta.b * time;
part->m_color.g += part->m_colorDelta.g * time;

// Let's update the particle's transparency.
part->m_alpha += part->m_alphaDelta * time;

// Let's update the particle's size.
part->m_size += part->m_sizeDelta * time;

When the age of a particle reaches its lifetime, the particle dies. When this happens, instead of deleting the particle, we add it to the dead particles list (remember, the speed optimization) and remove its reference from the active particles list.

if (part->m_age >= part->m_lifetime) {
  part->active = false;                // We mark the particle as dead.
  i = m_particles.erase(i);            // We remove it from the active particles list.
  m_deletedParticles.push_back(part);  // Add it to the list of dead particles.
  m_numParticles--;                    // Update the number of particles in the system.
}

The system calculates how many particles it has to create by multiplying the number of particles to create per second by the elapsed time and adding m_particlesRest. Sometimes we get a value like 1.5 particles, and of course we can’t create half a particle, so we save the fractional part in the m_particlesRest variable and use it in the next update. If there are any dead particles, we reuse them, avoiding memory allocation: their attributes (color, lifetime, etc.) are reinitialized and their initial position is set by the emitter function.

// We are going to see how many particles we have to create.
float nPartCreate = m_particlesPerSec * time + m_particlesRest;
// We create only an integer number of particles.
unsigned int numParticlesToCreate = (unsigned int)nPartCreate;
// The fractional part is saved for the next update.
m_particlesRest = nPartCreate - numParticlesToCreate;

// If we have to create particles
if(numParticlesToCreate > 0)
  createParticles(numParticlesToCreate);  // we create the particles needed.
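The recycling idea inside particle creation could be sketched like this. The names `Particle`, `obtainParticle`, and the two-field struct are hypothetical placeholders for illustration; the tutorial’s real `createParticles` also initializes color, size, lifetime, and calls the emitter function:

```cpp
#include <list>

// Minimal stand-in for the tutorial's particle; hypothetical, for illustration only.
struct Particle { bool active; float m_age; };

typedef std::list<Particle*> Particles;

// Sketch of the recycling idea: reuse a dead particle when one is available,
// and only allocate when the dead list is empty. In steady state, once enough
// particles exist, updates do no heap allocation at all.
Particle *obtainParticle(Particles &deadParticles) {
    Particle *part;
    if (!deadParticles.empty()) {
        part = deadParticles.front();   // Recycle a dead particle...
        deadParticles.pop_front();      // ...and remove it from the dead list.
    } else {
        part = new Particle();          // Only allocate when no dead particle exists.
    }
    part->active = true;                // Reset the attributes for its new life.
    part->m_age = 0.0f;
    return part;
}
```

The caller would then set the remaining attributes, run the emitter function on the particle, and push it onto the active list.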

There is no collision detection in this particle system. It would complicate things a bit and I wanted to present something simple. I may add it in the future.

Another thing that should be present in the particle system is the use of billboards for each particle. A billboard is a polygon (normally a quad) that is always parallel to the screen. We’ll see this in a later tutorial.

I programmed two simple effects with this particle system, some snow and a simple particle jet. With time I’ll present more effects. By the way, send me some effects of your own ;)

These two effects are really simple to create:
– To create snow, you just give some random values to the emitter’s position and throw the particles downwards along the y axis (-y). In the physics function we put some gravity acting on the particles.
– The emitter function of the particle jet throws the particles up in the air, adding some randomness to create a more “fantastic” effect. The position of the emitter also varies a little. The physics here is the same as in the snow effect: just add some gravity and it will do the trick ;)
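The snow recipe above could be sketched as an emitter function like the one below. This is my own illustrative version, not the tutorial’s code; it assumes the same `Vector3D` layout as before, and the spread value of 10 units is an arbitrary choice:

```cpp
#include <cstdlib>

// Assumed layout of Vector3D, as in the update code.
struct Vector3D { float m_x, m_y, m_z; };

// Returns a random float in [-1, 1].
static float randomSpread() {
    return 2.0f * (float)rand() / (float)RAND_MAX - 1.0f;
}

// Snow emitter sketch: particles appear at a random spot around the emitter
// and fall along -y, drifting slightly on x and z.
void snowEmitter(Vector3D *emitterPos, Vector3D *emitterDir,
                 Vector3D *particlePos, Vector3D *particleDir) {
    (void)emitterDir;  // snow always falls down, so the emitter direction is unused

    particlePos->m_x = emitterPos->m_x + 10.0f * randomSpread();
    particlePos->m_y = emitterPos->m_y;
    particlePos->m_z = emitterPos->m_z + 10.0f * randomSpread();

    particleDir->m_x = 0.1f * randomSpread();  // slight horizontal drift
    particleDir->m_y = -1.0f;                  // falling down the y axis
    particleDir->m_z = 0.1f * randomSpread();
}
```

Pairing this with a gravity physics function (as in the snow effect described above) is all it takes to get believable falling snow.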

Screenshot


Download here the code for this tutorial.

That’s it for now. Hope you enjoyed it! Improve it, create amazing effects with it and share them with me (I also like to learn things ;)). Let me know if you have bugs/questions/comments/suggestions.

Before I go, I want to thank Diogo Andrade for some tips he gave me and for bits of code from his engine that I used in this particle system.

See you in the next tutorial!


http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=01 – Setting Up An OpenGL Window
http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=06 – Texture Mapping
http://www.spellcasterstudios.com/users/diogo.andrade/ludiEngine3d_V3.2.zip – LudiEngine3d V3.2
http://www.gamedev.net/reference/articles/article1043.asp – Particle Chamber
http://www.gamedev.net/reference/articles/article1042.asp – Advanced Particle Systems
Vectors Tutorial (Mathematics Section)