Thursday, April 11, 2013

Tangent-Space Normal Mapping


  One of my earlier blogs was about normal mapping as a whole.  There, I talked about how normal maps work and the effects they add to your game.  What I didn't talk about was the difference between object-space normal maps and tangent-space normal maps.  Is there a big difference, you ask?  Well, yes.  Object-space normal maps are great if you want that bump-mapped effect on objects within your world.  There is one caveat, however: if you morph them in any way (animation), it looks hideous.  This is because the orientation stored for each vertex is fixed in object space.  The way to apply a normal map and still animate something without distorting the effect is tangent-space mapping.


  The left image shows the per-vertex basis orientations for tangent-space mapping, and the right image shows them for object-space mapping.  The grey vectors represent the surface normals, and the red, green and blue vectors represent the basis vectors.  Imagine the character is walking.  If you use tangent-space mapping (left image), the surface normals will remain correct because they are oriented in tangent space (local to each vertex).  If you use object-space mapping (right image), the normals will shift differently from the boot, because they are oriented in object space (local to the object).  This creates the distortion effect that I mentioned earlier.  To get around all this, you have to bring your lighting into tangent space.  This is done by multiplying your object-space vectors by the TBN (tangent, bi-tangent, normal) matrix, which you have to find.


  First off, you get your normal from your normal map, which is already in tangent space, so that part is done for you immediately.  Now you need to find your tangent and bi-tangent (bi-normal).  To do this, you need the deltaU and deltaV across your triangle.  Imagine your origin vertex is at (U0, V0).  You can then write the triangle edges E1 and E2 as a combination of your T (tangent) and B (bi-tangent):

                                                E1 = (U1 - U0)T + (V1 - V0)B
                                                E2 = (U2 - U0)T + (V2 - V0)B

  What you are calculating are the changes, or derivatives, of U and V.  This only needs to be done once per mesh, so it can be calculated on the CPU.  You can then arrange the results into a matrix representing your tangent and bi-tangent.  From there, you take the inverse of this matrix (just its transpose, since the basis is orthonormal) and multiply your object-space vectors by it.  This moves them from object space into tangent space, and your normal mapping will look much better, especially when you are doing animations.
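
  Below is a minimal CPU-side sketch of that calculation, solving the two edge equations above for T and B.  The Vec3 type and the function signature are illustrative assumptions, not code from any particular engine.

    struct Vec3 { float x, y, z; };

    Vec3 operator-(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
    Vec3 operator*(const Vec3& v, float s)       { return { v.x * s, v.y * s, v.z * s }; }

    // P0..P2 are the triangle's positions; (U0, V0)..(U2, V2) are its texture coordinates.
    void computeTangentBitangent(const Vec3& P0, const Vec3& P1, const Vec3& P2,
                                 float U0, float V0, float U1, float V1, float U2, float V2,
                                 Vec3& T, Vec3& B)
    {
        Vec3 E1 = P1 - P0;   // E1 = (U1 - U0)T + (V1 - V0)B
        Vec3 E2 = P2 - P0;   // E2 = (U2 - U0)T + (V2 - V0)B

        float dU1 = U1 - U0, dV1 = V1 - V0;
        float dU2 = U2 - U0, dV2 = V2 - V0;

        // Invert the 2x2 matrix of UV deltas to solve the system for T and B.
        float r = 1.0f / (dU1 * dV2 - dU2 * dV1);
        T = (E1 * dV2 - E2 * dV1) * r;
        B = (E2 * dU1 - E1 * dU2) * r;
    }

  T, B and the surface normal N then form the TBN matrix whose transpose carries your vectors into tangent space.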


GamesCon


  On Tuesday, UOIT held its annual GamesCon event in the UB and ERC buildings.  Here, students showcase the game projects they worked on throughout the year.  The event ran from 10am to 5pm, with students set up in the main lobbies.  Other students, teachers and faculty members then came around and looked at each game that was produced.


  We showcased our game "Beat Dragon", where you use music as your weapon.  This is a game we worked on heavily throughout our last semester, especially since we had to re-develop a lot of the assets from first semester.  The build we showed on Tuesday was good but, admittedly, not nearly as complete as we wished.  We had a lot of assets ready to be implemented and some great aspirations which, sadly, never made it into the game.

  
  What we had was met with a very positive response.  People loved our game, its gameplay and the concept we had developed.  We wish we had more to show them, but people thoroughly enjoyed what we did have.  They loved being able to build music as you play through the game, and praised the visual style.  This felt great, knowing that our hard work had not gone unnoticed.


  We showed our teachers the tech we included in our game: shaders like bloom and depth of field, VBOs, VAOs, and our own graphics pipeline, built by our programmer.  We got to show them the animations, levels, enemies and environmental objects we had spent three months developing.  Then we showed our music, produced and recorded by our own audio producer, and demonstrated how we divided it into layers that build on top of each other using the "Beat Timer" class within our game.

  5pm came quickly, and it was time to announce the winners.  Sadly, we didn't win, but we were proud of what we had done.  We learned from our mistakes and what we needed to build on, but we also learned what we had done right and what works well when working in a group.  Most importantly, we enjoyed producing our game; it was a valuable experience and one I will learn from.


Tuesday, April 9, 2013

Level Up!


  On Wednesday April 3rd, 2013, my group (My First Ventilation Shaft) attended the Level Up showcase at the Design Exchange in Toronto.  We were asked to showcase our game "Beat Dragon", where music is your weapon.

  I arrived early in the day, around 10am.  I met my group members in the lobby, and shortly after, we were escorted upstairs.  At that time, only UOIT students were there, so we all waited while the monitors and presentation stations were set up, touching up our game and making sure it was ready to present.  Half an hour later, students from U of T and OCADU started to come in, and we got our first glimpse at the competition.  Their games were impressive, but we had built our own engines while they were using prebuilt ones, so we had a little extra hop in our step, so to speak.


  An hour later, we were ready to head to our stations and start setting up.  We managed to grab a TV in the middle of the main area, so we were in a perfect spot to show off our game.  We quickly got everything set up and were ready to show it off to the world.  There was one problem, however: it was hours until the doors opened.  So, we waited...


  After what seemed like days, 5pm rolled around and it was time to get down to business.  People started flooding in: individuals from the industry, attendees' relatives, children, and people off the street.  It got busy, fast.  We started showing our game to anyone interested and wanting to learn more.  We got so much positive feedback from people who loved the concept of our game; even industry folks were intrigued.  It was nonstop for the next four hours.


  I spent some of those four hours walking around and looking at other groups' games.  It was really incredible to see how unique and original the games being shown off were.  You could tell everyone had put in a lot of time and effort to make their game the best it could be, and it showed.  One thing I can say from talking to participants from other schools is that UOIT really teaches you everything you need to know when it comes to games, and seeing the look on people's faces when you tell them you built the engine yourself never gets old.  You could tell people understood how hard it is to build an engine from scratch.


  It was approaching 9pm, and people were starting to leave.  We were asked to pack up our stations and hand in any remaining ballots to be counted.  Just as we finished packing, they started handing out the awards.  No one from UOIT won, but that isn't something to get upset over.  We learn how to build games from the ground up, and we will be better for it when we graduate.  However, Knescha Rafat did win a nice gaming headset, so he was pretty happy about that.

  With everything done and it getting close to 10:30pm, we decided to call it a night and head home.  It was a thoroughly enjoyable experience, and if you ever have the chance to go, I urge you to do so.  I know we will be aiming to attend next year.  Here's to hoping!

Sunday, April 7, 2013

SSAO: Screen Space Ambient Occlusion


  Screen-space ambient occlusion is a technique that approximates ambient (global-illumination) lighting in screen space.  It works by approximating the occlusion function on visible surfaces, sampling the depth of neighboring pixels.  Because this is all done in screen space, it is called SSAO (Screen Space Ambient Occlusion).  There are many different ways to achieve SSAO, but I will be focusing on the technique used in Starcraft 2.


  In the first pass, the linearized depth and surface normals are rendered to a texture.  In the second pass, a fullscreen quad is rendered, and within the fragment shader multiple samples are taken around each pixel.  Each sample is offset from the pixel in 3D space, then converted back to screen space so its depth can be compared against the depth stored in the first pass.  This tells us whether the sampled point's depth is farther or closer than the depth actually rendered there.  If the rendered depth is closer, then something is covering the sampled point.
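
  As a rough illustration, here is that second-pass logic written out as C++-style code.  The helper functions and the sample kernel are assumed stand-ins for the real shader machinery, not an actual API.

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    // Assumed engine hooks, declared here only so the sketch reads cleanly:
    Vec2  toScreenSpace(const Vec3& posVS);   // project a view-space point to texture coords
    float sampleDepthBuffer(const Vec2& uv);  // linearized depth written in the first pass
    float occlusion(float depthDelta);        // falloff function (sketched further below)

    float computeAO(const Vec3& pixelPosVS, const Vec3* kernel, int numSamples, float radius)
    {
        float ao = 0.0f;
        for (int i = 0; i < numSamples; ++i)
        {
            // Offset the sample from the current pixel in 3D (view) space...
            Vec3 samplePos = { pixelPosVS.x + kernel[i].x * radius,
                               pixelPosVS.y + kernel[i].y * radius,
                               pixelPosVS.z + kernel[i].z * radius };
            // ...convert it back to screen space...
            Vec2 uv = toScreenSpace(samplePos);
            // ...and compare against the depth from the first pass.  A positive
            // delta means the rendered surface is closer: the sample is covered.
            float depthDelta = samplePos.z - sampleDepthBuffer(uv);
            ao += occlusion(depthDelta);
        }
        return ao / numSamples;
    }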


  The occlusion function returns zero occlusion if the depth delta is negative, high occlusion if the depth delta is small and positive, and falls back to zero beyond a certain depth delta.  Here, the depth delta is the difference between the depth of the sample point and the depth read from the depth buffer at that point.
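
  A minimal sketch of that falloff; the two tuning constants here are assumptions for illustration, not Starcraft 2's actual values.

    float occlusion(float depthDelta)
    {
        const float fullOcclusionDelta = 0.05f;  // assumed: deltas up to here fully occlude
        const float noOcclusionDelta   = 0.50f;  // assumed: deltas past here do not occlude

        if (depthDelta <= 0.0f)
            return 0.0f;                         // negative delta: nothing covers the sample
        if (depthDelta <= fullOcclusionDelta)
            return 1.0f;                         // small positive delta: high occlusion
        // Fall back toward zero beyond a certain depth delta.
        float t = (depthDelta - fullOcclusionDelta) / (noOcclusionDelta - fullOcclusionDelta);
        return t < 1.0f ? 1.0f - t : 0.0f;
    }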


  After calculating the SSAO, you need to blur the result in order to reduce the noise in the picture.  This can be done with either a 2D Gaussian blur or two 1D bilateral blur passes, one in the X direction and one in the Y.  You will also want to retain the sharp edges within your scene.  This is done by sampling a pixel and checking whether its depth difference from the current pixel exceeds a certain threshold, or whether the dot product of the sample's normal with the current pixel's normal falls below a certain threshold.  If so, that sample gets a zero weight in the kernel.
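
  Here is a sketch of that edge test, assuming the kernel weight is simply zeroed when a sample crosses a depth or normal discontinuity; the threshold values are illustrative.

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    float bilateralWeight(float centerDepth, float sampleDepth,
                          const Vec3& centerNormal, const Vec3& sampleNormal,
                          float gaussianWeight)
    {
        const float depthThreshold  = 0.1f;   // assumed tuning value
        const float normalThreshold = 0.8f;   // assumed tuning value

        if (std::fabs(sampleDepth - centerDepth) > depthThreshold ||   // depth discontinuity
            dot(centerNormal, sampleNormal) < normalThreshold)         // crease in the surface
            return 0.0f;   // zero weight: the blur does not bleed across the edge
        return gaussianWeight;
    }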


  With all of this implemented, you will have then successfully developed SSAO within your game.  You can then use SSAO combined with other effects to really make your game pop, and achieve that high quality visual style.

Saturday, April 6, 2013

Displacement Maps


  Displacement maps are very similar to normal maps, except for one difference: they actually deform geometry.  If you recall my blog on normal maps, I talked about how the normals were used with lighting to create apparent depth, giving geometry detail without actually deforming it.  Displacement maps work very similarly, except now the geometry is deformed according to the map, and the detail is created through this displacement.



  In the above picture, the ground is mapped with the same texture, except one version uses normal mapping and the other uses displacement mapping.  The top one uses displacement.  As you can see, the bricks on the ground are generated by moving the geometry as specified by the displacement map.  The normal-mapped ground creates its detail through the shading the normals produce, creating apparent depth and giving the look of raised bricks.
  
  The displacement is done in the vertex shader.  The shader reads the height for each vertex from the map, and then displaces that vertex along its normal accordingly.
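
  A minimal sketch of that vertex step, written here as plain C++ rather than shader code; the height lookup and scale parameter are assumptions for illustration.

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    float sampleHeightMap(const Vec2& uv);   // assumed lookup into the displacement map

    // Move the vertex along its normal by the height read from the map.
    Vec3 displaceVertex(const Vec3& position, const Vec3& normal, const Vec2& uv, float scale)
    {
        float h = sampleHeightMap(uv) * scale;
        return { position.x + normal.x * h,
                 position.y + normal.y * h,
                 position.z + normal.z * h };
    }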


  You can also do displacement mapping through ray casting.  For this, you need the coarse mesh and the displacement texture, and you assume the displacement is negative (into the surface).  Per vertex, you determine the eye ray, light ray, etc. in tangent space.  Per fragment, you find the intersection of the eye ray with the height field (offsetting the surface point and texture coordinate) and add a shadow ray to the light source.  You walk along the ray in small steps, interpolating the height field linearly.
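
  Here is a sketch of that walk along the ray, stepping through texture space and interpolating linearly where the ray first drops below the height field; the step count and height lookup are illustrative assumptions.

    struct Vec2 { float x, y; };

    float sampleHeight(const Vec2& uv);   // assumed height-field lookup

    // rayStartUV / rayEndUV: texture coords where the eye ray enters and exits the
    // displaced region; the ray's own height runs from 1 down to 0 across the walk.
    Vec2 intersectHeightField(const Vec2& rayStartUV, const Vec2& rayEndUV, int numSteps)
    {
        Vec2  prevUV     = rayStartUV;
        float prevRayH   = 1.0f;
        float prevFieldH = sampleHeight(rayStartUV);

        for (int i = 1; i <= numSteps; ++i)
        {
            float t      = float(i) / float(numSteps);
            Vec2  uv     = { rayStartUV.x + (rayEndUV.x - rayStartUV.x) * t,
                             rayStartUV.y + (rayEndUV.y - rayStartUV.y) * t };
            float rayH   = 1.0f - t;
            float fieldH = sampleHeight(uv);

            if (rayH <= fieldH)   // the ray has passed below the height field
            {
                // Piece-wise linear: interpolate between the last two samples.
                float w = (prevRayH - prevFieldH)
                        / ((prevRayH - prevFieldH) + (fieldH - rayH));
                return { prevUV.x + (uv.x - prevUV.x) * w,
                         prevUV.y + (uv.y - prevUV.y) * w };
            }
            prevUV = uv; prevRayH = rayH; prevFieldH = fieldH;
        }
        return rayEndUV;   // no hit found: fall back to the exit point
    }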



  The interpolation is important, as you can see in the following pictures.  The top picture uses piece-wise constant interpolation.  Notice how it seems chopped into levels and isn't very smooth, with no discernible shape.  The bottom picture uses piece-wise linear interpolation and is much smoother and more accurate to the actual displacement map.  You can make out a face in the bottom picture.


  Displacement mapping is a great way to create detailed objects within your game through actual manipulation of the geometry.  The only downside is that your objects need a much higher poly count than normal-mapped objects: the more detail, the higher the poly count.  However, if your game can handle it, the payoff is worth it.

Thursday, April 4, 2013

Normal Mapping


  Normal mapping is a technique that lets you create a lot of detail on a low-poly model.  It produces high-quality, visually appealing graphics without the huge, taxing models that can slow down your game.

  The first step of normal mapping is creating a highly detailed model, showing all the fine detail that you want in your final model.  You would then create a second, significantly less detailed model.  This model has to be similar in shape to the high-detail one, or else the normal map will not be applied correctly.  This is known as creating high-poly and low-poly renditions of your character.


  After this is done, you would then create a normal map.


  This is what a normal map looks like.  The reason you see a lot of blue and red/purple is that the map stores the normals of the model.  When you apply a normal map to a low-poly model, those normals are applied across the geometry and, with lighting, create the depth and shading that show fine detail.  However, if you go right up to the model itself, you will see that the model's geometry has not changed at all.  This is why lighting is key: without it, you would not see the detail that the normal map creates.
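
  As a small illustration of how those colors are used, here is the unpacking and lighting step sketched in C++; the types and function names are assumptions, not engine code.

    #include <algorithm>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Each RGB texel stores a normal remapped from [-1, 1] into [0, 1]; a flat,
    // straight-up normal encodes as (0.5, 0.5, 1.0), which is why the map looks blue.
    Vec3 decodeNormal(const Vec3& rgb)
    {
        return { rgb.x * 2.0f - 1.0f, rgb.y * 2.0f - 1.0f, rgb.z * 2.0f - 1.0f };
    }

    // Simple diffuse term: without this lighting step, the map adds no visible detail.
    float diffuse(const Vec3& texel, const Vec3& lightDir)
    {
        return std::max(0.0f, dot(decodeNormal(texel), lightDir));
    }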


  Here, you can see a low-poly model, a high-poly model, and then the low-poly model with the normal map applied.  You can barely tell the difference between the normal-mapped model and the high-poly model, yet the normal-mapped model is nowhere near as expensive to render as the high-poly one.  This is what makes normal mapping so fantastic and so helpful: you can create these amazingly detailed models without destroying your frame rate.

  Since this is an inexpensive way to produce high-quality models and creates visually appealing graphics, it is used extensively in games today.  You can normal map anything, and in the end, you get a gorgeous game.


Tune in for my next blog!

Tuesday, April 2, 2013

Depth of Field


  Depth of field helps add depth to your game through different focal planes.  There are two camera models to consider: pinhole and thin lens.


  The pinhole model (shown on the left) lets only a single ray through per point, so the image is always in focus.  While the scene might be in 3D space, it can appear flat, since there is no depth within the field of view.  The thin-lens model instead gathers multiple rays through a lens, which redirects them toward the image.  If the rays converge to a single point, that point is in focus and lies on the focal plane.  If the rays land scattered, they all contribute to the image and create a circle of confusion.  This circle of confusion describes how blurry the object is: the bigger the circle, the more out of focus it is.
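
  For reference, here is the standard thin-lens circle-of-confusion formula sketched in C++ (my own addition for illustration; the parameter names are assumptions).

    #include <cmath>

    // Diameter of the circle of confusion for an object at objectDist, with the
    // lens focused at focusDist.  All distances share the same units.
    float circleOfConfusion(float objectDist, float focusDist,
                            float focalLength, float aperture)
    {
        return aperture
             * (std::fabs(objectDist - focusDist) / objectDist)
             * (focalLength / (focusDist - focalLength));
    }
    // An object exactly on the focal plane (objectDist == focusDist) yields a
    // circle of size zero: perfectly in focus.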


  The above picture shows three gnomes in a forest.  You can see the middle gnome is perfectly in focus, so it falls on the focal plane.  The gnome closest to us is quite blurry and falls on the near plane; its circle of confusion is quite large, hence the extreme amount of blur.  Lastly, the gnome off in the distance falls on the far plane and also has a large circle of confusion.  These planes help convey the depth of the image and the distance of each gnome in the scene.



  The blur is created by downsampling the in-focus image and then applying a Gaussian blur.


  To do this in your game, you take the circle of confusion, set a centre sample, and then randomly sample points within the circle using either stochastic sampling or a Poisson-disk distribution (Poisson is the preferred approach).  You then feed these samples into a filter.
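
  A sketch of that sampling step, assuming a small precomputed table of Poisson-disk offsets; the helper names are illustrative, not a real API.

    struct Vec2 { float x, y; };
    struct Vec3 { float x, y, z; };

    Vec3 sampleColor(const Vec2& uv);   // assumed lookup into the (downsampled) scene

    Vec3 depthOfFieldBlur(const Vec2& centerUV, float cocRadius,
                          const Vec2* poissonOffsets, int numSamples)
    {
        Vec3 sum = sampleColor(centerUV);   // the centre sample
        for (int i = 0; i < numSamples; ++i)
        {
            // Scatter the remaining samples within the circle of confusion.
            Vec2 uv = { centerUV.x + poissonOffsets[i].x * cocRadius,
                        centerUV.y + poissonOffsets[i].y * cocRadius };
            Vec3 c  = sampleColor(uv);
            sum = { sum.x + c.x, sum.y + c.y, sum.z + c.z };
        }
        float inv = 1.0f / float(numSamples + 1);
        return { sum.x * inv, sum.y * inv, sum.z * inv };
    }

  An in-focus pixel has a circle of confusion near zero, so every offset collapses onto the centre and the pixel stays sharp, which matches the filter sizing described below.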


  The filter is then sized based on whether the point is in focus or blurred.  The more blurred it is, the bigger the circle of confusion and the more pixels it affects in the kernel.  If it is in focus, it affects a single pixel within the kernel.


  The end result is a photo-realistic image which conveys the proper depth within the scene, and accentuates the object which is the main focus.  Tune in soon for my next blog post!