Monday, December 22, 2008

Starfields & Nebulas

Instead of focusing on craters (which I implied I was going to do last time), I decided to look into generating starfields and distant (i.e., 2D) nebulas.

I started on simple starfields and I had a working solution in less than an hour by simply drawing star sprites in random positions.
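
In case it's useful, here is a minimal sketch of the idea in XNA terms. The names starTexture, spriteBatch, viewportWidth, and viewportHeight are assumed to already exist, and the real version scatters the stars in 3D directions rather than on a flat 2D screen:

// generate random 2D star positions once (e.g. in LoadContent)
Random random = new Random();
List<Vector2> starPositions = new List<Vector2>();
for (int i = 0; i < 1000; i++)
{
    starPositions.Add(new Vector2(
        (float)random.NextDouble() * viewportWidth,
        (float)random.NextDouble() * viewportHeight));
}

// draw the star sprites every frame (in Draw)
spriteBatch.Begin();
foreach (Vector2 position in starPositions)
{
    spriteBatch.Draw(starTexture, position, Color.White);
}
spriteBatch.End();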

This yields the following result:


Next, in order to make the stars more interesting, I looked into implementing nebula clouds. I knew that some summation of Perlin Noise would be the best solution. So, I decided to implement a Perlin Noise function in C#. I have a really fast implementation in HLSL, but I wanted the full power of breakpoints and variable watches. I ended up using a standard fBm summation with an exponential filter applied to it.
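
For reference, here is a rough C# sketch of the kind of summation I mean. It assumes a PerlinNoise3D(x, y, z) function (the one mentioned above) already exists and returns values in roughly the [-1, 1] range; the octave count, gain, lacunarity, and the exact exponential filter constant below are illustrative rather than my actual values:

// standard fBm: sum several octaves of noise, each at double the frequency and half the amplitude
static float Fbm(float x, float y, float z, int octaves)
{
    float sum = 0f;
    float amplitude = 1f;
    float frequency = 1f;
    for (int i = 0; i < octaves; i++)
    {
        // PerlinNoise3D is assumed to be the C# Perlin Noise implementation described above
        sum += amplitude * PerlinNoise3D(x * frequency, y * frequency, z * frequency);
        amplitude *= 0.5f;   // gain
        frequency *= 2f;     // lacunarity
    }
    return sum;
}

// exponential filter applied to the fBm value to shape the cloud density
static float CloudDensity(float x, float y, float z)
{
    float n = Math.Max(0f, Fbm(x, y, z, 6));      // clip away the negative noise
    return 1f - (float)Math.Exp(-4f * n);         // exponential falloff toward full density
}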

This helped to produce this nebula cloud image:


I still need to blend these two results together and construct them into a decent cubemap.

World of Warcraft + 360 Controller

All of that work on the starfields and nebulas was done a couple of weeks ago. My co-workers recently pressured me into playing World of Warcraft again. (About 6 months ago, I got up to level 18, and then quit.) As I was playing, I kept thinking about how bad the controls were. I was moving my hands all over the keyboard to use all of my skills. It was fine for just grinding along and completing quests, but in groups and duels I always performed horribly. I desperately wanted to play with a game controller, but World of Warcraft didn't support them. There were a couple of applications that other people had written to allow gamepads, but I didn't care for them either. One was very awkward and built up macro commands that you then had to "submit" by pulling the right trigger. Another one seemed decent, but it had a subscription fee of $20 a year.

I decided that if I wanted something done right, I had to do it myself. So, I threw together a quick XNA application that read the 360 controller input. It would then use the Win32 function SendInput to place keyboard and mouse input events into the Windows input queue. I mapped out all of the controls and I was then able to play World of Warcraft using a 360 controller! I can target enemies, target allies, use all 12 skills on the current action bar, switch action bars, loot corpses, and even move the camera with full analog control using the 360 controller! It has completely changed the game for me and I even won my first duel last night.
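
For anyone curious about the SendInput part, here is a trimmed-down sketch of the approach. The P/Invoke declarations are the standard ones for user32.dll; the class name, the Tap helper, and the A-button-to-'1'-key mapping are just examples, not my exact code:

using System;
using System.Runtime.InteropServices;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Input;

static class KeyboardSender
{
    [StructLayout(LayoutKind.Sequential)]
    struct MOUSEINPUT
    {
        public int dx, dy;
        public uint mouseData, dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct KEYBDINPUT
    {
        public ushort wVk, wScan;
        public uint dwFlags, time;
        public IntPtr dwExtraInfo;
    }

    [StructLayout(LayoutKind.Explicit)]
    struct InputUnion
    {
        [FieldOffset(0)] public MOUSEINPUT mi;
        [FieldOffset(0)] public KEYBDINPUT ki;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct INPUT
    {
        public uint type;       // 1 = INPUT_KEYBOARD
        public InputUnion u;
    }

    [DllImport("user32.dll", SetLastError = true)]
    static extern uint SendInput(uint nInputs, INPUT[] pInputs, int cbSize);

    const uint INPUT_KEYBOARD = 1;
    const uint KEYEVENTF_KEYUP = 0x0002;

    // press and release a virtual key, e.g. 0x31 for the '1' key
    public static void Tap(ushort virtualKey)
    {
        INPUT[] inputs = new INPUT[2];
        inputs[0].type = INPUT_KEYBOARD;
        inputs[0].u.ki.wVk = virtualKey;
        inputs[1].type = INPUT_KEYBOARD;
        inputs[1].u.ki.wVk = virtualKey;
        inputs[1].u.ki.dwFlags = KEYEVENTF_KEYUP;
        SendInput((uint)inputs.Length, inputs, Marshal.SizeOf(typeof(INPUT)));
    }
}

// inside Update(): edge-detect a button press and send the mapped key
// (previousPad is a GamePadState field holding last frame's state)
GamePadState pad = GamePad.GetState(PlayerIndex.One);
if (pad.Buttons.A == ButtonState.Pressed && previousPad.Buttons.A == ButtonState.Released)
{
    KeyboardSender.Tap(0x31);   // hypothetical mapping: A button -> the '1' key (action bar slot 1)
}
previousPad = pad;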

I have now taken the app and added in mappings for Call of Duty 4 for my brother. I'm planning on converting the project to be more generalized so that the input mapping can be defined in XML and easily changed without recompiling.

Thursday, December 4, 2008

Lighting Revisited (AKA I'm Back!)

So, after almost 3 months of being sidetracked by various other projects I came up with, I finally decided to come back to C# and XNA with open arms.

And I have come back in a big way, in my opinion. After discussing it with one of my friends, I decided that it would be best to procedurally generate a moon instead of a planet. This greatly simplifies many things; no atmosphere, no fog, no water, no vegetation, etc. However, this meant that I needed to figure out how to properly do two things: generate craters and have realistic lighting.

You may recall that I had a previous blog entry about my attempts with realistic lighting. [http://recreationstudios.blogspot.com/2008/07/do-you-see-light.html] In the end, I determined that it would be too difficult to do now, and easy to do in the future once I had geometry shader support. So, I had settled with simple spherical lighting across the entire planet.

It took me quite a bit of effort to even start looking at alternative ways to render realistic lighting. In the past, I was trying to calculate the other two vertices of the triangle in the vertex shader. It was with a great facepalm that I realized that what I was really trying to find was the tangent (the slope) of the surface, which I could then use to adjust the spherical normal. As you may remember from your math classes, the slope of a function comes from taking its derivative. So, I looked into quite a few articles about how to find the derivative of a Perlin Noise function.

This article was very informative, but I wasn't quite sure what the speed would be like on the GPU:
http://rgba.scenesp.org/iq/computer/articles/morenoise/morenoise.htm

Finally, I found an article by Ken Perlin himself that shows how to approximate the derivative:
http://http.developer.nvidia.com/GPUGems/gpugems_ch05.html (Section 5.6)
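
In symbols, the idea is just a forward-difference approximation of the noise gradient, which is then subtracted from the spherical normal (this is my paraphrase of the technique, matching the function below):

\nabla F(p) \approx \frac{1}{e}\Bigl(F(p + e\hat{x}) - F(p),\; F(p + e\hat{y}) - F(p),\; F(p + e\hat{z}) - F(p)\Bigr)

N' = \mathrm{normalize}\bigl(N - \nabla F(p)\bigr)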

With this little nugget of knowledge, I went into my shader and wrote a function to do exactly that.

float3 NoiseDerivative(float3 position, float e)
{
    // calculate the height values using Perlin Noise
    float F0 = ridgedmf(position * noiseScale, 8);
    float Fx = ridgedmf(float3(position.x + e, position.y, position.z) * noiseScale, 8);
    float Fy = ridgedmf(float3(position.x, position.y + e, position.z) * noiseScale, 8);
    float Fz = ridgedmf(float3(position.x, position.y, position.z + e) * noiseScale, 8);
    float3 dF = float3(Fx - F0, Fy - F0, Fz - F0) / e;

    // calculate the actual normal
    return normalize(position - dF);
}

Then I simply added a call to this function in my pixel shader:
input.Normal = NoiseDerivative(input.Normal, 0.000001);

Lo and behold, it worked!

Check out the pics:


Friday, November 7, 2008

Sidetracked

Wow, it has been a while and I haven't really had time to work on my graphics projects. I have been really busy at work. When I do finally make it home, I usually get sidetracked with other things. I finally got to level 60 in Mass Effect. I am over halfway through the excellent novel Prey by Michael Crichton (I can't believe he's dead now). I have also been piecing together ideas for my own novel (I've always wanted to write a novel).

In terms of the DirectX 10 work, I did manage to get multiple render targets working. However, I got sidetracked by a bright idea I came up with involving using "strokes" generated in the geometry shader to do some NPR post-processing. I was having a hell of a time trying to get multiple vertex buffers working. As I was investigating, though, I found out that it wouldn't work anyway because the buffer size is limited to 16 bits (65,536 vertices). I needed more than that ... a lot more. It would take a rather long explanation to cover why, so I'll just say that I was trying to essentially use the vertex shader as a pixel shader.

XNA 3.0 is out now, and I'm really surprised that I have not yet downloaded it. It sounds like they added in some nice features though (new Media classes, ClickOnce deployment, etc.).

I am really missing C#, so I may just go back to XNA and just wait for it (or some other C# graphics library) to get Geometry Shader support.

I was also reading about some of the features that are going to be in C# 4.0, and I am really excited. They are finally adding in optional parameters. Plus they are adding a dynamic type, which could prove useful in some cases.

Tuesday, October 7, 2008

It's getting close

I've been continuing my work in DirectX 10, and it has been relatively slow progress, but progress nonetheless. I managed to render the simple character model that comes with the DirectX SDK using the first pass of my pencil shader. The first pass applies 6 different pencil textures based upon how lit that part of the model is. It is currently using the given texture coordinates of the model, so it doesn't look as good as it potentially could. That's an improvement for another time though.

I have been busy working on getting the second pass of the pencil shader working. It's pretty much just a Sobel filter that is applied to the normal map of the scene. In order to get the Sobel filter working though, I had to first figure out how to do post-processing image filters in DirectX 10. Obviously the first step was to render a full-screen textured quad. In XNA, I "cheated" and just used a SpriteBatch. It's not the most efficient way, but it allowed for the fastest development.
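
For reference, the Sobel filter is nothing more than a pair of 3x3 convolution kernels, one for horizontal gradients and one for vertical:

G_x = \begin{pmatrix} -1 & 0 & +1 \\ -2 & 0 & +2 \\ -1 & 0 & +1 \end{pmatrix}, \qquad G_y = \begin{pmatrix} -1 & -2 & -1 \\ 0 & 0 & 0 \\ +1 & +2 & +1 \end{pmatrix}

The edge strength at a pixel is then \sqrt{g_x^2 + g_y^2}, where g_x and g_y are the two kernel responses. Running it over the normal map rather than the color image makes it pick out creases and silhouettes instead of texture detail.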

There ARE sprites in DX10, but I figured I might as well do it the "proper" way this time. So, I set up my own vertex format (with Position and Texture Coords), created my own vertex buffer (with the 4 vertices positioned in clip space), and my own index buffer. I then wrote a simple shader that passed the vertices straight through the vertex shader and applied the texture in the pixel shader. I fired up the app for the first time and Vista completely froze! Ctrl-Alt-Del didn't work, the mouse didn't move, nothing. So I shut it down and restarted and got the Blue Screen of Death! I booted back into Safe Mode and it said that my video card driver had gotten screwed up. I was able to boot back into Windows regularly the next time without changing anything.

I set a breakpoint in my code and stepped through the Update/Render loop about a dozen times without any errors, so I just let it run free again. The exact same problems occurred, causing another reboot. I had no idea what I was doing to screw up my computer, so I went through several of the DX tutorials line by line. I realized that I had forgotten to tell the video card what my vertex input layout was. I added in the 3 lines of code to do it, and bam, everything worked! So I now have full-screen textured quads working in DX10. An added bonus: since I positioned the vertices in clip space, it works at any resolution without requiring any updates or changes, and no matrix multiplications are needed at all.
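
For comparison, here is roughly what such a clip-space quad looks like in XNA terms (a sketch for illustration, not my actual DX10 C++ code). Because the positions are already in the -1 to +1 clip-space range, the vertex shader can pass them straight through:

VertexPositionTexture[] quadVertices = new VertexPositionTexture[]
{
    new VertexPositionTexture(new Vector3(-1f,  1f, 0f), new Vector2(0f, 0f)),  // top-left
    new VertexPositionTexture(new Vector3( 1f,  1f, 0f), new Vector2(1f, 0f)),  // top-right
    new VertexPositionTexture(new Vector3(-1f, -1f, 0f), new Vector2(0f, 1f)),  // bottom-left
    new VertexPositionTexture(new Vector3( 1f, -1f, 0f), new Vector2(1f, 1f)),  // bottom-right
};
short[] quadIndices = { 0, 1, 2, 2, 1, 3 };   // two triangles that cover the whole screen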

The next task I have to work on is to get multiple render targets working in DX10. I need this because, as I mentioned earlier, my goal is to render a normal map of the scene and then pass that normal map into the Sobel filter shader.

Friday, September 26, 2008

DirectX 10

I've decided to finally start dabbling with DirectX 10. The main reason behind it is that I want to work with Geometry Shaders. There is no official C# wrapper around DX10, but I did manage to find two third-party ones. One had not been updated since April 2006, so I threw that one out the window. The other, SlimDX, has been in very active development and a new version was just released this month. I downloaded it and played with some of the samples and was rather impressed. However, SlimDX also provides a wrapper around DX9 and it is very clear that that was the priority of the project. There are quite a few things missing from the DX10 wrapper that would have been very useful.

So, instead of fighting through a third-party C# wrapper, I figured I might as well go back to the straight C++ version. This has the advantage of having tons of official samples and tutorials from Microsoft. However, it has the major disadvantage that I have not worked in C++ directly for almost 4 years. It is rather rough going back to C++ from C#. There were so many helpful things built into C# that aided in rapid development. I mean, C++ doesn't even have a built-in string type; you have to use char arrays. (Edit: I forgot that it does support strings, but you have to include the header for it first. Doh!) I feel like I'm stepping back into the dark ages! :-P

I'm hoping to have just the basic C++ code to draw a model using a shader, and then all of the rest of my work will be in HLSL. We shall see!

(Edit: I found a really helpful C++ tutorial for getting me back up to speed. http://www.cplusplus.com/doc/tutorial/ )

Sunday, September 14, 2008

Switching Gears

So I was chatting with one of my friends about what was up next on the todo list for my procedural planet. I explained that I was working on fog and that after that I would focus on water. He suggested that I get HDR and Depth of Field put in next. I then mentioned how I want to get Atmospheric Scattering in as well.

With all of these features laid out, the project seems rather daunting. That is easily several months' worth of work right there. (I still do have a full-time job!) I started thinking about how to make it easier, yet still make it stand out. I mean, no matter how many awesome-looking things I put in, I will never even come close to the level of Crysis. I am only one guy! So, I decided I should focus more on something that would be feasible for a team of one to accomplish in a lifetime.

Time for a personal history lesson!

Back when I first started fiddling around with XNA (over 2 years ago now), I also started looking into shaders for the first time. I have always been interested in Non-Photorealistic Rendering (cel shading, painterly rendering, hatching, etc), so I thought that might be a good subject to test with shaders. In my research, I came across a great paper about doing real-time pencil sketch rendering. http://cg.postech.ac.kr/research/pencil_rendering/

In January 2007, I began work on my own pencil shader. I actually started with 4 separate shaders that would require all models to be rendered 4 times. Fairly quickly, I realized that I could speed things up greatly by having 1 shader with 4 passes. I tinkered around with it, and other things, for several months before I found out about using multiple render targets. Using those, I was able to get it down to 3 passes. Not long after that, I demoed the shader in a job interview and I actually got a job as an XNA Game Developer! That job lasted until the company got bought out by a larger company. Toward the end of my employment there, I started getting into procedural generation, and that's how I came to have the procedural planet that I have today.

As you know, I have a new laptop now, and during the transfer of files from my old laptop, I saw my old pencil shader again. It had gone untouched since the day I demoed it in my job interview. Yesterday I finally decided to take all of the code and update it to XNA 2.0 and clean up what I could. I realized that I had learned quite a bit over the last year about both XNA and C#, so I was able to make the code much cleaner. Not only that, but I was also able to get the shader down to 2 passes! This gave me a 100fps speed boost. (On my GeForce 9800 GT, the 4 pass = 470fps, 3 pass = 480fps, 2 pass = 580fps.)

This has really given me a desire to work on NPR stuff again. So, currently my next goal is to have a procedural planet that is rendered using my pencil shader.

Tuesday, September 2, 2008

Procedural Texturing

Well, I am now producing what are, in my opinion, decent procedural textures for the planetary terrain. They are generated per pixel every frame. This has the advantage of allowing infinite resolution as well as eliminating any texture coordinate warping around the sphere, since the textures are actually 3D.

As I mentioned before, this does come with a significant performance hit. I have managed to simplify the texture generation to the point of having reasonable framerates while maintaining decent texture quality. What I am doing is calculating 4 separate textures using 2 octaves of turbulence noise at different scales for each one. These 4 textures represent sand, grass, rock, and snow. They are then blended together based upon the terrain height to generate the final texture.
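
To make the blending idea concrete, here is a CPU-side C# sketch of the kind of height-based weighting I'm describing. The real version runs per pixel in the shader, and the thresholds below are illustrative, not my actual values:

// height is normalized to [0, 1]; returns blend weights for (sand, grass, rock, snow)
static Vector4 TerrainBlendWeights(float height)
{
    float sand  = MathHelper.Clamp(1f - height / 0.25f, 0f, 1f);
    float grass = MathHelper.Clamp(1f - Math.Abs(height - 0.40f) / 0.25f, 0f, 1f);
    float rock  = MathHelper.Clamp(1f - Math.Abs(height - 0.70f) / 0.25f, 0f, 1f);
    float snow  = MathHelper.Clamp((height - 0.85f) / 0.15f, 0f, 1f);

    Vector4 weights = new Vector4(sand, grass, rock, snow);
    return weights / (weights.X + weights.Y + weights.Z + weights.W);   // normalize so they sum to 1
}

The final color is then just the four generated textures multiplied by their weights and summed.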

I am getting anywhere between 25 and 50 frames per second, depending upon how much of the viewport the terrain takes up. The average for normal travel is 35 frames per second.

Here are some screenshots of the results.

Monday, September 1, 2008

New Laptop, New Opportunities

It's been a while since I've written an update. As expected, I got slightly sidetracked by Too Human, but what was an even worse offender was Mass Effect. I picked it up in a sale at Toys R Us, and I became addicted to it. I am currently working on my third playthrough of the game.

I finally received my new laptop 3 days ago. I haven't had much time to do programming on it because I had to first install all of my desired applications, transfer all of my files from my old laptop, and then prepare my old laptop for my sister.

I did manage to run a "benchmark" on the new laptop though. In the past, I wrote an XNA app that generates a texture using Perlin Noise in a pixel shader. This is usually a good evaluator of a GPU's performance. Here are some specs:

Generating 12 octaves of 1024x1024 3D fractional Brownian motion Perlin Noise:
- 8600 GT: ~20 frames per second
- 9800 GT: ~80 frames per second

Yep! My new laptop's GPU is 4 times faster than my desktop's! Just to put a little perspective on it, these results mean that my new GPU was performing over 1 billion Perlin Noise calculations a second!
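
For the curious, that billion figure is just the straightforward multiplication:

12 \text{ octaves} \times 1024 \times 1024 \text{ pixels} \times 80 \text{ frames/s} \approx 1.0 \times 10^{9} \text{ noise evaluations per second}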

I also started fiddling around briefly with generating procedural textures for the planetary terrain. My idea was to use 4 different noise calculations per pixel and then blend the results together based on the planetary height of the pixel. As expected though, there was a severe penalty for doing a noise calculation per pixel. I never did get up to 4 calculations. Here are my initial findings, on my 9800 GT of course!

- ~120 fps using 4 existing textures
- ~40 fps using one 8-octave ridged multifractal per pixel
- ~30 fps using two 8-octave ridged multifractals per pixel

The frame rate doesn't scale linearly with the number of calculations, so I expect that if I did 4 noise calcs, I would be getting about 20 fps. However, that is on a 9800 GT, which is one of the best cards on the market right now. Sure, there are about half a dozen better cards, but how many people really have them?
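
A rough back-of-the-envelope check of that 20 fps estimate, done in frame times (which add up linearly, unlike frame rates):

1/40\,\mathrm{s} \approx 25\,\mathrm{ms}, \quad 1/30\,\mathrm{s} \approx 33\,\mathrm{ms} \;\Rightarrow\; \approx 8\,\mathrm{ms} \text{ per extra fractal}; \qquad 33 + 2 \times 8 \approx 49\,\mathrm{ms} \approx 20\,\mathrm{fps}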

I do have some ideas to potentially speed it up though. I could reduce the number of octaves, but then increase the scale. I could also try to make a fancy gradient so that I don't have to use as many different noise calcs.

Here are some screenshots showing different simple gradients using just 1 noise calc.

Sunday, August 17, 2008

Relaxing

I decided that since I had accomplished so much by pretty much working non-stop for almost a month, I was going to take a break for a bit. So, I have not worked on my procedural planet for this past week. Of course, I could never really break myself away from it, so I have been reading books and websites to research the next phase. I even put together an HLSL project to do some experimentation. So, don't worry, progress is still being made, just at a much slower rate now. I'm guessing I will start working on it again this week. After I get my new laptop (sometime after the 28th) I'm hoping to really pick up the pace again. Although, I do have a word of warning: I may get distracted with DirectX 10 and Geometry Shaders. We shall see. (Not to mention that Too Human comes out on Tuesday, Tales of Vesperia next week, Infinite Undiscovery the week after, and Spore the week after that! Too many good games coming out so soon.)

Until next time.

Sunday, August 10, 2008

Success!

I have completed the items on my to-do list!

I finally stopped the terrain from disappearing (for the most part ... I still had two instances where it happened). I discovered that I had a key subtraction backwards. How it even worked at all before, I will never know.

I got each terrain level to move at its own stepping distance by storing and updating a world matrix for each level. Luckily my CPU was hardly being used at all so I could easily spare the CPU cycles to do these extra calculations.

I fixed the gap between separate levels by simply making the square mesh slightly larger. Previously, the mesh would extend from (-1, -1) to (1, 1), but I updated it so that the range is (-1.25, -1.25) to (1.25, 1.25).

Now for what you've been waiting for: a new video. In fact this time I have two new videos for your enjoyment. One showing off the new terrain, and another showing it in wireframe mode.

Normal Terrain:


Wireframe:


On a separate note I just ordered a new laptop, so I will finally be able to do development on my laptop again. My current laptop only has a GeForce 6100, so it couldn't handle my shaders anymore (no unified pipeline). My new laptop has a GeForce 9800 GT which is about 20 times faster than my current GPU. It will be awesome!

Friday, August 8, 2008

Huh?

Well I beat Braid last night. I think it took me about 6 hours total to beat it. It's a very good game and I highly recommend it. That is, if you enjoy "action puzzle" type games (think along the lines of Portal). There are 60 puzzles in the game and I was able to do 59 of them myself. I couldn't figure out one of them and I tried it for an hour. I finally looked online and I learned about a dynamic of the game that I wasn't even aware of.

I had read a posting on the XNA forums about how matrix transformations can cause slight errors over time, so I added code after my transformations to renormalize the vectors and even recalculate the cross products in order to maintain orthogonality. However, when I fired up the program, the terrain was disappearing even more! The angle between the two vectors is still randomly coming up as NaN and Pi.

Sigh, well I guess I know what I'm working on this weekend.

Thursday, August 7, 2008

To-Do List

Here is a list of things that I want to get working before I create the next video:

- Fix disappearing terrain
- Make each level move at its own step size (right now they all move at the smallest step size)
- Fix gaps between changing detail levels (you can see the gaps in the last screenshot I posted)

As for the disappearing terrain, in my previous post you will see that I've narrowed the problem down to the angle calculation. I've read the documentation on acos() and it says that NaN is returned if the input parameter is < -1 or > +1. I'm passing in the dot product of what should be two unit-length vectors, so the input should never exceed 1 in magnitude. Apparently I'm not sending in truly normalized vectors, so I'm going to add more normalization calculations, just to make sure they really are unit length. That should remove the NaN results, and hopefully it fixes the Pi results as well. (I'm crossing my fingers tightly.)
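
The defensive version I have in mind looks something like this (a sketch; the variable names are made up):

Vector3 a = Vector3.Normalize(currentCameraPosition);
Vector3 b = Vector3.Normalize(oldCameraPosition);
float dot = MathHelper.Clamp(Vector3.Dot(a, b), -1f, 1f);   // guards against values like 1.0000001
float angle = (float)Math.Acos(dot);                        // now always a real angle in [0, pi]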

I'll give it a try when I get home.

Wednesday, August 6, 2008

No real progress

I have been busy playing Braid, which is a great game that just got released on the Xbox Live Arcade. Therefore, I haven't really put much time into the procedural planet.

I did put in some code that would always display the calculated angle between the camera and the terrain mesh in order to see if that was causing the disappearing errors. I actually believe it is, although I'm not quite sure why.

First, I was surprised to see that probably a quarter of the time, the result was NaN. Even more surprising is that my terrain works perfectly even when the result is NaN.

The errors occur when the result suddenly comes back as pi randomly. I should never be getting an angle that big back. Heck I shouldn't ever be getting pi/2.

I will continue to investigate this problem (when I have time between Braid of course).

Monday, August 4, 2008

Nothing is easy

I thought that my camera was moving way too slowly after I increased the size to 6.5 million meters, so I increased its default speed. I also have a couple of keys that act as multipliers in order to get really high speeds. Unfortunately I realized that my terrain couldn't keep up with me because I was moving too fast. This is because the terrain would only move a maximum of 1 step each loop. So, if I traveled 3 steps' distance in one update loop, the terrain would fall 2 steps behind.

I wrote a solution to this by calculating the number of steps the camera has moved and then rotating the terrain the appropriate number of steps. This worked great and the terrain always kept up with the camera. BUT! Yes, the big dreaded but. The terrain randomly disappears. It will be fine for quite a while and then flash on and off like crazy. Even worse, sometimes it completely disappears and doesn't return, no matter which way I turn or move.

I have no idea what is causing this problem and I have tried debugging the code for a while now. So either I find a fix for this bug, or I'll have to limit the speed to be slow enough that the terrain can keep up.

I feel bad for not having any media for so long, so here is a screenshot to appease the horde. I desperately need to get better texturing on the terrain, but that will come after I get the terrain working to my satisfaction.

Saturday, August 2, 2008

Everything revolves around the camera

Well, I fixed the problems I was having with the camera. First, let me explain why I was having trouble. I had my planet centered at the origin (0,0,0) and I gave it a radius of about 6.5 million meters. I initialized the camera position to be right at ground level of the planet (6.5 million meters plus the height of the tallest mountain [I use 100,000 meters]). My camera always looks at a spot 1 meter in front of its position. Once you are dealing with positions up in the millions (about 6.6 million), the floating-point precision is lost, so it doesn't always register that 1-meter difference.

I found an article online where someone was having similar issues with having the sun at the origin and having the camera out at Earth's distance (about 150 billion meters = 1 AU). He fixed the problem by always making the camera situated at the origin and just positioning all objects based on the camera position. I thought I would give that a try.

I didn't have translation code for my planet yet, so I had to first write that and get it debugged. Once I had that working, it was pretty simple to switch the coordinates around (I added 2 lines of code and commented 1 out). Now my camera works perfectly even when dealing with a planet with a radius of 6.5 million.
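
In XNA terms, the switch amounts to something like this (a simplified sketch with made-up variable names, not my exact code):

// before: the planet sat at the origin and the camera was 6.5 million meters away
// after: the camera stays at the origin and everything else is offset by -cameraPosition
Matrix world = Matrix.CreateTranslation(planetPosition - cameraPosition);
Matrix view  = Matrix.CreateLookAt(Vector3.Zero, lookTarget, Vector3.Up);   // lookTarget = the point 1 meter in front of the origin-centered camera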

I also managed to tweak the Z-fighting occurring with the depth buffer. It looks a lot better than it did, but it's still not perfect and there are still triangles fighting with each other on the horizon. As soon as I have that fixed, I'll post some more screenshots.

Thursday, July 31, 2008

Planets are Too Big!

Last night I decided to increase the scale of the planet to Earth's size. The original size of the radius was 60,000 meters. Earth's radius is about 6.5 million meters. So, I had to make my planet over 100 times larger.

Unfortunately, all sorts of problems arose from attempting to do this. First, my depth buffer went to crap. If I wanted to be able to see anything from even a low altitude, I had to extend the far clipping plane back to at least 1 million meters. This resulted in all sorts of errors, with triangles fighting over their depths because the depth range was being stretched over such a long distance.

The second problem was that my camera was rotating in "steps". If I moved the mouse up, instead of smoothly rotating the view upwards, it would snap every Pi/4 or so. This snapping problem was alleviated by shrinking the size of the planet. Shrunk in half, it was still too bad to use; shrunk by 4x, it was much better; and the problem was completely gone if I shrank it by 10x.

I really need to fix all of these problems so that I can have Earth-sized planets. There are many other planets that are even larger than Earth so this is very important. I have found some articles online where people have found solutions to these problems. I'm just unsure of how feasible these solutions are for XNA (they are all OpenGL solutions).

On a side note, I switched my Perlin noise function from fBm to ridged multifractal and it looks much better. I am very impressed with the terrain results. I would post screenshots, but with the depth issues and such, I figured I would wait.

Wednesday, July 30, 2008

Go Simple, Go Fast

I've made the executive decision to just go with the simple spherical lighting. It's already working and it's 3 times faster than the surface normal generation method. Besides, it will be simple to calculate the surface normal in a geometry shader later. Unfortunately XNA doesn't support geometry shaders currently. However if it ever adds that functionality in the future, I'll be able to add very realistic lighting rather quickly.

I'm now going to move onto something else. What that is exactly, I'm still not sure of. I have a list of TODOs, but I need to determine which one is a higher priority right now.

Tuesday, July 29, 2008

Do You See the Light?

I promised myself that I wouldn't post another update until I had lighting on my planetary terrain. I have partially fulfilled that promise.

While I did successfully get lighting on the terrain, it is not as realistic as I want it. I currently have two solutions. The first is that I'm simply illuminating the planet as a sphere. While this looks great in places where the light is coming straight down, it looks bad on the edges where there should be long shadows. (Ignore the square-ish terrain.)

The second solution I have actually tries to calculate the surface normal in the vertex shader. I estimate where the neighboring vertices are to the right and above the current vertex. I calculate the two edge vectors and then take the cross product of them to acquire the normal of the surface. While this all sounds good and makes sense in theory, it isn't yielding the correct results. I have random shadow and light "splotches" scattering over the planet, except for one quarter-sphere (is that what a half of a hemisphere is called?) that turns out all black.

On a rather strange side note, if I use the XNA screenshot component that I wrote, the shadows all come out as white. If I use Alt-Print Screen and capture the whole window, they come out as black (which is what is actually displayed). Check out the white versions:
(Edit: Ha ha! I didn't realize until I posted the screenshots that they aren't coming out as white, they are coming out as transparent, which makes much more sense!)

Wednesday, July 23, 2008

Rotation Solved!

Well I implemented my pseudocode in XNA last night, made just some minor changes, and then I had a working rotating mesh. The guidance I received from Steve Hazen on the official XNA forums really helped me out and pointed me in the right direction.

I would post the code here, but I don't know how to post it without it looking atrocious.

I have the code posted in the XNA forums:
http://forums.xna.com/forums/p/14547/76204.aspx

As you can see from the final postings, Steve recommends some tweaks to the code that would increase accuracy and make it more efficient.

I'm really happy to finally have this problem solved. It took me four days to finally get a working solution. Now I can move on to the next item to implement. It's going to be tough as well, but nowhere near as tough as the rotation. At least that's what I'm hoping.

Until next time...

Tuesday, July 22, 2008

Rotation Solution?

I think I may have a solution to the rotation problems I was having. I have pseudocode scrawled out on some paper I found. (Along with several drawings of 3D axes with angles, cones, and cameras in various positions.)

My goal tonight is to implement the pseudocode in XNA and experiment with it. I'll report my findings tomorrow.

Until next time...

Monday, July 21, 2008

Rotation Issues

I have been working hard on coming up with a new LOD algorithm that eliminates that pesky "water" effect. I think I have a pretty good idea for a system that would not only remove that problem, but would also make the vertex shader faster.

Unfortunately, I am currently stuck facing some problems with rotations. I have a mesh that I want to rotate around a sphere so that it always points at the camera. I could create a billboard matrix and use that, but there is a problem: that would create the water effect again. What I need to do is move the mesh in steps. If the camera moves beyond an angle threshold, either left/right or up/down, then the mesh should be moved one step in the appropriate direction.

I decided to use spherical coordinates to calculate the horizontal and vertical angles of the camera. This works great until you reach the north or south poles. Once you cross the poles, the mesh is rotated 180 degrees.

Here is the reason why that happens:

The "vertical" angle, theta, ranges from 0 to pi, where 0 points down the -Y axis and pi points up the +Y axis. The "horizontal" angle, phi, ranges from 0 to 2pi, where 0 points along the +X axis, pi/2 points along the +Z axis, pi points along the -X axis, and 3pi/2 points along the -Z axis.

In the case when the camera passes over the north (+Y) pole, theta is at its peak (pi) but then starts decreasing as you continue to the other side, whereas phi essentially gets +pi added to it (or -pi, depending upon where the pole is crossed). That is why the object is being rotated 180 degrees (pi radians).
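
For reference, the angles come straight out of the camera position, something like this C# sketch (using the conventions above; the variable names are made up):

Vector3 p = cameraPosition;
float r = p.Length();
float theta = (float)Math.Acos(-p.Y / r);       // 0 at the -Y pole, pi at the +Y pole
float phi   = (float)Math.Atan2(p.Z, p.X);      // 0 along +X, pi/2 along +Z
if (phi < 0f)
{
    phi += MathHelper.TwoPi;                    // wrap into the [0, 2*pi) range
}
// near a pole, a tiny camera movement can make phi jump by pi, which is exactly the 180-degree flip described above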

Whew!

The problem is that I don't know how to fix this rotation. I have been tinkering around with it for the last several days to no avail. I certainly hope I come up with a solution soon. I really want to continue work on my new LOD algorithm.

Until next time...

Sunday, July 20, 2008

LOD Algorithm

I thought I would explain the algorithm I used for the planetary LOD in more detail.

When you first start the program, two meshes are generated: a cone and a ring. Both of these meshes are configurable at creation. For a cone, you can define how many "slices" it is broken into (imagine cutting a pie into equal slices). For a ring, you can define how many slices it has as well as how many inner rings. For example, you can have a ring mesh that is made up of five inner rings and split into forty-five slices. Both of these meshes are scaled to a unit sphere (it's actually just a hemisphere) which is centered at the origin and "points" out along the -Z axis.

At the lowest level of detail, the planet is just a cone. If the level is increased, the cone is shrunk in half and a ring is drawn attached to the bottom of the cone. If the level is increased again, the cone is shrunk in half again, the existing ring is shrunk in half, and another new ring is attached to the bottom of the existing ring. This process continues until the highest level of detail is reached.

The resizing of the meshes is all done in the vertex shader. The terrain height is also calculated in the vertex shader via 8 octaves of fBm Perlin noise. This means that after the mesh generation in the beginning, the CPU does practically nothing. It just keeps track of what the current level is, and updates the shader parameter as necessary.

The hemisphere is updated every frame to be centered at the camera by simply calculating a billboard matrix as the world matrix of the hemisphere.
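
For reference, XNA has a built-in helper for this. One way to build such a world matrix is shown below (a sketch; the argument choices are illustrative rather than exactly what I pass):

Matrix world = Matrix.CreateBillboard(planetCenter, cameraPosition, Vector3.Up, null);
// 'world' is then used as the hemisphere's world matrix when drawing; planetCenter and cameraPosition are assumed variables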

Well, that should give you a decent grasp of how my LOD algorithm functions. As I mentioned in my previous post, it's not perfect because re-centering the hemisphere every frame produces a nasty-looking "water" effect. I'm already at work on fixes for that, so if I get that working, I'll post what I did.

I'll leave you with a quick video of my LOD algorithm at work.

Saturday, July 19, 2008

Planetary LOD

I have been spending the last week writing my own implementation of an LOD system for a planet. It is inspired by Spherical Clipmaps, but it is not an actual implementation of them.

As I said, I began writing it last week (exactly a week ago today) and I finally have it all up and running (using real-time 3D Perlin Noise in the Vertex Shader and texture blending, no less!). Unfortunately, I have it running too smoothly, if you can believe that. I have the vertices being updated and centered under the camera every frame. This leads to the terrain looking similar to flowing water as you fly around the planet. As long as the camera is stationary, everything looks great. You can spin the camera around and look at all of the terrain around you. As soon as you start moving though, the "water" effect is very apparent.

I first tried to fix it by having the terrain position updated only once a second, but that looked terrible. So, I ended up saving the old camera position and then calculating the angle between the current camera position and the old position each frame. If the angle becomes greater than a threshold (I used something like Pi/64, yes that's sixty-four!) then I update the terrain position. This looks a lot better than the time-based updating and it removes the "water" effect. Unfortunately, it also makes the terrain very "poppy". For example, if you see a mountain in the distance and you start to fly toward it, you suddenly see more detail "pop" in as you get closer.

I think what I have now is pretty decent, but I want to have very smooth terrain, with no popping and no water effect. So, I'm kind of back to the drawing board trying to think of a system that fixes both of those problems. Not everything I have now is throw-away code though. I think I should be able to carry over a lot from this project to the updated LOD system.

Friday, July 18, 2008

First Blog Entry

Greetings! Here is my first blog post .... um ever!

I will attempt to use this space to talk about some of the graphics projects that I am working on. Hopefully I actually stay devoted to it. I always seem to slack off on things similar to this.

It's your duty as a reader to reprimand me if I fall behind on updates!

Until next time...