This is the result of some experimentation with an antialiasing technique based on finding edges in an input image. In particular, we take the color input and compute from it a gradient direction vector at each pixel, which then determines which samples the current pixel is blurred with. You can download the demo app here (requires DX10); sadly the imagery is a bit rudimentary - I thought about including some screenshots from a popular game or something but then thought better of it.
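To make the idea concrete, here is a minimal CPU-side sketch of the per-pixel operation, assuming a Sobel filter over grayscale luminance for the gradient; the struct, function names, threshold, and three-tap blend along the edge direction are illustrative choices, not the demo's actual shader code.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<float> lum;   // grayscale luminance, row-major

    // Clamped lookup so border pixels can sample "outside" the image.
    float at(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return lum[y * width + x];
    }
};

// For each pixel: estimate the luminance gradient with a Sobel filter, and
// if the gradient is strong enough, blend the pixel with two samples taken
// along the edge direction (perpendicular to the gradient).
std::vector<float> edgeAA(const Image& img, float edgeThreshold = 0.1f) {
    std::vector<float> out = img.lum;
    for (int y = 0; y < img.height; ++y) {
        for (int x = 0; x < img.width; ++x) {
            float gx = (img.at(x + 1, y - 1) + 2 * img.at(x + 1, y) + img.at(x + 1, y + 1))
                     - (img.at(x - 1, y - 1) + 2 * img.at(x - 1, y) + img.at(x - 1, y + 1));
            float gy = (img.at(x - 1, y + 1) + 2 * img.at(x, y + 1) + img.at(x + 1, y + 1))
                     - (img.at(x - 1, y - 1) + 2 * img.at(x, y - 1) + img.at(x + 1, y - 1));
            float mag = std::sqrt(gx * gx + gy * gy);
            if (mag < edgeThreshold)
                continue;                       // no edge here, leave the pixel alone

            // Edge direction: rotate the gradient 90 degrees and normalize.
            float ex = -gy / mag;
            float ey =  gx / mag;

            // Average the pixel with its neighbors one step along the edge
            // in each direction (rounded to the nearest texel).
            float blurred = (img.at(x, y)
                           + img.at(x + (int)std::lround(ex), y + (int)std::lround(ey))
                           + img.at(x - (int)std::lround(ex), y - (int)std::lround(ey))) / 3.0f;
            out[y * img.width + x] = blurred;
        }
    }
    return out;
}
```

Blurring along the edge rather than across it is the whole point: it softens the staircase without smearing detail perpendicular to the edge the way a plain box blur does.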

The controls are fairly basic:

R - View raw, unprocessed image
A - View edge antialiasing
B - View simple blur with 4 neighbors
E - View edge detection output
Left/Right Arrow - Cycle test image

I was prompted to do this after reading this recent thread over at GameDev. Unfortunately, I find the results disappointing overall; in its current incarnation it doesn't feel all that much better than a simple blur, particularly considering the additional cost. In an actual game environment I might be able to improve the quality somewhat by using the additional information afforded by the depth buffer.
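As a rough sketch of what "using the depth buffer" could look like, one option is to flag a pixel as an edge when either the color gradient or the depth discontinuity is large; the depth test helps catch geometric silhouettes that have little color contrast. All names and thresholds below are hypothetical, and this is not something the demo currently does.

```cpp
#include <algorithm>
#include <cmath>

// Returns true if the pixel should be treated as an edge.
// colorGradMag: magnitude of the color/luminance gradient (e.g. from Sobel).
// depth, depthLeft, depthRight, depthUp, depthDown: depth-buffer samples at
// the pixel and its four neighbors.
bool isEdge(float colorGradMag,
            float depth, float depthLeft, float depthRight,
            float depthUp, float depthDown,
            float colorThreshold = 0.1f, float depthThreshold = 0.01f)
{
    // Largest depth step to any 4-neighbor; a big step usually indicates a
    // geometric silhouette even when the shading is nearly uniform.
    float depthStep = std::max(
        std::max(std::fabs(depth - depthLeft), std::fabs(depth - depthRight)),
        std::max(std::fabs(depth - depthUp),   std::fabs(depth - depthDown)));

    return colorGradMag > colorThreshold || depthStep > depthThreshold;
}
```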

For now I think I will move on and take a stab at Morphological Antialiasing and figure out what these guys are doing. In addition to the Intel paper, I came across this small bit of information on an existing GPU implementation. I have a few ideas to try out now - at the very least it should provide for some interesting experimentation.

As always, please contact me if you have issues running the demo. My little toy graphics engine is still in its infancy, so I expect it won't work perfectly, and I would love to know when it breaks.